Package ‘fastSOM’
October 13, 2022
Type Package
Version 1.0.1
Date 2019-11-19
Title Fast Calculation of Spillover Measures
Imports parallel
Description Functions for computing spillover measures, especially spillover
tables and spillover indices, as well as their average, minimal, and maximal
values.
License GPL (>= 2)
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2019-11-19 12:40:06 UTC
R topics documented:
fastSOM-package
soi
soi_avg_est
soi_avg_exact
soi_from_sot
sot
sot_avg_est
sot_avg_exact
fastSOM-package Fast Calculation of Spillover Measures
Description
This package comprises various functions for computing spillover measures, especially spillover
tables and spillover indices as proposed by Diebold and Yilmaz (2009) as well as their estimated
and exact average, minimal, and maximal values.
Details
Package: fastSOM
Type: Package
Version: 1.0.0
Date: 2016-07-20
License: GPL (>=2)
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] Diebold, <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers, with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers? Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1): 172-179.
soi Calculation of the Spillover Index
Description
This function calculates the spillover index as proposed by Diebold and Yilmaz (2009, see References).
Usage
soi(Sigma, A, ncores = 1, ...)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the same dimension as Sigma, or a list thereof.
ncores Number of cores, only relevant if Sigma is a list of matrices. Missing ncores or
ncores=1 means no parallelization (just one core is used). ncores=0 means au-
tomatic detection of the number of available cores. Any other integer determines
the maximal number of cores to be used.
... Further arguments, especially perm which is used to reorder variables. If perm is
missing, then the original ordering of the model variables will be used. If perm
is a permutation of 1:N, then the spillover index for the model with variables
reordered according to perm will be calculated.
Details
The spillover index was introduced by Diebold and Yilmaz in 2009 (see References). It is based on
a variance decomposition of the forecast error variances of an N-dimensional MA(∞) process. The
underlying idea is to decompose the forecast error of each variable into own variance shares and
cross variance shares. The latter are interpreted as contributions of shocks of one variable to the
error variance in forecasting another variable (see also sot). The spillover index then is a number
between 0 and 100, describing the relative amount of forecast error variances that can be explained
by shocks coming from other variables in the model.
The typical application of the ’list’ version of soi is a rolling windows approach when Sigma and
A are lists representing the corresponding quantities at different points in time (rolling windows).
Value
Returns a single numeric value or a list thereof.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers, with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers? Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1): 172-179.
See Also
fastSOM-package, sot
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N), nrow = N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H), dim = c(N,N,H))
# calculate the spillover index
soi(Sigma, A)
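As described under Arguments, the perm argument can be passed through ... to compute the index for a reordering of the model variables; a minimal sketch continuing the example above:

```r
# continuing the example above: spillover index after reordering the
# model variables; perm must be a permutation of 1:N
perm <- sample(1:N)
soi(Sigma, A, perm = perm)
```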
soi_avg_est Estimation of Average, Minimal, and Maximal Spillover Index
Description
Calculates an estimate of the average, the minimum, and the maximum spillover index based on
different permutations.
Usage
soi_avg_est(Sigma, A, ncores = 1, ...)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the same dimension as Sigma, or a list thereof.
ncores Number of cores. Missing ncores or ncores=1 means no parallelization (just
one core is used). ncores=0 means automatic detection of the number of avail-
able cores. Any other integer determines the maximal number of cores to be
used.
... Further arguments, especially perms which is used to reorder variables. If perms
is missing, then 10,000 randomly created permutations of 1:N will be used as
reorderings of the model variables. If perms is defined, it has to be either a
matrix with each column being a permutation of 1:N, or, alternatively, an integer
value defining the number of randomly created permutations.
Details
The spillover index introduced by Diebold and Yilmaz (2009) (see References) depends on the or-
dering of the model variables. While soi_avg_exact provides a fast algorithm for exact calculation
of average, minimum, and maximum of the spillover index over all permutations, there might be
reasons to prefer to estimate these quantities using a limited number of permutations (mainly to
save time when N is large). This is exactly what soi_avg_est does.
The typical application of the ’list’ version of soi_avg_est is a rolling windows approach when
Sigma and A are lists representing the corresponding quantities at different points in time (rolling
windows).
Value
The ’single’ version returns a list containing the estimated average, minimal, and maximal spillover
index as well as permutations that generated the minimal and maximal value. The ’list’ version
returns a list consisting of three vectors (the average, minimal, and maximal spillover index values)
and two matrices (the columns of which are the permutations generating the minima and maxima).
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers, with application to global equity markets, Economic Journal 119(534): 158-171.
[2] Kloessner, S. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers? Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1): 172-179.
See Also
fastSOM-package, soi_avg_exact
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N), nrow = N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H), dim = c(N,N,H))
# calculate estimates of the average, minimal,
# and maximal spillover index and determine the corresponding ordering
# of the model variables
soi_avg_est(Sigma, A)
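The perms argument described above can also be supplied explicitly; a short sketch continuing the example (the value 500 and the 100-column matrix are arbitrary illustrations):

```r
# use 500 random permutations instead of the default 10,000
soi_avg_est(Sigma, A, perms = 500)
# or supply explicit orderings: each column must be a permutation of 1:N
perms <- replicate(100, sample(1:N))
soi_avg_est(Sigma, A, perms = perms)
```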
soi_avg_exact Exact Calculation of Average, Minimal, and Maximal Spillover Index
Description
Calculates the Average, Minimal, and Maximal Spillover Index exactly.
Usage
soi_avg_exact(Sigma, A, ncores = 1)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the same dimension as Sigma, or a list thereof.
ncores Number of cores. Missing ncores or ncores=1 means no parallelization (just
one core is used). ncores=0 means automatic detection of the number of avail-
able cores. Any other integer determines the maximal number of cores to be
used.
Details
The spillover index introduced by Diebold and Yilmaz (2009) (see References) depends on the
ordering of the model variables. While soi_avg_est provides an algorithm to estimate average, minimum, and maximum of the spillover index over all permutations, soi_avg_exact calculates these quantities exactly. Notice, however, that for large dimensions N, this might be quite
time- as well as memory-consuming. If only the exact average of the spillover index is wanted,
soi_from_sot(sot_avg_exact(Sigma,A,ncores)$Average) should be used.
The typical application of the ’list’ version of soi_avg_exact is a rolling windows approach when
Sigma and A are lists representing the corresponding quantities at different points in time (rolling
windows).
Value
The ’single’ version returns a list containing the exact average, minimal, and maximal spillover
index as well as permutations that generated the minimal and maximal value. The ’list’ version
returns a list consisting of three vectors (the average, minimal, and maximal spillover index values)
and two matrices (the columns of which are the permutations generating the minima and maxima).
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers, with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers? Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1): 172-179.
See Also
fastSOM-package, soi_avg_est
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N), nrow = N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H), dim = c(N,N,H))
# calculate the exact average, minimal,
# and maximal spillover index and determine the corresponding ordering
# of the model variables
soi_avg_exact(Sigma, A)
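If only the exact average is of interest, the shortcut mentioned in the Details section avoids also computing the minimal and maximal index values; continuing the example above:

```r
# exact average spillover index via the spillover-table average
soi_from_sot(sot_avg_exact(Sigma, A, ncores = 1)$Average)
```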
soi_from_sot Calculation of the Spillover Index for a given Spillover Table
Description
Given a spillover table, this function calculates the corresponding spillover index.
Usage
soi_from_sot(input_table)
Arguments
input_table Either a spillover table or a list thereof.
Details
The spillover index was introduced by Diebold and Yilmaz in 2009 (see References). It is based on
a variance decomposition of the forecast error variances of an N-dimensional MA(∞) process. The
underlying idea is to decompose the forecast error of each variable into own variance shares and
cross variance shares. The latter are interpreted as contributions of shocks of one variable to the
error variance in forecasting another variable (see also sot). The spillover index then is a number
between 0 and 100, describing the relative amount of forecast error variances that can be explained
by shocks coming from other variables in the model.
The typical application of the ’list’ version of soi_from_sot is a rolling windows approach when
input_table is a list representing the corresponding spillover tables at different points in time
(rolling windows).
Value
Numeric value or a list thereof.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers, with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers? Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1): 172-179.
See Also
fastSOM-package, soi, sot
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N), nrow = N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H), dim = c(N,N,H))
# calculate spillover table
SOT <- sot(Sigma,A)
# calculate spillover index from spillover table
soi_from_sot(SOT)
sot Calculation of Spillover Tables
Description
This function calculates an N×N-dimensional spillover table.
Usage
sot(Sigma, A, ncores = 1, ...)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the same dimension as Sigma, or a list thereof.
ncores Number of cores, only relevant if Sigma is a list of matrices. Missing ncores or
ncores=1 means no parallelization (just one core is used). ncores=0 means au-
tomatic detection of the number of available cores. Any other integer determines
the maximal number of cores to be used.
... Further arguments, especially perm which is used to reorder variables. If perm is
missing, then the original ordering of the model variables will be used. If perm
is a permutation of 1:N, then the spillover index for the model with variables
reordered according to perm will be calculated.
Details
The (i, j)-entry of a spillover table represents the relative contribution of shocks in variable j (the
column variable) to the forecasting error variance of variable i (the row variable). Hence, off-diagonal values are interpreted as spillovers, while the own variance shares appear on the diagonal.
An overall spillover measure is given by soi.
The typical application of the ’list’ version of sot is a rolling windows approach when Sigma and
A are lists representing the corresponding quantities at different points in time (rolling windows).
Value
Matrix, or a list thereof, of dimension N×N with non-negative entries summing up to 100 for each
row.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers, with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers? Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1): 172-179.
See Also
fastSOM-package, soi
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N), nrow = N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H), dim = c(N,N,H))
# calculate spillover table
sot(Sigma,A)
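A quick sanity check of the Value description above (entries are non-negative and each row sums to 100, up to floating-point error):

```r
# verify the row-sum property of the spillover table
SOT <- sot(Sigma, A)
stopifnot(all(SOT >= 0), all(abs(rowSums(SOT) - 100) < 1e-8))
```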
sot_avg_est Estimation of the Average, Minimal, and Maximal Entries of a
Spillover Table
Description
Calculates estimates of the average, minimal, and maximal entries of a spillover table.
Usage
sot_avg_est(Sigma, A, ncores = 1, ...)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the same dimension as Sigma, or a list thereof.
ncores Number of cores. Missing ncores or ncores=1 means no parallelization (just
one core is used). ncores=0 means automatic detection of the number of avail-
able cores. Any other integer determines the maximal number of cores to be
used.
... Further arguments, especially perms which is used to reorder variables. If perms
is missing, then 10,000 randomly created permutations of 1:N will be used as
reorderings of the model variables. If perms is defined, it has to be either a
matrix with each column being a permutation of 1:N, or, alternatively, an integer
value defining the number of randomly created permutations.
Details
The spillover tables introduced by Diebold and Yilmaz (2009) (see References) depend on the or-
dering of the model variables. While sot_avg_exact provides a fast algorithm for exact calculation
of average, minimum, and maximum of the spillover table over all permutations, there might be rea-
sons to prefer to estimate these quantities using a limited number of permutations (mainly to save
time when N is large). This is exactly what sot_avg_est does.
The typical application of the ’list’ version of sot_avg_est is a rolling windows approach when
Sigma and A are lists representing the corresponding quantities at different points in time (rolling
windows).
Value
The ’single’ version returns a list containing the estimated average, minimal, and maximal values for the
spillover table. The ’list’ version returns a list with three elements (Average, Minimum, Maximum)
which themselves are lists of the corresponding tables.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers, with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers? Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1): 172-179.
See Also
fastSOM-package, sot_avg_exact
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N), nrow = N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H), dim = c(N,N,H))
# calculate estimates of the average, minimal,
# and maximal entries within a spillover table
sot_avg_est(Sigma, A)
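The ’list’ version mentioned in the Details section can be sketched as follows for a rolling-windows setting (here the two windows reuse the same Sigma and A purely for illustration):

```r
# 'list' version: one covariance matrix and one MA array per window
Sigma_list <- list(Sigma, Sigma)
A_list <- list(A, A)
sot_avg_est(Sigma_list, A_list, ncores = 1)
```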
sot_avg_exact Calculation of the Exact Values for Average, Minimal, and Maximal
Entries of a Spillover Table
Description
Calculates the exact values of the average, the minimum, and the maximum entries of a spillover table over all permutations.
Usage
sot_avg_exact(Sigma, A, ncores = 1)
Arguments
Sigma Either a covariance matrix or a list thereof.
A Either a 3-dimensional array with A[,,h] being MA coefficient matrices of the same dimension as Sigma, or a list thereof.
ncores Number of cores, only relevant for ’list’ version. In this case, missing ncores or
ncores=1 means no parallelization (just one core is used), ncores=0 means au-
tomatic detection of the number of available cores, any other integer determines
the maximal number of cores to be used.
Details
The spillover tables introduced by Diebold and Yilmaz (2009) (see References) depend on the
ordering of the model variables. While sot_avg_est provides an algorithm to estimate average, minimal, and maximal values of the spillover table over all permutations, sot_avg_exact calculates
these quantities exactly. Notice, however, that for large dimensions N , this might be quite time- as
well as memory-consuming.
The typical application of the ’list’ version of sot_avg_exact is a rolling windows approach when
Sigma and A are lists representing the corresponding quantities at different points in time (rolling
windows).
Value
The ’single’ version returns a list containing the exact average, minimal, and maximal values for the
spillover table. The ’list’ version returns a list with three elements (Average, Minimum, Maximum)
which themselves are lists of the corresponding tables.
Author(s)
<NAME> (<<EMAIL>>),
with contributions by <NAME> (<<EMAIL>>)
References
[1] <NAME>. and <NAME>. (2009): Measuring financial asset return and volatility spillovers, with application to global equity markets, Economic Journal 119(534): 158-171.
[2] <NAME>. and <NAME>. (2012): Exploring All VAR Orderings for Calculating Spillovers? Yes, We Can! - A Note on Diebold and Yilmaz (2009), Journal of Applied Econometrics 29(1): 172-179.
See Also
fastSOM-package, sot_avg_est
Examples
# randomly generate a positive definite matrix Sigma of dimension N
N <- 10
Sigma <- crossprod(matrix(rnorm(N*N), nrow = N))
# randomly generate coefficient matrices
H <- 10
A <- array(rnorm(N*N*H), dim = c(N,N,H))
# calculate the exact average, minimal,
# and maximal entries within a spillover table
sot_avg_exact(Sigma, A)
# The shadow package†
Footnote †: This manual corresponds to shadow.sty v1.3, dated 19 February 2003.
<NAME>
<EMAIL>
19 February 2003
The command \shabox has the same meaning as the LaTeX command \fbox, except that a "shadow" is added to the bottom and the right side of the box. It computes the right dimension of the box even if the text spans more than one line; in this case a warning message is given.
There are three parameters governing:
1. the width of the lines delimiting the box: \sboxrule
2. the separation between the edge of the box and its contents: \sboxsep
3. the dimension of the shadow: \sdim
**Syntax:**
\shabox{{text}}
where {text} is the text to be put in the framed box. It can be an entire paragraph.
Adapted from the file dropshadow.tex by <EMAIL>.
V1.1: Works in a double column environment.
V1.2: When there is an in-line shadow box, it is centered on the line (in V1.1 the box was aligned with the baseline). (Courtesy of <NAME>)
V1.3: Added a number of missing % signs; no other cleanup done (FMi).
Package ‘currr’
February 17, 2023
Title Apply Mapping Functions in Frequent Saving
Version 0.1.2
Description Implementations of the family of map() functions with frequent saving of the intermediate results. The contained functions let you start the evaluation of the iterations where you stopped (reading the already evaluated ones from cache), and work with the currently evaluated iterations while remaining ones are running in a background job. Parallel computing is also easier with the workers parameter.
License MIT + file LICENSE
URL https://github.com/MarcellGranat/currr
BugReports https://github.com/MarcellGranat/currr/issues
Depends R (>= 4.1.0)
Imports dplyr, tidyr, readr, stringr, broom, pacman, tibble,
clisymbols, job, rstudioapi, scales, parallel, purrr, crayon,
stats
Encoding UTF-8
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-4036-1500>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-02-17 12:20:20 UTC
R topics documented:
cp_map
cp_map_chr
cp_map_dbl
cp_map_dfc
cp_map_dfr
cp_map_lgl
remove_currr_cache
saving_map
saving_map_nodot
cp_map Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic vector and returning an object of the same length as the input. cp_map functions work exactly the same way, but create a secret folder in your current working directory and save the results when they reach a given checkpoint. This way, if you rerun the code, it reads the result from the cache folder and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify it, then cp_map uses the name of the function combined with the name of .x. This is dangerous, since this generated name can appear multiple times in your code. Also, changing .x will result in a rerun of the code, which you may want to avoid. (If a subset of .x matches the cached one and the function is the same, then elements of this subset won't be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console shows the intermediate results (default 1). If its value is between 0 and 1, then it is taken as the proportion of iterations to wait (for example, if the length of .x equals 100 and you set it to 0.5, then you get back the result after 50 iterations). Set to Inf to get back the results only after full evaluation. If its value is not equal to Inf, then evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are saved (default = 100).
• workers: Number of CPU cores to use (the parallel package is called in the background). Set to 1 (default) to avoid parallel computing.
• fill: Whether the length of a not fully evaluated result should be the same as that of .x (default TRUE).
You can also set these options with options(currr.n_checkpoint = 200). Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length.
Value
A list.
See Also
Other map variants: cp_map_chr(), cp_map_dbl(), cp_map_dfc(), cp_map_dfr(), cp_map_lgl()
Examples
# Run them in the console!
# (functions need read and write access to your working directory, and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = 2, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = 2, name = "iris_mean")
remove_currr_cache()
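The cp_options entries described above can be combined in one list; a sketch building on the avg_n example (the particular settings are arbitrary illustrations):

```r
# return after half of the iterations (the rest run in a background job),
# save intermediate results at 20 checkpoints, and use 2 workers
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = 2,
       name = "iris_mean_opts",
       cp_options = list(wait = 0.5, n_checkpoint = 20, workers = 2))
remove_currr_cache()
```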
cp_map_chr Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic vector and returning an object of the same length as the input. cp_map functions work exactly the same way, but create a secret folder in your current working directory and save the results when they reach a given checkpoint. This way, if you rerun the code, it reads the result from the cache folder and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_chr(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify it, then cp_map uses the name of the function combined with the name of .x. This is dangerous, since this generated name can appear multiple times in your code. Also, changing .x will result in a rerun of the code, which you may want to avoid. (If a subset of .x matches the cached one and the function is the same, then elements of this subset won't be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console shows the intermediate results (default 1). If its value is between 0 and 1, then it is taken as the proportion of iterations to wait (for example, if the length of .x equals 100 and you set it to 0.5, then you get back the result after 50 iterations). Set to Inf to get back the results only after full evaluation. If its value is not equal to Inf, then evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are saved (default = 100).
• workers: Number of CPU cores to use (the parallel package is called in the background). Set to 1 (default) to avoid parallel computing.
• fill: Whether the length of a not fully evaluated result should be the same as that of .x (default TRUE).
You can also set these options with options(currr.n_checkpoint = 200). Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length.
Value
A character vector.
See Also
Other map variants: cp_map_dbl(), cp_map_dfc(), cp_map_dfr(), cp_map_lgl(), cp_map()
Examples
# Run them in the console!
# (functions need read and write access to your working directory, and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
cp_map_dbl Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic vector and returning an object of the same length as the input. cp_map functions work exactly the same way, but create a secret folder in your current working directory and save the results when they reach a given checkpoint. This way, if you rerun the code, it reads the result from the cache folder and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_dbl(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify it, then cp_map uses the name of the function combined with the name of .x. This is dangerous, since this generated name can appear multiple times in your code. Also, changing .x will result in a rerun of the code, which you may want to avoid. (If a subset of .x matches the cached one and the function is the same, then elements of this subset won't be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console shows the intermediate results (default 1). If its value is between 0 and 1, then it is taken as the proportion of iterations to wait (for example, if the length of .x equals 100 and you set it to 0.5, then you get back the result after 50 iterations). Set to Inf to get back the results only after full evaluation. If its value is not equal to Inf, then evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are saved (default = 100).
• workers: Number of CPU cores to use (the parallel package is called in the background). Set to 1 (default) to avoid parallel computing.
• fill: Whether the length of a not fully evaluated result should be the same as that of .x (default TRUE).
You can also set these options with options(currr.n_checkpoint = 200). Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length.
Value
A numeric vector.
See Also
Other map variants: cp_map_chr(), cp_map_dfc(), cp_map_dfr(), cp_map_lgl(), cp_map()
Examples
# Run them in the console!
# (functions need read and write access to your working directory, and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
cp_map_dfc Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic
vector and returning an object of the same length as the input. cp_map functions work exactly the
same way, but create a secret folder in your current working directory and save the results when they
reach a given checkpoint. This way, if you rerun the code, it reads the result from the cache folder
and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_dfc(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify one, then
cp_map uses the name of the function combined with the name of .x. This is
dangerous, since this generated name can appear multiple times in your code.
Also, changing .x will result in a rerun of the code, which you may want to
avoid. (If a subset of .x matches the cached one and the function is the same,
then elements of this subset won’t be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console
shows the intermediate results (default 1). If its value is between 0 and
1, it is taken as the proportion of iterations to wait (for example, if the
length of .x equals 100 and you set it to 0.5, you get back the result after
50 iterations). Set to Inf to get back the results only after full evaluation.
If its value is not equal to Inf, then evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are
saved (default = 100).
• workers: Number of CPU cores to use (the parallel package is called in
the background). Set to 1 (default) to avoid parallel computing.
• fill: Should the length of the result be the same as .x when you get back
a not fully evaluated result? (default TRUE)
You can set these options also with options(currr.n_checkpoint = 200).
Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length
Value
A tibble.
See Also
Other map variants: cp_map_chr(), cp_map_dbl(), cp_map_dfr(), cp_map_lgl(), cp_map()
Examples
# Run them on console!
# (functions need writing and reading access to your working directory and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
cp_map_dfr Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic
vector and returning an object of the same length as the input. cp_map functions work exactly the
same way, but create a secret folder in your current working directory and save the results when they
reach a given checkpoint. This way, if you rerun the code, it reads the result from the cache folder
and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_dfr(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify one, then
cp_map uses the name of the function combined with the name of .x. This is
dangerous, since this generated name can appear multiple times in your code.
Also, changing .x will result in a rerun of the code, which you may want to
avoid. (If a subset of .x matches the cached one and the function is the same,
then elements of this subset won’t be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console
shows the intermediate results (default 1). If its value is between 0 and
1, it is taken as the proportion of iterations to wait (for example, if the
length of .x equals 100 and you set it to 0.5, you get back the result after
50 iterations). Set to Inf to get back the results only after full evaluation.
If its value is not equal to Inf, then evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are
saved (default = 100).
• workers: Number of CPU cores to use (the parallel package is called in
the background). Set to 1 (default) to avoid parallel computing.
• fill: Should the length of the result be the same as .x when you get back
a not fully evaluated result? (default TRUE)
You can set these options also with options(currr.n_checkpoint = 200).
Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length
Value
A tibble.
See Also
Other map variants: cp_map_chr(), cp_map_dbl(), cp_map_dfc(), cp_map_lgl(), cp_map()
Examples
# Run them on console!
# (functions need writing and reading access to your working directory and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
cp_map_lgl Wrapper function of purrr::map. Apply a function to each element
of a vector, but save the intermediate data after a given number of
iterations.
Description
The map functions transform their input by applying a function to each element of a list or atomic
vector and returning an object of the same length as the input. cp_map functions work exactly the
same way, but create a secret folder in your current working directory and save the results when they
reach a given checkpoint. This way, if you rerun the code, it reads the result from the cache folder
and starts to evaluate where you finished.
• cp_map() always returns a list.
• map_lgl(), map_dbl() and map_chr() return an atomic vector of the indicated type (or die
trying). For these functions, .f must return a length-1 vector of the appropriate type.
Usage
cp_map_lgl(.x, .f, ..., name = NULL, cp_options = list())
Arguments
.x A list or atomic vector.
.f A function, specified in one of the following ways:
• A named function, e.g. mean.
• An anonymous function, e.g. \(x) x + 1 or function(x) x + 1.
• A formula, e.g. ~ .x + 1. You must use .x to refer to the first argument.
Only recommended if you require backward compatibility with older ver-
sions of R.
... Additional arguments passed on to the mapped function.
name Name for the subfolder in the cache folder. If you do not specify one, then
cp_map uses the name of the function combined with the name of .x. This is
dangerous, since this generated name can appear multiple times in your code.
Also, changing .x will result in a rerun of the code, which you may want to
avoid. (If a subset of .x matches the cached one and the function is the same,
then elements of this subset won’t be evaluated, but rather read from the cache.)
cp_options Options for the evaluation: wait, n_checkpoint, workers, fill.
• wait: An integer specifying after how many iterations the console
shows the intermediate results (default 1). If its value is between 0 and
1, it is taken as the proportion of iterations to wait (for example, if the
length of .x equals 100 and you set it to 0.5, you get back the result after
50 iterations). Set to Inf to get back the results only after full evaluation.
If its value is not equal to Inf, then evaluation runs in a background job.
• n_checkpoint: Number of checkpoints at which intermediate results are
saved (default = 100).
• workers: Number of CPU cores to use (the parallel package is called in
the background). Set to 1 (default) to avoid parallel computing.
• fill: Should the length of the result be the same as .x when you get back
a not fully evaluated result? (default TRUE)
You can set these options also with options(currr.n_checkpoint = 200).
Additional options: currr.unchanged_message (TRUE/FALSE), currr.progress_length
Value
A logical vector.
See Also
Other map variants: cp_map_chr(), cp_map_dbl(), cp_map_dfc(), cp_map_dfr(), cp_map()
Examples
# Run them on console!
# (functions need writing and reading access to your working directory and they also print)
avg_n <- function(.data, .col, x) {
Sys.sleep(.01)
.data |>
dplyr::pull({{ .col }}) |>
(\(m) mean(m) * x) ()
}
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
# same function, read from cache
cp_map(.x = 1:10, .f = avg_n, .data = iris, .col = Sepal.Length, name = "iris_mean")
remove_currr_cache()
remove_currr_cache Remove currr’s intermediate data from the folder.
Description
Remove currr’s intermediate data from the folder.
Usage
remove_currr_cache(list = NULL)
Arguments
list A character vector specifying the names of the caches you want to remove (files
in the .currr.data folder). If empty (default), all caches will be removed.
Value
No return value, called for side effects
saving_map Run a map with the function, but saves after a given number of exe-
cutions. This is an internal function; you are not supposed to use it
manually, but it can be called from a background job only if exported.
Description
Run a map with the function, but saves after a given number of executions. This is an internal
function; you are not supposed to use it manually, but it can be called from a background job only if
exported.
Usage
saving_map(.ids, .f, name, n_checkpoint = 100, currr_folder, ...)
Arguments
.ids Placement of .x to work with.
.f Called function.
name Name for saving.
n_checkpoint Number of checkpoints.
currr_folder Folder where cache files are stored.
... Additional arguments passed on to the mapped function.
Value
No return value, called for side effects
saving_map_nodot Run a map with the function, but saves after a given number of exe-
cutions. This is an internal function; you are not supposed to use it
manually, but it can be called from a background job only if exported.
This function differs from saving_map, since it does not have a ... input.
This is necessary because job::job fails if ... is not provided for the
cp_map call.
Description
Run a map with the function, but saves after a given number of executions. This is an internal
function; you are not supposed to use it manually, but it can be called from a background job only if
exported. This function differs from saving_map, since it does not have a ... input. This is necessary
because job::job fails if ... is not provided for the cp_map call.
Usage
saving_map_nodot(.ids, .f, name, n_checkpoint = 100, currr_folder)
Arguments
.ids Placement of .x to work with.
.f Called function.
name Name for saving.
n_checkpoint Number of checkpoints.
currr_folder Folder where cache files are stored.
Value
No return value, called for side effects |
@types/system-task

[Installation](#installation)
===
> `npm install --save @types/system-task`
[Summary](#summary)
===
This package contains type definitions for system-task (<https://github.com/leocwlam/system-task>).
[Details](#details)
===
Files were exported from <https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/system-task>.
[index.d.ts](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/system-task/index.d.ts)
---
```
declare class SystemTask {
type: string;
constructor(taskType?: string, isAsyncProcess?: boolean, logMethod?: any);
/**
* @async
*/
log(type: string, message: string, detail?: any): void;
/**
* @async
*/
insertPreprocessItemsHandler(task: SystemTask): Promise<any>;
/**
* @async
*/
preprocessHandler(task: SystemTask, preProcessItem: any): Promise<any>;
/**
* @async
*/
processHandler(task: SystemTask, processItem: any): Promise<any>;
/**
* @async
*/
cleanupHandler(task: SystemTask, cleanupItems: any[]): Promise<any>;
isValidProcess(): void;
/**
* @async
*/
start(): void;
}
declare function asyncProcess(items: any[], executeAsyncCall: any, task: SystemTask, errors: any[]): Promise<any>;
declare function syncProcess(items: any[], executeSyncCall: any, task: SystemTask, errors: any[]): Promise<any>;
declare namespace SystemTask {
/**
* @async
*/
const SyncProcess: typeof syncProcess;
/**
* @async
*/
const AsyncProcess: typeof asyncProcess;
}
export = SystemTask;
```
### [Additional Details](#additional-details)
* Last updated: Wed, 18 Oct 2023 11:45:06 GMT
* Dependencies: none
[Credits](#credits)
===
These definitions were written by [<NAME>](https://github.com/leocwlam).
Crate re_memory
===
Run-time memory tracking and profiling.
See `AccountingAllocator` and `accounting_allocator`.
Re-exports
---
* `pub use accounting_allocator::AccountingAllocator;`
Modules
---
* accounting_allocator: Track allocations and memory use.
* util
Structs
---
* CountAndSize: Number of allocations and their total size.
* MemoryHistory: Tracks memory use over time.
* MemoryLimit
* MemoryUse
* RamLimitWarner
Functions
---
* total_ram_in_bytes: Amount of available RAM on this machine.
Struct re_memory::accounting_allocator::AccountingAllocator
===
```
pub struct AccountingAllocator<InnerAllocator> { /* private fields */ }
```
Install this as the global allocator to get memory usage tracking.
Use `set_tracking_callstacks` or `turn_on_tracking_if_env_var` to turn on memory tracking.
Collect the stats with `tracking_stats`.
Usage:
```
use re_memory::AccountingAllocator;
#[global_allocator]
static GLOBAL: AccountingAllocator<std::alloc::System> = AccountingAllocator::new(std::alloc::System);
```
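The core idea of an accounting allocator is to wrap an inner allocator and keep atomic counters that are bumped on every `alloc` and decremented on every `dealloc`. The sketch below is illustrative only, assuming nothing about `re_memory`'s internals; the type name `CountingAlloc` and the single `live_bytes` counter are inventions for this example.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering::Relaxed};

// Illustrative sketch: a wrapper allocator that keeps an atomic count of
// live bytes, in the spirit of AccountingAllocator. NOT the crate's source.
struct CountingAlloc<A> {
    inner: A,
    live_bytes: AtomicUsize,
}

unsafe impl<A: GlobalAlloc> GlobalAlloc for CountingAlloc<A> {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ptr = self.inner.alloc(layout);
        if !ptr.is_null() {
            // Count only successful allocations.
            self.live_bytes.fetch_add(layout.size(), Relaxed);
        }
        ptr
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        self.live_bytes.fetch_sub(layout.size(), Relaxed);
        self.inner.dealloc(ptr, layout);
    }
}

fn main() {
    // Exercise the wrapper directly, without installing it globally.
    let a = CountingAlloc { inner: System, live_bytes: AtomicUsize::new(0) };
    let layout = Layout::from_size_align(64, 8).unwrap();
    unsafe {
        let p = a.alloc(layout);
        assert_eq!(a.live_bytes.load(Relaxed), 64);
        a.dealloc(p, layout);
    }
    assert_eq!(a.live_bytes.load(Relaxed), 0);
}
```

A real implementation additionally overrides `alloc_zeroed` and `realloc` and, as the docs above note, may also record callstacks for large allocations.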
Implementations
---
### impl<InnerAllocator> AccountingAllocator<InnerAllocator>

#### pub const fn new(allocator: InnerAllocator) -> Self

Trait Implementations
---
### impl<InnerAllocator: Default> Default for AccountingAllocator<InnerAllocator>

#### fn default() -> AccountingAllocator<InnerAllocator>

Returns the “default value” for a type.

### impl<InnerAllocator: GlobalAlloc> GlobalAlloc for AccountingAllocator<InnerAllocator>

#### unsafe fn alloc(&self, layout: Layout) -> *mut u8

Allocate memory as described by the given `layout`.

#### unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8

Behaves like `alloc`, but also ensures that the contents are set to zero before being returned.

#### unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout)

Deallocate the block of memory at the given `ptr` pointer with the given `layout`.

#### unsafe fn realloc(&self, old_ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8

Shrink or grow a block of memory to the given `new_size` in bytes. The block is described by the given `old_ptr` pointer and `layout`.

Auto Trait Implementations
---
### impl<InnerAllocator> RefUnwindSafe for AccountingAllocator<InnerAllocator> where InnerAllocator: RefUnwindSafe
### impl<InnerAllocator> Send for AccountingAllocator<InnerAllocator> where InnerAllocator: Send
### impl<InnerAllocator> Sync for AccountingAllocator<InnerAllocator> where InnerAllocator: Sync
### impl<InnerAllocator> Unpin for AccountingAllocator<InnerAllocator> where InnerAllocator: Unpin
### impl<InnerAllocator> UnwindSafe for AccountingAllocator<InnerAllocator> where InnerAllocator: UnwindSafe
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T> Instrument for T

#### fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.

### impl<T, U> Into<U> for T where U: From<T>

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T, U> TryFrom<U> for T where U: Into<T>

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

### impl<T> WithSubscriber for T

#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
Module re_memory::accounting_allocator
===
Track allocations and memory use.
Structs
---
* AccountingAllocator: Install this as the global allocator to get memory usage tracking.
* TrackingStatistics
Functions
---
* global_allocs: Total number of live allocations, and the number of live bytes allocated as tracked by `AccountingAllocator`.
* is_tracking_callstacks: Are we doing (slightly expensive) tracking of the callstacks of large allocations?
* set_tracking_callstacks: Should we do (slightly expensive) tracking of the callstacks of large allocations?
* tracking_stats: Gather statistics from the live tracking, if enabled.
* turn_on_tracking_if_env_var: Turn on callstack tracking (slightly expensive) if a given env-var is set.
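The toggle functions above follow a common pattern: a process-wide atomic flag that other code (here, the allocator) reads on its hot path. A minimal, self-contained sketch of that pattern, not the crate's actual implementation (the flag name and env-var are placeholders):

```rust
use std::sync::atomic::{AtomicBool, Ordering::Relaxed};

// Process-wide flag: is callstack tracking enabled?
static TRACK_CALLSTACKS: AtomicBool = AtomicBool::new(false);

fn set_tracking_callstacks(on: bool) {
    TRACK_CALLSTACKS.store(on, Relaxed);
}

fn is_tracking_callstacks() -> bool {
    TRACK_CALLSTACKS.load(Relaxed)
}

// Turn tracking on only if the given environment variable is set.
fn turn_on_tracking_if_env_var(var: &str) {
    if std::env::var_os(var).is_some() {
        set_tracking_callstacks(true);
    }
}

fn main() {
    assert!(!is_tracking_callstacks());
    std::env::set_var("MY_TRACKING_VAR", "1"); // placeholder env-var name
    turn_on_tracking_if_env_var("MY_TRACKING_VAR");
    assert!(is_tracking_callstacks());
}
```

Relaxed ordering suffices here because the flag only gates optional bookkeeping; no other data is published through it.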
Struct re_memory::CountAndSize
===
```
pub struct CountAndSize {
pub count: usize,
pub size: usize,
}
```
Number of allocations and their total size.
Fields
---
`count: usize`Number of allocations.
`size: usize`Number of bytes.
Implementations
---
### impl CountAndSize
#### pub const ZERO: Self = _
#### pub fn add(&mut self, size: usize)
Add an allocation.
#### pub fn sub(&mut self, size: usize)
Remove an allocation.
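The `add`/`sub` pair described above is simple pairwise bookkeeping: each call adjusts the allocation count by one and the byte total by the given size. A standalone sketch of that behavior (re-implemented here for illustration, not the crate's source):

```rust
// Minimal re-implementation of the CountAndSize bookkeeping for illustration.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
pub struct CountAndSize {
    pub count: usize, // number of allocations
    pub size: usize,  // number of bytes
}

impl CountAndSize {
    pub const ZERO: Self = Self { count: 0, size: 0 };

    /// Add an allocation of `size` bytes.
    pub fn add(&mut self, size: usize) {
        self.count += 1;
        self.size += size;
    }

    /// Remove an allocation of `size` bytes.
    pub fn sub(&mut self, size: usize) {
        self.count -= 1;
        self.size -= size;
    }
}

fn main() {
    let mut cs = CountAndSize::ZERO;
    cs.add(128);
    cs.add(64);
    cs.sub(64);
    assert_eq!(cs, CountAndSize { count: 1, size: 128 });
}
```

Note that `sub` uses unchecked subtraction: calling it for an allocation that was never `add`ed underflows in debug builds, so callers must keep the pairs balanced.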
Trait Implementations
---
### impl Clone for CountAndSize

#### fn clone(&self) -> CountAndSize

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for CountAndSize

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for CountAndSize

#### fn default() -> CountAndSize

Returns the “default value” for a type.

### impl Hash for CountAndSize

#### fn hash<__H: Hasher>(&self, state: &mut __H)

Feeds this value into the given `Hasher`.

#### fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher, Self: Sized

Feeds a slice of this type into the given `Hasher`.

### impl PartialEq for CountAndSize

#### fn eq(&self, other: &CountAndSize) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl Copy for CountAndSize
### impl Eq for CountAndSize
### impl StructuralEq for CountAndSize
### impl StructuralPartialEq for CountAndSize
Auto Trait Implementations
---
### impl RefUnwindSafe for CountAndSize
### impl Send for CountAndSize
### impl Sync for CountAndSize
### impl Unpin for CountAndSize
### impl UnwindSafe for CountAndSize
Struct re_memory::MemoryHistory
===
```
pub struct MemoryHistory {
pub resident: History<i64>,
pub counted: History<i64>,
pub counted_gpu: History<i64>,
pub counted_store: History<i64>,
pub counted_blueprint: History<i64>,
}
```
Tracks memory use over time.
Fields
---
`resident: History<i64>`Bytes allocated by the application according to operating system.
Resident Set Size (RSS) on Linux, Android, Mac, iOS.
Working Set on Windows.
`counted: History<i64>`Bytes used by the application according to our own memory allocator’s accounting.
This can be smaller than `Self::resident` because our memory allocator may not return all the memory we free to the OS.
`counted_gpu: History<i64>`VRAM bytes used by the application according to its own accounting if a tracker was installed.
Values are usually a rough estimate, as the actual amount of VRAM used depends a lot on the specific GPU and driver; typically only raw buffer & texture sizes are accounted.
`counted_store: History<i64>`Bytes used by the datastore according to its own accounting.
`counted_blueprint: History<i64>`Bytes used by the blueprint store according to its own accounting.
Implementations
---
### impl MemoryHistory
#### pub fn is_empty(&self) -> bool
#### pub fn capture(
&mut self,
counted_gpu: Option<i64>,
counted_store: Option<i64>,
counted_blueprint: Option<i64>
)
Add data to history
Trait Implementations
---
### impl Default for MemoryHistory
#### fn default() -> Self
Returns the “default value” for a type.

Auto Trait Implementations
---
### impl RefUnwindSafe for MemoryHistory
### impl Send for MemoryHistory
### impl Sync for MemoryHistory
### impl Unpin for MemoryHistory
### impl UnwindSafe for MemoryHistory
Struct re_memory::MemoryLimit
===
```
pub struct MemoryLimit {
pub limit: Option<i64>,
}
```
Fields
---
`limit: Option<i64>`Limit in bytes.
This is primarily compared to what is reported by `crate::AccountingAllocator` (‘counted’).
We limit based on this instead of `resident` (RSS) because `counted` is what we have immediate control over, while RSS depends on what our allocator (MiMalloc) decides to do.
Implementations
---
### impl MemoryLimit
#### pub fn parse(limit: &str) -> Result<Self, String>

The limit can either be absolute (e.g. “16GB”) or relative (e.g. “50%”).

#### pub fn is_exceeded_by(&self, mem_use: &MemoryUse) -> Option<f32>

Returns how large a fraction of memory we should free to go down to the exact limit.
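The two contracts above can be sketched standalone: a parser that accepts an absolute byte count or a percentage of total RAM, and a helper returning the fraction of counted memory to free once the limit is exceeded. This is a hedged sketch under assumptions, not the crate's parser: the real one likely accepts more unit suffixes, and whether “GB” means 10^9 or 2^30 bytes is assumed here to be 10^9.

```rust
// Sketch of MemoryLimit::parse's two forms: "50%" (relative) or "16GB" (absolute).
fn parse_limit(spec: &str, total_ram_in_bytes: u64) -> Result<i64, String> {
    let spec = spec.trim();
    if let Some(percent) = spec.strip_suffix('%') {
        let p: f64 = percent.parse().map_err(|e| format!("bad percent: {e}"))?;
        Ok((total_ram_in_bytes as f64 * p / 100.0).round() as i64)
    } else if let Some(gb) = spec.strip_suffix("GB") {
        let g: f64 = gb.parse().map_err(|e| format!("bad number: {e}"))?;
        Ok((g * 1e9).round() as i64) // assumes decimal gigabytes
    } else {
        Err(format!("unrecognized limit: {spec:?}"))
    }
}

// Fraction of `counted` bytes to free to get back down to `limit`,
// mirroring the contract of is_exceeded_by (None when under the limit).
fn fraction_to_free(limit: i64, counted: i64) -> Option<f32> {
    if counted > limit {
        Some((counted - limit) as f32 / counted as f32)
    } else {
        None
    }
}

fn main() {
    assert_eq!(parse_limit("50%", 8_000_000_000).unwrap(), 4_000_000_000);
    assert_eq!(parse_limit("16GB", 8_000_000_000).unwrap(), 16_000_000_000);
    assert_eq!(fraction_to_free(100, 200), Some(0.5)); // free half to reach limit
    assert_eq!(fraction_to_free(100, 50), None);       // under the limit
}
```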
Trait Implementations
---
### impl Clone for MemoryLimit

#### fn clone(&self) -> MemoryLimit

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for MemoryLimit

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl Default for MemoryLimit

#### fn default() -> MemoryLimit

Returns the “default value” for a type.

### impl PartialEq for MemoryLimit

#### fn eq(&self, other: &MemoryLimit) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl Copy for MemoryLimit
### impl Eq for MemoryLimit
### impl StructuralEq for MemoryLimit
### impl StructuralPartialEq for MemoryLimit
Auto Trait Implementations
---
### impl RefUnwindSafe for MemoryLimit
### impl Send for MemoryLimit
### impl Sync for MemoryLimit
### impl Unpin for MemoryLimit
### impl UnwindSafe for MemoryLimit
Struct re_memory::MemoryUse
===
```
pub struct MemoryUse {
pub resident: Option<i64>,
pub counted: Option<i64>,
}
```
Fields
---
`resident: Option<i64>`Bytes allocated by the application according to operating system.
Resident Set Size (RSS) on Linux, Android, Mac, iOS.
Working Set on Windows.
`None` if unknown.
`counted: Option<i64>`Bytes used by the application according to our own memory allocator’s accounting.
This can be smaller than `Self::resident` because our memory allocator may not return all the memory we free to the OS.
`None` if `crate::AccountingAllocator` is not used.
Implementations
---
### impl MemoryUse
#### pub fn capture() -> Self
Trait Implementations
---
### impl Clone for MemoryUse

#### fn clone(&self) -> MemoryUse

Returns a copy of the value.

#### fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

### impl Debug for MemoryUse

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl PartialEq for MemoryUse

#### fn eq(&self, other: &MemoryUse) -> bool

This method tests for `self` and `other` values to be equal, and is used by `==`.

#### fn ne(&self, other: &Rhs) -> bool

This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.

### impl Sub<MemoryUse> for MemoryUse

#### type Output = MemoryUse

The resulting type after applying the `-` operator.

#### fn sub(self, rhs: Self) -> Self::Output

Performs the `-` operation.

### impl Eq for MemoryUse
### impl StructuralEq for MemoryUse
### impl StructuralPartialEq for MemoryUse
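Since both fields are `Option<i64>` (“unknown” when `None`), a natural reading of the `Sub` impl listed above is elementwise subtraction where an unknown operand yields an unknown result. The sketch below re-implements that assumed semantics standalone; it is not the crate's source.

```rust
use std::ops::Sub;

// Standalone sketch: elementwise optional subtraction for memory deltas.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct MemoryUse {
    resident: Option<i64>,
    counted: Option<i64>,
}

impl Sub for MemoryUse {
    type Output = MemoryUse;

    fn sub(self, rhs: Self) -> Self::Output {
        // Assumed rule: if either side is unknown, the delta is unknown.
        let sub = |a: Option<i64>, b: Option<i64>| match (a, b) {
            (Some(a), Some(b)) => Some(a - b),
            _ => None,
        };
        MemoryUse {
            resident: sub(self.resident, rhs.resident),
            counted: sub(self.counted, rhs.counted),
        }
    }
}

fn main() {
    let before = MemoryUse { resident: Some(1000), counted: Some(800) };
    let after = MemoryUse { resident: Some(1500), counted: None };
    let delta = after - before;
    assert_eq!(delta, MemoryUse { resident: Some(500), counted: None });
}
```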
Auto Trait Implementations
---
### impl RefUnwindSafe for MemoryUse
### impl Send for MemoryUse
### impl Sync for MemoryUse
### impl Unpin for MemoryUse
### impl UnwindSafe for MemoryUse
Struct re_memory::RamLimitWarner
===
```
pub struct RamLimitWarner { /* private fields */ }
```
Implementations
---
### impl RamLimitWarner
#### pub fn warn_at_fraction_of_max(fraction: f32) -> Self
#### pub fn update(&mut self)
Warns if we have exceeded the limit.
Auto Trait Implementations
---
### impl RefUnwindSafe for RamLimitWarner
### impl Send for RamLimitWarner
### impl Sync for RamLimitWarner
### impl Unpin for RamLimitWarner
### impl UnwindSafe for RamLimitWarner
Blanket Implementations
---
### impl<T> Any for Twhere
T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an
`Instrumented` wrapper.
`Instrumented` wrapper.
U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`.
That is, this conversion is whatever the implementation of
`From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for Twhere
U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere
U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where
S: Into<Dispatch>,
Attaches the provided `Subscriber` to this type, returning a
`WithDispatch` wrapper.
`WithDispatch` wrapper. Read more
Function re_memory::total_ram_in_bytes
===
```
pub fn total_ram_in_bytes() -> u64
```
Amount of available RAM on this machine. |
packagist_lazetime_amazon_advertising_api_php_sdk.jsonl | personal_doc | Unknown | "# Move your business forward\n\n### What brands can learn from a best-selling author who uses Amazo(...TRUNCATED) |
ormBigData | cran | R | "Package ‘ormBigData’\n October 14, 2022\nTitle Fitting(...TRUNCATED) |
go.opentelemetry.io/collector/config/configauth | go | Go | "README\n [¶](#section-readme)\n---\n\n### Authentication configuration\n\nThis module defines nece(...TRUNCATED) |
artificery | hex | Erlang | "Artificery\n===\n\n[![Module Version](https://img.shields.io/hexpm/v/artificery.svg)](https://hex.p(...TRUNCATED) |
rusoto_docdb | rust | Rust | "Crate rusoto_docdb\n===\n\nAmazon DocumentDB API documentation\n\nIf you’re using the service, yo(...TRUNCATED) |
Dataset Card
This dataset is the code documentation dataset used in StarCoder2 pre-training, and it is also part of the-stack-v2-train-extras described in the paper.
Dataset Details
Overview
This dataset comprises a comprehensive collection of crawled documentation and code-related resources sourced from various package manager platforms and programming language documentation sites. It focuses on popular libraries, free programming books, and other relevant materials, facilitating research in software development, programming language trends, and documentation analysis.
How to Use it
from datasets import load_dataset
ds = load_dataset("SivilTaram/starcoder2-documentation")
Data Fields
- project (string): The name or identifier of the project on each platform.
- source (string): The platform from which the documentation data is sourced.
- language (string): The identified programming language associated with the project.
- content (string): The text content of each document, formatted in Markdown.
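A minimal sketch of working with this schema in plain Python; the record values below are invented for illustration, and filtering is shown with a plain list comprehension rather than the `datasets` API:

```python
# One record following the dataset schema; the field values are made up.
record = {
    "project": "example-lib",
    "source": "pypi",
    "language": "Python",
    "content": "# example-lib\n\nUsage documentation in Markdown...",
}

def filter_by_language(rows, language):
    """Keep only records whose identified language matches."""
    return [r for r in rows if r["language"] == language]

rows = [record, {**record, "project": "other-lib", "language": "Rust"}]
python_rows = filter_by_language(rows, "Python")
print([r["project"] for r in python_rows])  # ['example-lib']
```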
Related Resources
For additional tools and methods related to converting HTML to Markdown, refer to the GitHub repository: code-html-to-markdown.
Data Sources
Package Managers:
- npm: Node.js package manager.
- PyPI: Python Package Index.
- Go Packages: Go programming language packages.
- Packagist: PHP package repository.
- Rubygems: Ruby package manager.
- Cargo: Rust package manager.
- CocoaPods: Dependency manager for Swift and Objective-C Cocoa projects.
- Bower: Front-end package manager.
- CPAN: Comprehensive Perl Archive Network.
- Clojars: Clojure library repository.
- Conda: Package manager for data science and scientific computing.
- Hex: Package manager for the Elixir programming language.
- Julia: Package manager for the Julia programming language.
Documentation Websites:
- A carefully curated list of programming-related websites, including Read the Docs and other well-known resources.
Free Programming Books:
- Sources from the Free Programming Books project, which promotes the availability of free programming e-books across various languages.
Data Collection Process
Library Retrieval:
- The process begins by identifying the most popular libraries across the aforementioned platforms using libraries.io.
- These library names serve as search queries to obtain their respective homepages.
Documentation Extraction:
- Homepage Links: Documentation files are crawled from the retrieved homepage links. If no dedicated documentation is found, README or equivalent files on the package manager platforms are utilized.
- Processing Strategy: For documents obtained through homepage links, the same processing strategy is applied as outlined for website crawls, ensuring consistent formatting and extraction quality.
- Prioritization: For libraries hosted on PyPI and Conda, documentation on Read the Docs is prioritized due to its comprehensive nature.
PDF Extraction:
- For R language documentation, text is extracted from all PDFs hosted on CRAN using the pdftotext library, which effectively preserves formatting.
- For LaTeX packages, documentation, tutorials, and usage guide PDFs from CTAN are filtered, excluding image-heavy PDFs, and converted to markdown using the Nougat neural OCR tool.
Web Crawling:
- Code documentation is collected from a curated list of websites by exploring from an initial URL, and the full list of all URLs can be found in the StarCoder2 paper.
- A dynamic queue is employed to store URLs within the same domain, expanding as new links are discovered during the crawl.
- The process focuses on (1) content extraction and (2) content concatenation:
- Content Extraction: HTML pages are converted to XML using the trafilatura library, which eliminates redundant navigation elements.
- Content Concatenation: Extracted content from different HTML pages is subjected to near-duplication checks using the minhash locality-sensitive hashing technique, applying a threshold of 0.7 to ensure unique content is retained.
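The near-duplication check in the concatenation step can be sketched in pure Python: a MinHash signature estimates the Jaccard similarity between shingled documents, and pairs scoring above the 0.7 threshold are treated as near-duplicates. This is a simplified single-pair sketch, not the production LSH pipeline; the hash scheme, signature length, and shingle size are illustrative choices.

```python
import hashlib

def minhash_signature(text, num_hashes=128, shingle_size=3):
    """MinHash signature over word shingles of the text."""
    words = text.split()
    shingles = {" ".join(words[i:i + shingle_size])
                for i in range(max(1, len(words) - shingle_size + 1))}
    # One min-hash per seeded hash function.
    return [min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
                for s in shingles)
            for seed in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature("the quick brown fox jumps over the lazy dog near the river bank")
b = minhash_signature("the quick brown fox jumps over the lazy dog near the river shore")
c = minhash_signature("completely unrelated sentence about package documentation crawling")

print(estimated_jaccard(a, b))  # close to the true Jaccard (~0.83), above 0.7
print(estimated_jaccard(a, c))  # near 0.0, well below the threshold
```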
Free Textbooks:
- The dataset includes free programming books collected from the Free Programming Books Project. Links with a PDF extension are extracted, and all available PDFs are downloaded and processed for text extraction using the pdf2text library.
Language Identification:
- A dual approach is utilized to identify the primary programming language of each document:
- Predefined Rules: Applied when the document's source explicitly corresponds to a specific programming language.
- Guesslang Library: Used in cases where the correspondence is not clear.
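The dual approach above can be sketched as a two-stage fallback. The rule table and the placeholder detector below are invented for illustration; the real pipeline uses its own source mapping and the guesslang model:

```python
# Stage 1 rule table: sources that imply a single language.
# This mapping is illustrative, not the pipeline's actual table.
SOURCE_RULES = {
    "cran": "R",
    "cpan": "Perl",
    "rubygems": "Ruby",
}

def detect_with_model(content):
    """Placeholder standing in for a statistical detector such as guesslang."""
    if "def " in content or "import " in content:
        return "Python"
    return "Unknown"

def identify_language(source, content):
    # Stage 1: predefined rule when the source fixes the language.
    if source in SOURCE_RULES:
        return SOURCE_RULES[source]
    # Stage 2: fall back to the model-based guess.
    return detect_with_model(content)

print(identify_language("cran", "any content"))       # rule applies: R
print(identify_language("webdoc", "import os"))       # model fallback: Python
```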
Dataset Characteristics
- Languages Covered: English, Chinese, Japanese, Spanish, and others.
- Document Types:
- Code documentation files
- PDF documents
- HTML pages
- E-books
- Programming Languages Included:
- Python
- JavaScript
- Rust
- R
- Go
- PHP
- Ruby
- Haskell
- Objective-C
- SQL
- YAML
- TeX
- Markdown
- And more...
Use Cases
- Analyzing trends in programming language documentation.
- Researching software development resources across multiple platforms.
- Training large language models on documentation datasets to better understand programming languages.
- Understanding the structure and accessibility of programming documentation.
Citation
@article{DBLP:journals/corr/abs-2402-19173,
author = {Anton Lozhkov and
Raymond Li and
Loubna Ben Allal and
Federico Cassano and
Joel Lamy{-}Poirier and
Nouamane Tazi and
Ao Tang and
Dmytro Pykhtar and
Jiawei Liu and
Yuxiang Wei and
Tianyang Liu and
Max Tian and
Denis Kocetkov and
Arthur Zucker and
Younes Belkada and
Zijian Wang and
Qian Liu and
Dmitry Abulkhanov and
Indraneil Paul and
Zhuang Li and
Wen{-}Ding Li and
Megan Risdal and
Jia Li and
Jian Zhu and
Terry Yue Zhuo and
Evgenii Zheltonozhskii and
Nii Osae Osae Dade and
Wenhao Yu and
Lucas Krau{\ss} and
Naman Jain and
Yixuan Su and
Xuanli He and
Manan Dey and
Edoardo Abati and
Yekun Chai and
Niklas Muennighoff and
Xiangru Tang and
Muhtasham Oblokulov and
Christopher Akiki and
Marc Marone and
Chenghao Mou and
Mayank Mishra and
Alex Gu and
Binyuan Hui and
Tri Dao and
Armel Zebaze and
Olivier Dehaene and
Nicolas Patry and
Canwen Xu and
Julian J. McAuley and
Han Hu and
Torsten Scholak and
S{\'{e}}bastien Paquet and
Jennifer Robinson and
Carolyn Jane Anderson and
Nicolas Chapados and
et al.},
title = {StarCoder 2 and The Stack v2: The Next Generation},
journal = {CoRR},
volume = {abs/2402.19173},
year = {2024},
url = {https://doi.org/10.48550/arXiv.2402.19173},
doi = {10.48550/ARXIV.2402.19173},
eprinttype = {arXiv},
eprint = {2402.19173},
timestamp = {Tue, 06 Aug 2024 08:17:53 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2402-19173.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}