The full dataset viewer is not available (click to read why). Only showing a preview of the rows.
Error code:   DatasetGenerationError
Exception:    ArrowIndexError
Message:      array slice would exceed array length
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 643, in write_table
                  pa_table = pa_table.combine_chunks()
                File "pyarrow/table.pxi", line 3638, in pyarrow.lib.Table.combine_chunks
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowIndexError: array slice would exceed array length
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the datasetNeed help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
| organization
				 string | repo_name
				 string | base_commit
				 string | iss_html_url
				 string | iss_label
				 string | title
				 string | body
				 string | code
				 null | pr_html_url
				 string | commit_html_url
				 null | file_loc
				 string | own_code_loc
				 list | ass_file_loc
				 list | other_rep_loc
				 list | analysis
				 dict | loctype
				 dict | iss_has_pr
				 int64 | 
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 
	scikit-learn | 
	scikit-learn | 
	559609fe98ec2145788133687e64a6e87766bc77 | 
	https://github.com/scikit-learn/scikit-learn/issues/25525 | 
	Bug
module:feature_extraction | 
	Extend SequentialFeatureSelector example to demonstrate how to use negative tol | 
	### Describe the bug
I utilized the **SequentialFeatureSelector** for feature selection in my code, with the direction set to "backward." The tolerance value is negative and the selection process stops when the decrease in the metric, AUC in this case, is less than the specified tolerance. Generally, increasing the number of features results in a higher AUC, but sacrificing some features, especially correlated ones that offer little contribution, can produce a pessimistic model with a lower AUC. The code worked as expected in **sklearn 1.1.1**, but when I updated to **sklearn 1.2.1**, I encountered the following error.
### Steps/Code to Reproduce
```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
X, y = load_breast_cancer(return_X_y=True)
TOL = -0.001
feature_selector = SequentialFeatureSelector(
                    LogisticRegression(max_iter=1000),
                    n_features_to_select="auto",
                    direction="backward",
                    scoring="roc_auc",
                    tol=TOL
                )
pipe = Pipeline(
    [('scaler', StandardScaler()), 
    ('feature_selector', feature_selector), 
    ('log_reg', LogisticRegression(max_iter=1000))]
    )
if __name__ == "__main__":
    pipe.fit(X, y)
    print(pipe['log_reg'].coef_[0])
```
### Expected Results
```
$ python sfs_tol.py 
[-2.0429818   0.5364346  -1.35765488 -2.85009904 -2.84603016]
```
### Actual Results
```python-traceback
$ python sfs_tol.py 
Traceback (most recent call last):
  File "/home/modelling/users-workspace/nsofinij/lab/open-source/sfs_tol.py", line 28, in <module>
    pipe.fit(X, y)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py", line 401, in fit
    Xt = self._fit(X, y, **fit_params_steps)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py", line 359, in _fit
    X, fitted_transformer = fit_transform_one_cached(
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/joblib/memory.py", line 349, in __call__
    return self.func(*args, **kwargs)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py", line 893, in _fit_transform_one
    res = transformer.fit_transform(X, y, **fit_params)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_set_output.py", line 142, in wrapped
    data_to_wrap = f(self, X, *args, **kwargs)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py", line 862, in fit_transform
    return self.fit(X, y, **fit_params).transform(X)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_selection/_sequential.py", line 201, in fit
    self._validate_params()
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py", line 581, in _validate_params
    validate_parameter_constraints(
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_param_validation.py", line 97, in validate_parameter_constraints
    raise InvalidParameterError(
sklearn.utils._param_validation.InvalidParameterError: The 'tol' parameter of SequentialFeatureSelector must be None or a float in the range (0, inf). Got -0.001 instead.
```
### Versions
```shell
System:
    python: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:26:04) [GCC 10.4.0]
executable: /home/modelling/opt/anaconda3/envs/py310/bin/python
   machine: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.26
Python dependencies:
      sklearn: 1.2.1
          pip: 23.0
   setuptools: 66.1.1
        numpy: 1.24.1
        scipy: 1.10.0
       Cython: None
       pandas: 1.5.3
   matplotlib: 3.6.3
       joblib: 1.2.0
threadpoolctl: 3.1.0
Built with OpenMP: True
threadpoolctl info:
       user_api: openmp
   internal_api: openmp
         prefix: libgomp
       filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0
        version: None
    num_threads: 64
       user_api: blas
   internal_api: openblas
         prefix: libopenblas
       filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/numpy.libs/libopenblas64_p-r0-15028c96.3.21.so
        version: 0.3.21
threading_layer: pthreads
   architecture: SkylakeX
    num_threads: 64
       user_api: blas
   internal_api: openblas
         prefix: libopenblas
       filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scipy.libs/libopenblasp-r0-41284840.3.18.so
        version: 0.3.18
threading_layer: pthreads
   architecture: SkylakeX
    num_threads: 64
```
 | null | 
	https://github.com/scikit-learn/scikit-learn/pull/26205 | null | 
	{'base_commit': '559609fe98ec2145788133687e64a6e87766bc77', 'files': [{'path': 'examples/feature_selection/plot_select_from_model_diabetes.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [145], 'mod': [123, 124, 125]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "examples/feature_selection/plot_select_from_model_diabetes.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | null | 
| 
	pallets | 
	flask | 
	cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4 | 
	https://github.com/pallets/flask/issues/2264 | 
	cli | 
	Handle app factory in FLASK_APP | 
	`FLASK_APP=myproject.app:create_app('dev')`
[
Gunicorn does this with `eval`](https://github.com/benoitc/gunicorn/blob/fbd151e9841e2c87a18512d71475bcff863a5171/gunicorn/util.py#L364), which I'm not super happy with. Instead, we could use `literal_eval` to allow a simple list of arguments. The line should never be so complicated that `eval` would be necessary anyway.
~~~python
# might need to fix this regex
m = re.search(r'(\w+)(\(.*\))', app_obj)
if m:
    app = getattr(mod, m.group(1))(*literal_eval(m.group(2)))
~~~ | null | 
	https://github.com/pallets/flask/pull/2326 | null | 
	{'base_commit': 'cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4', 'files': [{'path': 'flask/cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 12]}, "(None, 'find_best_app', 32)": {'mod': [58, 62, 69, 71]}, "(None, 'call_factory', 82)": {'mod': [82, 83, 84, 85, 86, 88, 89, 90, 91, 92, 93]}, "(None, 'locate_app', 125)": {'mod': [151, 153, 154, 155, 156, 158]}}}, {'path': 'tests/test_cli.py', 'status': 'modified', 'Loc': {"(None, 'test_locate_app', 148)": {'add': [152], 'mod': [154, 155, 156, 157, 158, 159, 160, 161]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "flask/cli.py"
  ],
  "doc": [],
  "test": [
    "tests/test_cli.py"
  ],
  "config": [],
  "asset": []
} | 1 | 
| 
	localstack | 
	localstack | 
	737ca72b7bce6e377dd6876eacee63338fa8c30c | 
	https://github.com/localstack/localstack/issues/894 | 
	ERROR:localstack.services.generic_proxy: Error forwarding request: | 
	Starting local dev environment. CTRL-C to quit.
Starting mock API Gateway (http port 4567)...
Starting mock DynamoDB (http port 4569)...
Starting mock SES (http port 4579)...
Starting mock Kinesis (http port 4568)...
Starting mock Redshift (http port 4577)...
Starting mock S3 (http port 4572)...
Starting mock CloudWatch (http port 4582)...
Starting mock CloudFormation (http port 4581)...
Starting mock SSM (http port 4583)...
Starting mock SQS (http port 4576)...
Starting local Elasticsearch (http port 4571)...
Starting mock SNS (http port 4575)...
Starting mock DynamoDB Streams service (http port 4570)...
Starting mock Firehose service (http port 4573)...
Starting mock Route53 (http port 4580)...
Starting mock ES service (http port 4578)...
Starting mock Lambda service (http port 4574)...
2018-08-11T13:33:08:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d442415d0>: Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):
  File "/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py", line 201, in forward
    headers=forward_headers)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 112, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py", line 508, in send
    raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d442415d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
2018-08-11T13:34:08:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d4425fa10>: Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):
  File "/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py", line 201, in forward
    headers=forward_headers)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 112, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py", line 508, in send
    raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d4425fa10>: Failed to establish a new connection: [Errno 111] Connection refused',))
2018-08-11T13:35:09:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d4429c3d0>: Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):
  File "/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py", line 201, in forward
    headers=forward_headers)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 112, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py", line 508, in send
    raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d4429c3d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
2018-08-11T13:36:09:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d4425f910>: Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):
  File "/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py", line 201, in forward
    headers=forward_headers)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 112, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py", line 508, in send
    raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d4425f910>: Failed to establish a new connection: [Errno 111] Connection refused',))
 | null | 
	https://github.com/localstack/localstack/pull/1526 | null | 
	{'base_commit': '737ca72b7bce6e377dd6876eacee63338fa8c30c', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [186]}}}, {'path': 'localstack/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'localstack/services/kinesis/kinesis_starter.py', 'status': 'modified', 'Loc': {"(None, 'start_kinesis', 14)": {'add': [17], 'mod': [14, 23, 24]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "localstack/config.py",
    "localstack/services/kinesis/kinesis_starter.py"
  ],
  "doc": [
    "README.md"
  ],
  "test": [],
  "config": [],
  "asset": []
} | 1 | |
| 
	huggingface | 
	transformers | 
	d2871b29754abd0f72cf42c299bb1c041519f7bc | 
	https://github.com/huggingface/transformers/issues/30 | 
	[Feature request] Add example of finetuning the pretrained models on custom corpus | null | 
	https://github.com/huggingface/transformers/pull/25107 | null | 
	{'base_commit': 'd2871b29754abd0f72cf42c299bb1c041519f7bc', 'files': [{'path': 'src/transformers/modeling_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [75, 108]}, "('PreTrainedModel', 'from_pretrained', 1959)": {'add': [2227]}, "(None, 'load_state_dict', 442)": {'mod': [461]}, "('PreTrainedModel', '_load_pretrained_model', 3095)": {'mod': [3183, 3388, 3389, 3390, 3391, 3392, 3393, 3394, 3395, 3396, 3397, 3398, 3399, 3400, 3401, 3402, 3403, 3404]}}}, {'path': 'src/transformers/trainer.py', 'status': 'modified', 'Loc': {"('Trainer', '__init__', 313)": {'mod': [468, 469, 470]}, "('Trainer', '_wrap_model', 1316)": {'mod': [1382, 1385, 1387]}, "('Trainer', 'train', 1453)": {'mod': [1520]}, "('Trainer', '_inner_training_loop', 1552)": {'mod': [1654]}, "('Trainer', 'create_accelerator_and_postprocess', 3866)": {'mod': [3889]}}}, {'path': 'src/transformers/training_args.py', 'status': 'modified', 'Loc': {"('TrainingArguments', None, 158)": {'add': [464], 'mod': [439, 442, 445, 457]}, "('TrainingArguments', '__post_init__', 1221)": {'add': [1522, 1524, 1585], 'mod': [1529, 1530, 1531, 1533, 1534, 1535, 1536, 1537, 1543, 1544, 1547, 1548, 1550, 1551, 1555, 1556, 1558, 1559, 1560, 1589, 1591, 1593, 1594, 1595, 1596, 1597, 1598, 1599, 1602]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "src/transformers/trainer.py",
    "src/transformers/modeling_utils.py",
    "src/transformers/training_args.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | ||
| 
	pandas-dev | 
	pandas | 
	51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319 | 
	https://github.com/pandas-dev/pandas/issues/11080 | 
	Indexing
Performance | 
	PERF: checking is_monotonic_increasing/decreasing before sorting on an index | 
	We don't keep the sortedness state in an index per-se, but it is rather cheap to check
- `is_monotonic_increasing` or `is_monotonic_decreasing` on a reg-index 
- MultiIndex should check `is_lexsorted` (this might be done already)
```
In [8]: df = DataFrame(np.random.randn(1000000,2),columns=list('AB'))
In [9]: %timeit df.sort_index()
10 loops, best of 3: 37.1 ms per loop
In [10]: %timeit -n 1 -r 1 df.index.is_monotonic_increasing
1 loops, best of 1: 2.01 ms per loop
In [11]: %timeit -n 1 -r 1 df.index.is_monotonic_increasin^C
KeyboardInterrupt
In [11]: %timeit df.set_index('A').sort_index()
10 loops, best of 3: 175 ms per loop
In [12]: %timeit -n 1 -r 1 df.set_index('A').index.is_monotonic_increasing
1 loops, best of 1: 9.54 ms per loop
```
 | null | 
	https://github.com/pandas-dev/pandas/pull/11294 | null | 
	{'base_commit': '51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319', 'files': [{'path': 'asv_bench/benchmarks/frame_methods.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [932]}}}, {'path': 'doc/source/whatsnew/v0.17.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [54]}}}, {'path': 'pandas/core/frame.py', 'status': 'modified', 'Loc': {"('DataFrame', 'sort_index', 3126)": {'add': [3159]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "pandas/core/frame.py",
    "asv_bench/benchmarks/frame_methods.py"
  ],
  "doc": [
    "doc/source/whatsnew/v0.17.1.txt"
  ],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	zylon-ai | 
	private-gpt | 
	fdb45741e521d606b028984dbc2f6ac57755bb88 | 
	https://github.com/zylon-ai/private-gpt/issues/10 | 
	Suggestions for speeding up ingestion? | 
	I presume I must be doing something wrong, as it is taking hours to ingest a 500kbyte text on an i9-12900 with 128GB.  In fact it's not even done yet.  Using models are recommended.
Help?
Thanks
Some output:
llama_print_timings:        load time =   674.34 ms
llama_print_timings:      sample time =     0.00 ms /     1 runs   (    0.00 ms per run)
llama_print_timings: prompt eval time = 12526.78 ms /   152 tokens (   82.41 ms per token)
llama_print_timings:        eval time =   157.46 ms /     1 runs   (  157.46 ms per run)
llama_print_timings:       total time = 12715.48 ms | null | 
	https://github.com/zylon-ai/private-gpt/pull/224 | null | 
	{'base_commit': 'fdb45741e521d606b028984dbc2f6ac57755bb88', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4, 15, 17, 23, 25, 28, 58, 62, 86]}}}, {'path': 'example.env', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [2]}}}, {'path': 'ingest.py', 'status': 'modified', 'Loc': {"(None, 'main', 71)": {'add': [79], 'mod': [75, 76, 81, 84, 87, 90]}, '(None, None, None)': {'mod': [22]}}}, {'path': 'privateGPT.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 11]}, "(None, 'main', 20)": {'mod': [21, 22]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "2",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "ingest.py",
    "privateGPT.py"
  ],
  "doc": [
    "README.md"
  ],
  "test": [],
  "config": [
    "example.env"
  ],
  "asset": []
} | 1 | |
| 
	huggingface | 
	transformers | 
	9fef668338b15e508bac99598dd139546fece00b | 
	https://github.com/huggingface/transformers/issues/9 | 
	Crash at the end of training | 
	Hi, I tried running the Squad model this morning (on a single GPU with gradient accumulation over 3 steps) but after 3 hours of training, my job failed with the following output:
I was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8
Is this an issue you know about?
```
11/08/2018 17:50:03 - INFO - __main__ -   device cuda n_gpu 1 distributed training False
11/08/2018 17:50:18 - INFO - __main__ -   *** Example ***
11/08/2018 17:50:18 - INFO - __main__ -   unique_id: 1000000000
11/08/2018 17:50:18 - INFO - __main__ -   example_index: 0
11/08/2018 17:50:18 - INFO - __main__ -   doc_span_index: 0
11/08/2018 17:50:18 - INFO - __main__ -   tokens: [CLS] to whom did the virgin mary allegedly appear in 1858 in lou ##rdes france ? [SEP] architectural ##ly , the school has a catholic character . atop the main building ' s gold dome is a golden statue of the virgin mary . immediately in front of the main building and facing it , is a copper statue of christ with arms up ##rai ##sed with the legend " ve ##ni ##te ad me om ##nes " . next to the main building is the basilica of the sacred heart . immediately behind the basilica is the gr ##otto , a marian place of prayer and reflection . it is a replica of the gr ##otto at lou ##rdes , france where the virgin mary reputed ##ly appeared to saint bern ##ade ##tte so ##ub ##iro ##us in 1858 . at the end of the main drive ( and in a direct line that connects through 3 statues and the gold dome ) , is a simple , modern stone statue of mary . [SEP]
11/08/2018 17:50:18 - INFO - __main__ -   token_to_orig_map: 17:0 18:0 19:0 20:1 21:2 22:3 23:4 24:5 25:6 26:6 27:7 28:8 29:9 30:10 31:10 32:10 33:11 34:12 35:13 36:14 37:15 38:16 39:17 40:18 41:19 42:20 43:20 44:21 45:22 46:23 47:24 48:25 49:26 50:27 51:28 52:29 53:30 54:30 55:31 56:32 57:33 58:34 59:35 60:36 61:37 62:38 63:39 64:39 65:39 66:40 67:41 68:42 69:43 70:43 71:43 72:43 73:44 74:45 75:46 76:46 77:46 78:46 79:47 80:48 81:49 82:50 83:51 84:52 85:53 86:54 87:55 88:56 89:57 90:58 91:58 92:59 93:60 94:61 95:62 96:63 97:64 98:65 99:65 100:65 101:66 102:67 103:68 104:69 105:70 106:71 107:72 108:72 109:73 110:74 111:75 112:76 113:77 114:78 115:79 116:79 117:80 118:81 119:81 120:81 121:82 122:83 123:84 124:85 125:86 126:87 127:87 128:88 129:89 130:90 131:91 132:91 133:91 134:92 135:92 136:92 137:92 138:93 139:94 140:94 141:95 142:96 143:97 144:98 145:99 146:100 147:101 148:102 149:102 150:103 151:104 152:105 153:106 154:107 155:108 156:109 157:110 158:111 159:112 160:113 161:114 162:115 163:115 164:115 165:116 166:117 167:118 168:118 169:119 170:120 171:121 172:122 173:123 174:123
11/08/2018 17:50:18 - INFO - __main__ -   token_is_max_context: 17:True 18:True 19:True 20:True 21:True 22:True 23:True 24:True 25:True 26:True 27:True 28:True 29:True 30:True 31:True 32:True 33:True 34:True 35:True 36:True 37:True 38:True 39:True 40:True 41:True 42:True 43:True 44:True 45:True 46:True 47:True 48:True 49:True 50:True 51:True 52:True 53:True 54:True 55:True 56:True 57:True 58:True 59:True 60:True 61:True 62:True 63:True 64:True 65:True 66:True 67:True 68:True 69:True 70:True 71:True 72:True 73:True 74:True 75:True 76:True 77:True 78:True 79:True 80:True 81:True 82:True 83:True 84:True 85:True 86:True 87:True 88:True 89:True 90:True 91:True 92:True 93:True 94:True 95:True 96:True 97:True 98:True 99:True 100:True 101:True 102:True 103:True 104:True 105:True 106:True 107:True 108:True 109:True 110:True 111:True 112:True 113:True 114:True 115:True 116:True 117:True 118:True 119:True 120:True 121:True 122:True 123:True 124:True 125:True 126:True 127:True 128:True 129:True 130:True 131:True 132:True 133:True 134:True 135:True 136:True 137:True 138:True 139:True 140:True 141:True 142:True 143:True 144:True 145:True 146:True 147:True 148:True 149:True 150:True 151:True 152:True 153:True 154:True 155:True 156:True 157:True 158:True 159:True 160:True 161:True 162:True 163:True 164:True 165:True 166:True 167:True 168:True 169:True 170:True 171:True 172:True 173:True 174:True
11/08/2018 17:50:18 - INFO - __main__ -   input_ids: 101 2000 3183 2106 1996 6261 2984 9382 3711 1999 8517 1999 10223 26371 2605 1029 102 6549 2135 1010 1996 2082 2038 1037 3234 2839 1012 10234 1996 2364 2311 1005 1055 2751 8514 2003 1037 3585 6231 1997 1996 6261 2984 1012 3202 1999 2392 1997 1996 2364 2311 1998 5307 2009 1010 2003 1037 6967 6231 1997 4828 2007 2608 2039 14995 6924 2007 1996 5722 1000 2310 3490 2618 4748 2033 18168 5267 1000 1012 2279 2000 1996 2364 2311 2003 1996 13546 1997 1996 6730 2540 1012 3202 2369 1996 13546 2003 1996 24665 23052 1010 1037 14042 2173 1997 7083 1998 9185 1012 2009 2003 1037 15059 1997 1996 24665 23052 2012 10223 26371 1010 2605 2073 1996 6261 2984 22353 2135 2596 2000 3002 16595 9648 4674 2061 12083 9711 2271 1999 8517 1012 2012 1996 2203 1997 1996 2364 3298 1006 1998 1999 1037 3622 2240 2008 8539 2083 1017 11342 1998 1996 2751 8514 1007 1010 2003 1037 3722 1010 2715 2962 6231 1997 2984 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11/08/2018 17:50:18 - INFO - __main__ -   input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
... [truncated] ...
Iteration: 100%|█████████▉| 29314/29324 [3:27:55<00:04,  2.36it/s][A
Iteration: 100%|█████████▉| 29315/29324 [3:27:55<00:03,  2.44it/s][A
Iteration: 100%|█████████▉| 29316/29324 [3:27:56<00:03,  2.26it/s][A
Iteration: 100%|█████████▉| 29317/29324 [3:27:56<00:02,  2.35it/s][A
Iteration: 100%|█████████▉| 29318/29324 [3:27:56<00:02,  2.44it/s][A
Iteration: 100%|█████████▉| 29319/29324 [3:27:57<00:02,  2.25it/s][A
Iteration: 100%|█████████▉| 29320/29324 [3:27:57<00:01,  2.35it/s][A
Iteration: 100%|█████████▉| 29321/29324 [3:27:58<00:01,  2.41it/s][A
Iteration: 100%|█████████▉| 29322/29324 [3:27:58<00:00,  2.25it/s][A
Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00,  2.36it/s][ATraceback (most recent call last):
  File "code/run_squad.py", line 929, in <module>
    main()
  File "code/run_squad.py", line 862, in main
    loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/0x0d4ff90d01fa4168983197b17d73bb0c_dependencies/code/modeling.py", line 467, in forward
    start_loss = loss_fct(start_logits, start_positions)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 862, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1550, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1403, in nll_loss
    if input.size(0) != target.size(0):
RuntimeError: dimension specified as 0 but tensor has no dimensions
Exception ignored in: <bound method tqdm.__del__ of Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00,  2.36it/s]>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 931, in __del__
    self.close()
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 1133, in close
    self._decr_instances(self)
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 496, in _decr_instances
    cls.monitor.exit()
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_monitor.py", line 52, in exit
    self.join()
  File "/usr/lib/python3.6/threading.py", line 1053, in join
    raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
``` | null | 
	https://github.com/huggingface/transformers/pull/16310 | null | 
	{'base_commit': '9fef668338b15e508bac99598dd139546fece00b', 'files': [{'path': 'tests/big_bird/test_modeling_big_bird.py', 'status': 'modified', 'Loc': {"('BigBirdModelTester', '__init__', 47)": {'mod': [73]}, "('BigBirdModelTest', 'test_fast_integration', 561)": {'mod': [584]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [],
  "doc": [],
  "test": [
    "tests/big_bird/test_modeling_big_bird.py"
  ],
  "config": [],
  "asset": []
} | 1 | |
| 
	psf | 
	requests | 
	ccabcf1fca906bfa6b65a3189c1c41061e6c1042 | 
	https://github.com/psf/requests/issues/3698 | 
	AttributeError: 'NoneType' object has no attribute 'read' | 
	Hello :)
After a recent upgrade for our [coala](https://github.com/coala/coala) project to `requests` 2.12.1 we encounter an exception in our test suites which seems to be caused by `requests`.
Build: https://ci.appveyor.com/project/coala/coala-bears/build/1.0.3537/job/1wm7b4u9yhgkxkgn
Relevant part:
```
================================== FAILURES ===================================
_________________ InvalidLinkBearTest.test_redirect_threshold _________________
self = <tests.general.InvalidLinkBearTest.InvalidLinkBearTest testMethod=test_redirect_threshold>
    def test_redirect_threshold(self):
    
        long_url_redirect = """
            https://bitbucket.org/api/301
            https://bitbucket.org/api/302
            """.splitlines()
    
        short_url_redirect = """
            http://httpbin.org/status/301
            """.splitlines()
    
        self.assertResult(valid_file=long_url_redirect,
                          invalid_file=short_url_redirect,
>                         settings={'follow_redirects': 'yeah'})
tests\general\InvalidLinkBearTest.py:157: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests\general\InvalidLinkBearTest.py:75: in assertResult
    out = list(uut.run("valid", valid_file, **settings))
bears\general\InvalidLinkBear.py:80: in run
    file, timeout, link_ignore_regex):
bears\general\InvalidLinkBear.py:53: in find_links_in_file
    code = InvalidLinkBear.get_status_code(link, timeout)
bears\general\InvalidLinkBear.py:37: in get_status_code
    timeout=timeout).status_code
C:\Python34\lib\site-packages\requests\api.py:96: in head
    return request('head', url, **kwargs)
C:\Python34\lib\site-packages\requests\api.py:56: in request
    return session.request(method=method, url=url, **kwargs)
C:\Python34\lib\site-packages\requests\sessions.py:488: in request
    resp = self.send(prep, **send_kwargs)
C:\Python34\lib\site-packages\requests_mock\mocker.py:69: in _fake_send
    return self._real_send(session, request, **kwargs)
C:\Python34\lib\site-packages\requests\sessions.py:641: in send
    r.content
C:\Python34\lib\site-packages\requests\models.py:772: in content
    self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    def generate():
        # Special case for urllib3.
        if hasattr(self.raw, 'stream'):
            try:
                for chunk in self.raw.stream(chunk_size, decode_content=True):
                    yield chunk
            except ProtocolError as e:
                raise ChunkedEncodingError(e)
            except DecodeError as e:
                raise ContentDecodingError(e)
            except ReadTimeoutError as e:
                raise ConnectionError(e)
        else:
            # Standard file-like object.
            while True:
>               chunk = self.raw.read(chunk_size)
E               AttributeError: 'NoneType' object has no attribute 'read'
C:\Python34\lib\site-packages\requests\models.py:705: AttributeError
```
happens on Windows and Linux.
Thanks in advance :) | null | 
	https://github.com/psf/requests/pull/3718 | null | 
	{'base_commit': 'ccabcf1fca906bfa6b65a3189c1c41061e6c1042', 'files': [{'path': 'requests/models.py', 'status': 'modified', 'Loc': {"('Response', 'content', 763)": {'mod': [772]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {"('TestRequests', None, 55)": {'add': [1096]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "requests/models.py"
  ],
  "doc": [],
  "test": [
    "tests/test_requests.py"
  ],
  "config": [],
  "asset": []
} | 1 | |
| 
	AntonOsika | 
	gpt-engineer | 
	fc805074be7b3b507bc1699e537f9b691c6f91b9 | 
	https://github.com/AntonOsika/gpt-engineer/issues/674 | 
	bug
documentation | 
	ModuleNotFoundError: No module named 'tkinter' | 
	**Bug description**
When running `gpt-engineer --improve` (using the recent version from PyPI), I get the following output:
```
$ gpt-engineer --improve
Traceback (most recent call last):
  File "/home/.../.local/bin/gpt-engineer", line 5, in <module>
    from gpt_engineer.main import app
  File "/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/main.py", line 12, in <module>
    from gpt_engineer.collect import collect_learnings
  File "/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/collect.py", line 5, in <module>
    from gpt_engineer import steps
  File "/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/steps.py", line 19, in <module>
    from gpt_engineer.file_selector import FILE_LIST_NAME, ask_for_files
  File "/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/file_selector.py", line 4, in <module>
    import tkinter as tk
ModuleNotFoundError: No module named 'tkinter'
```
**Expected behavior**
No error.
In https://github.com/AntonOsika/gpt-engineer/pull/465, no changes where made to the required packages, so tkinter might be added there. (Or made optional.)
EDIT: The error happens always, regardless of the command line parameter. | null | 
	https://github.com/AntonOsika/gpt-engineer/pull/675 | null | 
	{'base_commit': 'fc805074be7b3b507bc1699e537f9b691c6f91b9', 'files': [{'path': 'docs/installation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "3",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [],
  "doc": [
    "docs/installation.rst"
  ],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	pallets | 
	flask | 
	85dce2c836fe03aefc07b7f4e0aec575e170f1cd | 
	https://github.com/pallets/flask/issues/593 | 
	blueprints | 
	Nestable blueprints | 
	I'd like to be able to register "sub-blueprints" using `Blueprint.register_blueprint(*args, **kwargs)`. This would register the nested blueprints with an app when the "parent" is registered with it. All parameters are preserved, other than `url_prefix`, which is handled similarly to in `add_url_rule`. A naíve implementation could look like this:
``` python
class Blueprint(object):
    ...
    def register_blueprint(self, blueprint, **options):
        def deferred(state):
            url_prefix = options.get('url_prefix')
            if url_prefix is None:
                url_prefix = blueprint.url_prefix
            if 'url_prefix' in options:
                del options['url_prefix']
            state.app.register_blueprint(blueprint, url_prefix, **options)
        self.record(deferred)
```
 | null | 
	https://github.com/pallets/flask/pull/3923 | null | 
	{'base_commit': '85dce2c836fe03aefc07b7f4e0aec575e170f1cd', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, 71)': {'add': [71]}}}, {'path': 'docs/blueprints.rst', 'status': 'modified', 'Loc': {'(None, None, 122)': {'add': [122]}}}, {'path': 'src/flask/app.py', 'status': 'modified', 'Loc': {"('Flask', '__call__', 1982)": {'add': [1987]}, "('Flask', 'update_template_context', 712)": {'mod': [726, 727, 728]}, "('Flask', 'register_blueprint', 971)": {'mod': [990, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1004]}, "('Flask', '_find_error_handler', 1230)": {'mod': [1238, 1239, 1240, 1241, 1242, 1243, 1244]}, "('Flask', 'preprocess_request', 1741)": {'mod': [1752, 1755, 1756, 1761, 1762]}, "('Flask', 'process_response', 1768)": {'mod': [1782, 1784, 1785]}, "('Flask', 'do_teardown_request', 1794)": {'mod': [1818, 1819, 1820]}}}, {'path': 'src/flask/blueprints.py', 'status': 'modified', 'Loc': {"('BlueprintSetupState', '__init__', 16)": {'add': [47]}, "('Blueprint', '__init__', 141)": {'add': [170]}, "('Blueprint', 'register', 213)": {'add': [225], 'mod': [281, 282, 286, 287, 288, 289, 290, 291, 292, 293]}, "('BlueprintSetupState', 'add_url_rule', 53)": {'mod': [71]}, "('Blueprint', None, 78)": {'mod': [213]}}}, {'path': 'tests/test_blueprints.py', 'status': 'modified', 'Loc': {"(None, 'test_app_url_processors', 828)": {'add': [852]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "src/flask/blueprints.py",
    "src/flask/app.py"
  ],
  "doc": [
    "docs/blueprints.rst",
    "CHANGES.rst"
  ],
  "test": [
    "tests/test_blueprints.py"
  ],
  "config": [],
  "asset": []
} | null | 
| 
	AUTOMATIC1111 | 
	stable-diffusion-webui | 
	f92d61497a426a19818625c3ccdaae9beeb82b31 | 
	https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14263 | 
	bug | 
	[Bug]: KeyError: "do_not_save" when trying to save a prompt | 
	### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
When I try to save a prompt, it errors in the console saying
```
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py", line 212, in save_styles
    style_paths.remove("do_not_save")
KeyError: 'do_not_save'
```
and the file is not modified
I manually commented it out and it doesn't seem to break anything, except that it is saved to styles.csv.csv instead of styles.csv
### Steps to reproduce the problem
Try to save a prompt
### What should have happened?
Save into style.csv with no error
### Sysinfo
{
    "Platform": "Linux-6.6.4-zen1-1-zen-x86_64-with-glibc2.38",
    "Python": "3.11.4",
    "Version": "v1.7.0-RC-5-gf92d6149",
    "Commit": "f92d61497a426a19818625c3ccdaae9beeb82b31",
    "Script path": "/home/ciel/stable-diffusion/stable-diffusion-webui",
    "Data path": "/home/ciel/stable-diffusion/stable-diffusion-webui",
    "Extensions dir": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions",
    "Checksum": "e15aad6adb98a2a0ad13cad2b45b61b03565ef4f258783021da82b4ef7f37fa9",
    "Commandline": [
        "launch.py"
    ],
    "Torch env info": {
        "torch_version": "2.2.0",
        "is_debug_build": "False",
        "cuda_compiled_version": "N/A",
        "gcc_version": "(GCC) 13.2.1 20230801",
        "clang_version": "16.0.6",
        "cmake_version": "version 3.26.4",
        "os": "Arch Linux (x86_64)",
        "libc_version": "glibc-2.38",
        "python_version": "3.11.4 (main, Jul  5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)",
        "python_platform": "Linux-6.6.4-zen1-1-zen-x86_64-with-glibc2.38",
        "is_cuda_available": "True",
        "cuda_runtime_version": null,
        "cuda_module_loading": "LAZY",
        "nvidia_driver_version": null,
        "nvidia_gpu_models": "AMD Radeon RX 7900 XTX (gfx1100)",
        "cudnn_version": null,
        "pip_version": "pip3",
        "pip_packages": [
            "numpy==1.23.5",
            "open-clip-torch==2.20.0",
            "pytorch-lightning==1.9.4",
            "pytorch-triton-rocm==2.1.0+dafe145982",
            "torch==2.2.0.dev20231208+rocm5.6",
            "torchdiffeq==0.2.3",
            "torchmetrics==1.2.1",
            "torchsde==0.2.6",
            "torchvision==0.17.0.dev20231208+rocm5.6"
        ],
        "conda_packages": [
            "numpy                     1.26.2          py311h24aa872_0  ",
            "numpy-base                1.26.2          py311hbfb1bba_0  ",
            "open-clip-torch           2.20.0                   pypi_0    pypi",
            "pytorch-lightning         1.9.4                    pypi_0    pypi",
            "pytorch-triton-rocm       2.1.0+dafe145982          pypi_0    pypi",
            "torch                     2.2.0.dev20231208+rocm5.7          pypi_0    pypi",
            "torchaudio                2.2.0.dev20231208+rocm5.7          pypi_0    pypi",
            "torchdiffeq               0.2.3                    pypi_0    pypi",
            "torchmetrics              1.2.1                    pypi_0    pypi",
            "torchsde                  0.2.5                    pypi_0    pypi",
            "torchvision               0.17.0.dev20231208+rocm5.7          pypi_0    pypi"
        ],
        "hip_compiled_version": "5.6.31061-8c743ae5d",
        "hip_runtime_version": "5.6.31061",
        "miopen_runtime_version": "2.20.0",
        "caching_allocator_config": "",
        "is_xnnpack_available": "True",
        "cpu_info": [
            "Architecture:                       x86_64",
            "CPU op-mode(s):                     32-bit, 64-bit",
            "Address sizes:                      48 bits physical, 48 bits virtual",
            "Byte Order:                         Little Endian",
            "CPU(s):                             32",
            "On-line CPU(s) list:                0-31",
            "Vendor ID:                          AuthenticAMD",
            "Model name:                         AMD Ryzen 9 5950X 16-Core Processor",
            "CPU family:                         25",
            "Model:                              33",
            "Thread(s) per core:                 2",
            "Core(s) per socket:                 16",
            "Socket(s):                          1",
            "Stepping:                           0",
            "Frequency boost:                    disabled",
            "CPU(s) scaling MHz:                 49%",
            "CPU max MHz:                        6279.4922",
            "CPU min MHz:                        2200.0000",
            "BogoMIPS:                           8383.88",
            "Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap",
            "Virtualization:                     AMD-V",
            "L1d cache:                          512 KiB (16 instances)",
            "L1i cache:                          512 KiB (16 instances)",
            "L2 cache:                           8 MiB (16 instances)",
            "L3 cache:                           64 MiB (2 instances)",
            "NUMA node(s):                       1",
            "NUMA node0 CPU(s):                  0-31",
            "Vulnerability Gather data sampling: Not affected",
            "Vulnerability Itlb multihit:        Not affected",
            "Vulnerability L1tf:                 Not affected",
            "Vulnerability Mds:                  Not affected",
            "Vulnerability Meltdown:             Not affected",
            "Vulnerability Mmio stale data:      Not affected",
            "Vulnerability Retbleed:             Not affected",
            "Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode",
            "Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl",
            "Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization",
            "Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected",
            "Vulnerability Srbds:                Not affected",
            "Vulnerability Tsx async abort:      Not affected"
        ]
    },
    "Exceptions": [],
    "CPU": {
        "model": "",
        "count logical": 32,
        "count physical": 16
    },
    "RAM": {
        "total": "31GB",
        "used": "6GB",
        "free": "20GB",
        "active": "7GB",
        "inactive": "2GB",
        "buffers": "172MB",
        "cached": "5GB",
        "shared": "199MB"
    },
    "Extensions": [
        {
            "name": "clip-interrogator-ext",
            "path": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/clip-interrogator-ext",
            "version": "0f1a4591",
            "branch": "main",
            "remote": "https://github.com/pharmapsychotic/clip-interrogator-ext.git"
        },
        {
            "name": "latent-upscale",
            "path": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/latent-upscale",
            "version": "b9f75f44",
            "branch": "main",
            "remote": "https://github.com/feynlee/latent-upscale.git"
        },
        {
            "name": "sd-webui-controlnet",
            "path": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/sd-webui-controlnet",
            "version": "feea1f65",
            "branch": "main",
            "remote": "https://github.com/Mikubill/sd-webui-controlnet.git"
        },
        {
            "name": "ultimate-upscale-for-automatic1111",
            "path": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/ultimate-upscale-for-automatic1111",
            "version": "728ffcec",
            "branch": "master",
            "remote": "https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git"
        }
    ],
    "Inactive extensions": [],
    "Environment": {
        "GIT": "git",
        "GRADIO_ANALYTICS_ENABLED": "False",
        "TORCH_COMMAND": "pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm5.6"
    },
    "Config": {
        "samples_save": true,
        "samples_format": "png",
        "samples_filename_pattern": "",
        "save_images_add_number": true,
        "save_images_replace_action": "Replace",
        "grid_save": true,
        "grid_format": "png",
        "grid_extended_filename": false,
        "grid_only_if_multiple": true,
        "grid_prevent_empty_spots": false,
        "grid_zip_filename_pattern": "",
        "n_rows": -1,
        "font": "",
        "grid_text_active_color": "#000000",
        "grid_text_inactive_color": "#999999",
        "grid_background_color": "#ffffff",
        "save_images_before_face_restoration": false,
        "save_images_before_highres_fix": false,
        "save_images_before_color_correction": false,
        "save_mask": false,
        "save_mask_composite": false,
        "jpeg_quality": 80,
        "webp_lossless": false,
        "export_for_4chan": true,
        "img_downscale_threshold": 4.0,
        "target_side_length": 4000,
        "img_max_size_mp": 200,
        "use_original_name_batch": true,
        "use_upscaler_name_as_suffix": false,
        "save_selected_only": true,
        "save_init_img": false,
        "temp_dir": "",
        "clean_temp_dir_at_start": false,
        "save_incomplete_images": false,
        "notification_audio": true,
        "notification_volume": 100,
        "outdir_samples": "",
        "outdir_txt2img_samples": "outputs/txt2img-images",
        "outdir_img2img_samples": "outputs/img2img-images",
        "outdir_extras_samples": "outputs/extras-images",
        "outdir_grids": "",
        "outdir_txt2img_grids": "outputs/txt2img-grids",
        "outdir_img2img_grids": "outputs/img2img-grids",
        "outdir_save": "log/images",
        "outdir_init_images": "outputs/init-images",
        "save_to_dirs": true,
        "grid_save_to_dirs": true,
        "use_save_to_dirs_for_ui": false,
        "directories_filename_pattern": "[date]",
        "directories_max_prompt_words": 8,
        "ESRGAN_tile": 192,
        "ESRGAN_tile_overlap": 8,
        "realesrgan_enabled_models": [
            "R-ESRGAN 4x+",
            "R-ESRGAN 4x+ Anime6B"
        ],
        "upscaler_for_img2img": null,
        "face_restoration": false,
        "face_restoration_model": "CodeFormer",
        "code_former_weight": 0.5,
        "face_restoration_unload": false,
        "auto_launch_browser": "Local",
        "enable_console_prompts": false,
        "show_warnings": false,
        "show_gradio_deprecation_warnings": true,
        "memmon_poll_rate": 8,
        "samples_log_stdout": false,
        "multiple_tqdm": true,
        "print_hypernet_extra": false,
        "list_hidden_files": true,
        "disable_mmap_load_safetensors": false,
        "hide_ldm_prints": true,
        "dump_stacks_on_signal": false,
        "api_enable_requests": true,
        "api_forbid_local_requests": true,
        "api_useragent": "",
        "unload_models_when_training": false,
        "pin_memory": false,
        "save_optimizer_state": false,
        "save_training_settings_to_txt": true,
        "dataset_filename_word_regex": "",
        "dataset_filename_join_string": " ",
        "training_image_repeats_per_epoch": 1,
        "training_write_csv_every": 500,
        "training_xattention_optimizations": false,
        "training_enable_tensorboard": false,
        "training_tensorboard_save_images": false,
        "training_tensorboard_flush_every": 120,
        "sd_model_checkpoint": "AOM3A1B_orangemixs.safetensors [5493a0ec49]",
        "sd_checkpoints_limit": 1,
        "sd_checkpoints_keep_in_cpu": true,
        "sd_checkpoint_cache": 0,
        "sd_unet": "Automatic",
        "enable_quantization": false,
        "enable_emphasis": true,
        "enable_batch_seeds": true,
        "comma_padding_backtrack": 20,
        "CLIP_stop_at_last_layers": 1,
        "upcast_attn": true,
        "randn_source": "GPU",
        "tiling": false,
        "hires_fix_refiner_pass": "second pass",
        "sdxl_crop_top": 0,
        "sdxl_crop_left": 0,
        "sdxl_refiner_low_aesthetic_score": 2.5,
        "sdxl_refiner_high_aesthetic_score": 6.0,
        "sd_vae_checkpoint_cache": 1,
        "sd_vae": "orangemix.vae.pt",
        "sd_vae_overrides_per_model_preferences": true,
        "auto_vae_precision": true,
        "sd_vae_encode_method": "Full",
        "sd_vae_decode_method": "Full",
        "inpainting_mask_weight": 1.0,
        "initial_noise_multiplier": 1.0,
        "img2img_extra_noise": 0.0,
        "img2img_color_correction": false,
        "img2img_fix_steps": false,
        "img2img_background_color": "#ffffff",
        "img2img_editor_height": 720,
        "img2img_sketch_default_brush_color": "#ffffff",
        "img2img_inpaint_mask_brush_color": "#ffffff",
        "img2img_inpaint_sketch_default_brush_color": "#ffffff",
        "return_mask": false,
        "return_mask_composite": false,
        "img2img_batch_show_results_limit": 32,
        "cross_attention_optimization": "Automatic",
        "s_min_uncond": 0.0,
        "token_merging_ratio": 0.0,
        "token_merging_ratio_img2img": 0.0,
        "token_merging_ratio_hr": 0.0,
        "pad_cond_uncond": false,
        "persistent_cond_cache": true,
        "batch_cond_uncond": true,
        "use_old_emphasis_implementation": false,
        "use_old_karras_scheduler_sigmas": false,
        "no_dpmpp_sde_batch_determinism": false,
        "use_old_hires_fix_width_height": false,
        "dont_fix_second_order_samplers_schedule": false,
        "hires_fix_use_firstpass_conds": false,
        "use_old_scheduling": false,
        "interrogate_keep_models_in_memory": false,
        "interrogate_return_ranks": false,
        "interrogate_clip_num_beams": 1,
        "interrogate_clip_min_length": 24,
        "interrogate_clip_max_length": 48,
        "interrogate_clip_dict_limit": 1500,
        "interrogate_clip_skip_categories": [],
        "interrogate_deepbooru_score_threshold": 0.5,
        "deepbooru_sort_alpha": true,
        "deepbooru_use_spaces": true,
        "deepbooru_escape": true,
        "deepbooru_filter_tags": "",
        "extra_networks_show_hidden_directories": true,
        "extra_networks_dir_button_function": false,
        "extra_networks_hidden_models": "When searched",
        "extra_networks_default_multiplier": 1.0,
        "extra_networks_card_width": 0,
        "extra_networks_card_height": 0,
        "extra_networks_card_text_scale": 1.0,
        "extra_networks_card_show_desc": true,
        "extra_networks_card_order_field": "Path",
        "extra_networks_card_order": "Ascending",
        "extra_networks_add_text_separator": " ",
        "ui_extra_networks_tab_reorder": "",
        "textual_inversion_print_at_load": false,
        "textual_inversion_add_hashes_to_infotext": true,
        "sd_hypernetwork": "None",
        "keyedit_precision_attention": 0.1,
        "keyedit_precision_extra": 0.05,
        "keyedit_delimiters": ".,\\/!?%^*;:{}=`~() ",
        "keyedit_delimiters_whitespace": [
            "Tab",
            "Carriage Return",
            "Line Feed"
        ],
        "disable_token_counters": false,
        "return_grid": true,
        "do_not_show_images": false,
        "js_modal_lightbox": true,
        "js_modal_lightbox_initially_zoomed": true,
        "js_modal_lightbox_gamepad": false,
        "js_modal_lightbox_gamepad_repeat": 250,
        "gallery_height": "",
        "compact_prompt_box": false,
        "samplers_in_dropdown": true,
        "dimensions_and_batch_together": true,
        "sd_checkpoint_dropdown_use_short": false,
        "hires_fix_show_sampler": false,
        "hires_fix_show_prompts": false,
        "txt2img_settings_accordion": false,
        "img2img_settings_accordion": false,
        "localization": "None",
        "quicksettings_list": [
            "sd_model_checkpoint"
        ],
        "ui_tab_order": [],
        "hidden_tabs": [],
        "ui_reorder_list": [],
        "gradio_theme": "Default",
        "gradio_themes_cache": true,
        "show_progress_in_title": true,
        "send_seed": true,
        "send_size": true,
        "enable_pnginfo": true,
        "save_txt": false,
        "add_model_name_to_info": true,
        "add_model_hash_to_info": true,
        "add_vae_name_to_info": true,
        "add_vae_hash_to_info": true,
        "add_user_name_to_info": false,
        "add_version_to_infotext": true,
        "disable_weights_auto_swap": true,
        "infotext_skip_pasting": [],
        "infotext_styles": "Apply if any",
        "show_progressbar": true,
        "live_previews_enable": false,
        "live_previews_image_format": "png",
        "show_progress_grid": true,
        "show_progress_every_n_steps": 5,
        "show_progress_type": "Approx NN",
        "live_preview_allow_lowvram_full": false,
        "live_preview_content": "Prompt",
        "live_preview_refresh_period": 300.0,
        "live_preview_fast_interrupt": false,
        "hide_samplers": [],
        "eta_ddim": 0.0,
        "eta_ancestral": 1.0,
        "ddim_discretize": "uniform",
        "s_churn": 0.0,
        "s_tmin": 0.0,
        "s_tmax": 0.0,
        "s_noise": 1.0,
        "k_sched_type": "Automatic",
        "sigma_min": 0.0,
        "sigma_max": 0.0,
        "rho": 0.0,
        "eta_noise_seed_delta": 0,
        "always_discard_next_to_last_sigma": false,
        "sgm_noise_multiplier": false,
        "uni_pc_variant": "bh1",
        "uni_pc_skip_type": "time_uniform",
        "uni_pc_order": 3,
        "uni_pc_lower_order_final": true,
        "postprocessing_enable_in_main_ui": [],
        "postprocessing_operation_order": [],
        "upscaling_max_images_in_cache": 5,
        "postprocessing_existing_caption_action": "Ignore",
        "disabled_extensions": [],
        "disable_all_extensions": "none",
        "restore_config_state_file": "",
        "sd_checkpoint_hash": "5493a0ec491f5961dbdc1c861404088a6ae9bd4007f6a3a7c5dee8789cdc1361",
        "ldsr_steps": 100,
        "ldsr_cached": false,
        "SCUNET_tile": 256,
        "SCUNET_tile_overlap": 8,
        "SWIN_tile": 192,
        "SWIN_tile_overlap": 8,
        "SWIN_torch_compile": false,
        "hypertile_enable_unet": false,
        "hypertile_enable_unet_secondpass": false,
        "hypertile_max_depth_unet": 3,
        "hypertile_max_tile_unet": 256,
        "hypertile_swap_size_unet": 3,
        "hypertile_enable_vae": false,
        "hypertile_max_depth_vae": 3,
        "hypertile_max_tile_vae": 128,
        "hypertile_swap_size_vae": 3,
        "control_net_detectedmap_dir": "detected_maps",
        "control_net_models_path": "",
        "control_net_modules_path": "",
        "control_net_unit_count": 3,
        "control_net_model_cache_size": 1,
        "control_net_inpaint_blur_sigma": 7,
        "control_net_no_high_res_fix": false,
        "control_net_no_detectmap": false,
        "control_net_detectmap_autosaving": false,
        "control_net_allow_script_control": false,
        "control_net_sync_field_args": true,
        "controlnet_show_batch_images_in_ui": false,
        "controlnet_increment_seed_during_batch": false,
        "controlnet_disable_openpose_edit": false,
        "controlnet_ignore_noninpaint_mask": false,
        "lora_functional": false,
        "sd_lora": "None",
        "lora_preferred_name": "Alias from file",
        "lora_add_hashes_to_infotext": true,
        "lora_show_all": false,
        "lora_hide_unknown_for_versions": [],
        "lora_in_memory_limit": 0,
        "extra_options_txt2img": [],
        "extra_options_img2img": [],
        "extra_options_cols": 1,
        "extra_options_accordion": false,
        "canvas_hotkey_zoom": "Alt",
        "canvas_hotkey_adjust": "Ctrl",
        "canvas_hotkey_move": "F",
        "canvas_hotkey_fullscreen": "S",
        "canvas_hotkey_reset": "R",
        "canvas_hotkey_overlap": "O",
        "canvas_show_tooltip": true,
        "canvas_auto_expand": true,
        "canvas_blur_prompt": false,
        "canvas_disabled_functions": [
            "Overlap"
        ]
    },
    "Startup": {
        "total": 11.257086753845215,
        "records": {
            "initial startup": 0.02352619171142578,
            "prepare environment/checks": 3.457069396972656e-05,
            "prepare environment/git version info": 0.009780406951904297,
            "prepare environment/torch GPU test": 2.7273693084716797,
            "prepare environment/clone repositores": 0.038356781005859375,
            "prepare environment/run extensions installers/sd-webui-controlnet": 0.14071893692016602,
            "prepare environment/run extensions installers/ultimate-upscale-for-automatic1111": 2.288818359375e-05,
            "prepare environment/run extensions installers/clip-interrogator-ext": 2.8869497776031494,
            "prepare environment/run extensions installers/latent-upscale": 5.626678466796875e-05,
            "prepare environment/run extensions installers": 3.0277533531188965,
            "prepare environment": 5.820652484893799,
            "launcher": 0.0008344650268554688,
            "import torch": 2.0337331295013428,
            "import gradio": 0.6256029605865479,
            "setup paths": 0.9430902004241943,
            "import ldm": 0.0025310516357421875,
            "import sgm": 2.384185791015625e-06,
            "initialize shared": 0.047745466232299805,
            "other imports": 0.5719733238220215,
            "opts onchange": 0.0002732276916503906,
            "setup SD model": 0.0003185272216796875,
            "setup codeformer": 0.07199668884277344,
            "setup gfpgan": 0.009232521057128906,
            "set samplers": 2.8371810913085938e-05,
            "list extensions": 0.0010488033294677734,
            "restore config state file": 5.4836273193359375e-06,
            "list SD models": 0.004712820053100586,
            "list localizations": 0.0001246929168701172,
            "load scripts/custom_code.py": 0.001154184341430664,
            "load scripts/img2imgalt.py": 0.0002789497375488281,
            "load scripts/loopback.py": 0.0001888275146484375,
            "load scripts/outpainting_mk_2.py": 0.0002484321594238281,
            "load scripts/poor_mans_outpainting.py": 0.0001766681671142578,
            "load scripts/postprocessing_caption.py": 0.0001506805419921875,
            "load scripts/postprocessing_codeformer.py": 0.00015020370483398438,
            "load scripts/postprocessing_create_flipped_copies.py": 0.00014519691467285156,
            "load scripts/postprocessing_focal_crop.py": 0.00043463706970214844,
            "load scripts/postprocessing_gfpgan.py": 0.00014495849609375,
            "load scripts/postprocessing_split_oversized.py": 0.00015592575073242188,
            "load scripts/postprocessing_upscale.py": 0.00021982192993164062,
            "load scripts/processing_autosized_crop.py": 0.0001621246337890625,
            "load scripts/prompt_matrix.py": 0.0001780986785888672,
            "load scripts/prompts_from_file.py": 0.0001876354217529297,
            "load scripts/sd_upscale.py": 0.00016450881958007812,
            "load scripts/xyz_grid.py": 0.0010995864868164062,
            "load scripts/ldsr_model.py": 0.11085081100463867,
            "load scripts/lora_script.py": 0.05980086326599121,
            "load scripts/scunet_model.py": 0.011086463928222656,
            "load scripts/swinir_model.py": 0.010489225387573242,
            "load scripts/hotkey_config.py": 0.0001678466796875,
            "load scripts/extra_options_section.py": 0.00020551681518554688,
            "load scripts/hypertile_script.py": 0.019654512405395508,
            "load scripts/hypertile_xyz.py": 8.058547973632812e-05,
            "load scripts/clip_interrogator_ext.py": 0.02592325210571289,
            "load scripts/latent_upscale.py": 0.0007441043853759766,
            "load scripts/adapter.py": 0.0003275871276855469,
            "load scripts/api.py": 0.12074923515319824,
            "load scripts/batch_hijack.py": 0.0005114078521728516,
            "load scripts/cldm.py": 0.00022983551025390625,
            "load scripts/controlmodel_ipadapter.py": 0.00032711029052734375,
            "load scripts/controlnet.py": 0.0494229793548584,
            "load scripts/controlnet_diffusers.py": 0.0001556873321533203,
            "load scripts/controlnet_lllite.py": 0.0001430511474609375,
            "load scripts/controlnet_lora.py": 0.00012731552124023438,
            "load scripts/controlnet_model_guess.py": 0.00011944770812988281,
            "load scripts/controlnet_version.py": 0.0001239776611328125,
            "load scripts/enums.py": 0.0003447532653808594,
            "load scripts/external_code.py": 6.246566772460938e-05,
            "load scripts/global_state.py": 0.0003178119659423828,
            "load scripts/hook.py": 0.0002903938293457031,
            "load scripts/infotext.py": 9.560585021972656e-05,
            "load scripts/logging.py": 0.00016260147094726562,
            "load scripts/lvminthin.py": 0.0001952648162841797,
            "load scripts/movie2movie.py": 0.00022029876708984375,
            "load scripts/processor.py": 0.00023818016052246094,
            "load scripts/utils.py": 0.00011324882507324219,
            "load scripts/xyz_grid_support.py": 0.0003902912139892578,
            "load scripts/ultimate-upscale.py": 0.00045228004455566406,
            "load scripts/refiner.py": 0.00011444091796875,
            "load scripts/seed.py": 0.00012302398681640625,
            "load scripts": 0.41962695121765137,
            "load upscalers": 0.001577138900756836,
            "refresh VAE": 0.0006160736083984375,
            "refresh textual inversion templates": 2.86102294921875e-05,
            "scripts list_optimizers": 0.00027680397033691406,
            "scripts list_unets": 4.76837158203125e-06,
            "reload hypernetworks": 0.0027685165405273438,
            "initialize extra networks": 0.004837512969970703,
            "scripts before_ui_callback": 0.00041604042053222656,
            "create ui": 0.4426920413970947,
            "gradio launch": 0.23865938186645508,
            "add APIs": 0.003912210464477539,
            "app_started_callback/lora_script.py": 0.0001537799835205078,
            "app_started_callback/clip_interrogator_ext.py": 0.0003566741943359375,
            "app_started_callback/api.py": 0.0010819435119628906,
            "app_started_callback": 0.001596689224243164
        }
    },
    "Packages": [
        "absl-py==2.0.0",
        "accelerate==0.21.0",
        "addict==2.4.0",
        "aenum==3.1.15",
        "aiofiles==23.2.1",
        "aiohttp==3.9.1",
        "aiosignal==1.3.1",
        "altair==5.2.0",
        "antlr4-python3-runtime==4.9.3",
        "anyio==3.7.1",
        "attrs==23.1.0",
        "basicsr==1.4.2",
        "beautifulsoup4==4.12.2",
        "blendmodes==2022",
        "boltons==23.1.1",
        "cachetools==5.3.2",
        "certifi==2022.12.7",
        "cffi==1.16.0",
        "charset-normalizer==2.1.1",
        "clean-fid==0.1.35",
        "click==8.1.7",
        "clip-interrogator==0.6.0",
        "clip==1.0",
        "contourpy==1.2.0",
        "cssselect2==0.7.0",
        "cycler==0.12.1",
        "deprecation==2.1.0",
        "einops==0.4.1",
        "facexlib==0.3.0",
        "fastapi==0.94.0",
        "ffmpy==0.3.1",
        "filelock==3.9.0",
        "filterpy==1.4.5",
        "flatbuffers==23.5.26",
        "fonttools==4.46.0",
        "frozenlist==1.4.0",
        "fsspec==2023.12.1",
        "ftfy==6.1.3",
        "future==0.18.3",
        "fvcore==0.1.5.post20221221",
        "gdown==4.7.1",
        "gfpgan==1.3.8",
        "gitdb==4.0.11",
        "gitpython==3.1.32",
        "google-auth-oauthlib==1.1.0",
        "google-auth==2.25.1",
        "gradio-client==0.5.0",
        "gradio==3.41.2",
        "grpcio==1.60.0",
        "h11==0.12.0",
        "httpcore==0.15.0",
        "httpx==0.24.1",
        "huggingface-hub==0.19.4",
        "idna==3.4",
        "imageio==2.33.0",
        "importlib-metadata==7.0.0",
        "importlib-resources==6.1.1",
        "inflection==0.5.1",
        "iopath==0.1.9",
        "jinja2==3.1.2",
        "jsonmerge==1.8.0",
        "jsonschema-specifications==2023.11.2",
        "jsonschema==4.20.0",
        "kiwisolver==1.4.5",
        "kornia==0.6.7",
        "lark==1.1.2",
        "lazy-loader==0.3",
        "lightning-utilities==0.10.0",
        "llvmlite==0.41.1",
        "lmdb==1.4.1",
        "lpips==0.1.4",
        "lxml==4.9.3",
        "markdown==3.5.1",
        "markupsafe==2.1.3",
        "matplotlib==3.8.2",
        "mediapipe==0.10.8",
        "mpmath==1.2.1",
        "multidict==6.0.4",
        "networkx==3.0rc1",
        "numba==0.58.1",
        "numpy==1.23.5",
        "oauthlib==3.2.2",
        "omegaconf==2.2.3",
        "open-clip-torch==2.20.0",
        "opencv-contrib-python==4.8.1.78",
        "opencv-python==4.8.1.78",
        "orjson==3.9.10",
        "packaging==23.2",
        "pandas==2.1.4",
        "piexif==1.1.3",
        "pillow==9.5.0",
        "pip==23.1.2",
        "platformdirs==4.1.0",
        "portalocker==2.8.2",
        "protobuf==3.20.0",
        "psutil==5.9.5",
        "pyasn1-modules==0.3.0",
        "pyasn1==0.5.1",
        "pycparser==2.21",
        "pydantic==1.10.13",
        "pydub==0.25.1",
        "pyparsing==3.1.1",
        "pysocks==1.7.1",
        "python-dateutil==2.8.2",
        "python-multipart==0.0.6",
        "pytorch-lightning==1.9.4",
        "pytorch-triton-rocm==2.1.0+dafe145982",
        "pytz==2023.3.post1",
        "pywavelets==1.5.0",
        "pyyaml==6.0.1",
        "realesrgan==0.3.0",
        "referencing==0.32.0",
        "regex==2023.10.3",
        "reportlab==4.0.7",
        "requests-oauthlib==1.3.1",
        "requests==2.28.1",
        "resize-right==0.0.2",
        "rpds-py==0.13.2",
        "rsa==4.9",
        "safetensors==0.3.1",
        "scikit-image==0.21.0",
        "scipy==1.11.4",
        "semantic-version==2.10.0",
        "sentencepiece==0.1.99",
        "setuptools==65.5.0",
        "six==1.16.0",
        "smmap==5.0.1",
        "sniffio==1.3.0",
        "sounddevice==0.4.6",
        "soupsieve==2.5",
        "starlette==0.26.1",
        "svglib==1.5.1",
        "sympy==1.11.1",
        "tabulate==0.9.0",
        "tb-nightly==2.16.0a20231208",
        "tensorboard-data-server==0.7.2",
        "termcolor==2.4.0",
        "tf-keras-nightly==2.16.0.dev2023120810",
        "tifffile==2023.9.26",
        "timm==0.9.2",
        "tinycss2==1.2.1",
        "tokenizers==0.13.3",
        "tomesd==0.1.3",
        "tomli==2.0.1",
        "toolz==0.12.0",
        "torch==2.2.0.dev20231208+rocm5.6",
        "torchdiffeq==0.2.3",
        "torchmetrics==1.2.1",
        "torchsde==0.2.6",
        "torchvision==0.17.0.dev20231208+rocm5.6",
        "tqdm==4.66.1",
        "trampoline==0.1.2",
        "transformers==4.30.2",
        "typing-extensions==4.8.0",
        "tzdata==2023.3",
        "urllib3==1.26.13",
        "uvicorn==0.24.0.post1",
        "wcwidth==0.2.12",
        "webencodings==0.5.1",
        "websockets==11.0.3",
        "werkzeug==3.0.1",
        "yacs==0.1.8",
        "yapf==0.40.2",
        "yarl==1.9.4",
        "zipp==3.17.0"
    ]
}
### What browsers do you use to access the UI ?
Mozilla Firefox
### Console logs
```Shell
❯ ./webui.sh                                                         (base) 
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on ciel user
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Using TCMalloc: libtcmalloc_minimal.so.4
Python 3.11.4 (main, Jul  5 2023, 13:45:01) [GCC 11.2.0]
Version: v1.7.0-RC-5-gf92d6149
Commit hash: f92d61497a426a19818625c3ccdaae9beeb82b31
Launching Web UI with arguments: 
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
2023-12-09 17:08:09,876 - ControlNet - INFO - ControlNet v1.1.422
ControlNet preprocessor location: /home/ciel/stable-diffusion/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
2023-12-09 17:08:09,921 - ControlNet - INFO - ControlNet v1.1.422
Loading weights [5493a0ec49] from /home/ciel/stable-diffusion/stable-diffusion-webui/models/Stable-diffusion/AOM3A1B_orangemixs.safetensors
Running on local URL:  http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: /home/ciel/stable-diffusion/stable-diffusion-webui/configs/v1-inference.yaml
Startup time: 8.9s (prepare environment: 4.0s, import torch: 2.0s, import gradio: 0.5s, setup paths: 0.8s, other imports: 0.5s, load scripts: 0.4s, create ui: 0.4s, gradio launch: 0.2s).
Loading VAE weights specified in settings: /home/ciel/stable-diffusion/stable-diffusion-webui/models/VAE/orangemix.vae.pt
Applying attention optimization: Doggettx... done.
Model loaded in 2.6s (load weights from disk: 0.6s, create model: 0.2s, apply weights to model: 1.4s, load VAE: 0.2s, calculate empty prompt: 0.1s).
Traceback (most recent call last):
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/modules/ui_prompt_styles.py", line 27, in save_style
    shared.prompt_styles.save_styles(shared.styles_filename)
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py", line 212, in save_styles
    style_paths.remove("do_not_save")
KeyError: 'do_not_save'
```
### Additional information
I'm running dev branch due to the Navi3 bug, checking out master after launch seems to result in the same issue, but it could have just been jit-ed, didn't test very in-depth | null | 
	https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14276 | null | 
	{'base_commit': 'f92d61497a426a19818625c3ccdaae9beeb82b31', 'files': [{'path': 'modules/styles.py', 'status': 'modified', 'Loc': {"('StyleDatabase', '__init__', 95)": {'mod': [101, 102, 103, 104]}, "('StyleDatabase', None, 94)": {'mod': [158, 159, 160, 161]}, "('StyleDatabase', 'get_style_paths', 158)": {'mod': [175, 177]}, "('StyleDatabase', 'save_styles', 195)": {'mod': [199, 200, 201, 202, 204, 205, 206, 207, 208, 209, 211, 212]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "modules/styles.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	home-assistant | 
	core | 
	c3e9c1a7e8fdc949b8e638d79ab476507ff92f18 | 
	https://github.com/home-assistant/core/issues/60067 | 
	integration: environment_canada
by-code-owner | 
	Environment Canada (EC) radar integration slowing Environment Canada servers | 
	### The problem
The `config_flow` change to the EC integration did not change the way the underlying radar retrieval works, but did enable radar for everyone. As a result the EC servers are getting far too many requests. We (the codeowners) have been working with EC to diagnose this issue and understand their concerns. 
We are doing two things (PR is in progress). Caching requests to the EC servers. Work so far shows that through caching we can reduce the number of requests by over 90%. This fix is in the integration dependency library.
Second, we are creating the radar (camera) entity with `_attr_entity_registry_enabled_default = False` so that new radar entities are disabled by default. Many people use the integration for forecast only.
Last, EC is putting a policy in place such that User Agent needs to be filled in to represent the calling library.
### What version of Home Assistant Core has the issue?
2021.12.0.dev0
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Core
### Integration causing the issue
Environment Canada
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/environment_canada/
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
Quote from one of the email exchanges with EC:
> What we observed is 1350 unique IP addresses using this code which made 23.5 million requests over 5 days.
In order to respond to EC as quickly as possible we are asking for consideration to release the PR, when available, in the next dot release. | null | 
	https://github.com/home-assistant/core/pull/60087 | null | 
	{'base_commit': 'c3e9c1a7e8fdc949b8e638d79ab476507ff92f18', 'files': [{'path': 'homeassistant/components/environment_canada/camera.py', 'status': 'modified', 'Loc': {"('ECCamera', '__init__', 49)": {'add': [57]}}}, {'path': 'homeassistant/components/environment_canada/manifest.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}}}, {'path': 'requirements_all.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [603]}}}, {'path': 'requirements_test_all.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [372]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "2",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "homeassistant/components/environment_canada/camera.py",
    "homeassistant/components/environment_canada/manifest.json"
  ],
  "doc": [],
  "test": [],
  "config": [
    "requirements_all.txt",
    "requirements_test_all.txt"
  ],
  "asset": []
} | 1 | 
| 
	abi | 
	screenshot-to-code | 
	939539611f0cad12056f7be78ef6b2128b90b779 | 
	https://github.com/abi/screenshot-to-code/issues/336 | 
	bug
p2 | 
	Handle Nones in chunk.choices[0].delta | 
	
There is a successful request for the openai interface, but it seems that no code is generated.
backend-1   | ERROR:    Exception in ASGI application
backend-1   | Traceback (most recent call last):
backend-1   |   File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 250, in run_asgi
backend-1   |     result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
backend-1   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1   |   File "/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
backend-1   |     return await self.app(scope, receive, send)
backend-1   |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1   |   File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 276, in __call__
backend-1   |     await super().__call__(scope, receive, send)
backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 122, in __call__
backend-1   |     await self.middleware_stack(scope, receive, send)
backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 149, in __call__
backend-1   |     await self.app(scope, receive, send)
backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/cors.py", line 75, in __call__
backend-1   |     await self.app(scope, receive, send)
backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
backend-1   |     raise exc
backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
backend-1   |     await self.app(scope, receive, sender)
backend-1   |   File "/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
backend-1   |     raise e
backend-1   |   File "/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
backend-1   |     await self.app(scope, receive, send)
backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 718, in __call__
backend-1   |     await route.handle(scope, receive, send)
backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 341, in handle
backend-1   |     await self.app(scope, receive, send)
backend-1   |   File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 82, in app
backend-1   |     await func(session)
backend-1   |   File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 289, in app
backend-1   |     await dependant.call(**values)
backend-1   |   File "/app/routes/generate_code.py", line 251, in stream_code
backend-1   |     completion = await stream_openai_response(
backend-1   |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1   |   File "/app/llm.py", line 62, in stream_openai_response
backend-1   |     content = chunk.choices[0].delta.content or ""
backend-1   |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1   | AttributeError: 'NoneType' object has no attribute 'content'
backend-1   | INFO:     connection closed
 | null | 
	https://github.com/abi/screenshot-to-code/pull/341 | null | 
	{'base_commit': '939539611f0cad12056f7be78ef6b2128b90b779', 'files': [{'path': 'backend/llm.py', 'status': 'modified', 'Loc': {"(None, 'stream_openai_response', 32)": {'mod': [62, 63, 64]}}}, {'path': 'frontend/package.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [49]}}}, {'path': 'frontend/src/App.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [381]}}}, {'path': 'frontend/yarn.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5644, 5939]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "backend/llm.py",
    "frontend/src/App.tsx",
    "frontend/package.json"
  ],
  "doc": [],
  "test": [],
  "config": [
    "frontend/yarn.lock"
  ],
  "asset": []
} | 1 | 
| 
	Significant-Gravitas | 
	AutoGPT | 
	bf895eb656dee9084273cd36395828bd06aa231d | 
	https://github.com/Significant-Gravitas/AutoGPT/issues/6 | 
	enhancement
good first issue
API costs | 
	Make Auto-GPT aware of it's running cost | 
	Auto-GPT is expensive to run due to GPT-4's API cost.
We could experiment with making it aware of this fact, by tracking tokens as they are used and converting to a dollar cost. 
This could also be displayed to the user to help them be more aware of exactly how much they are spending. | null | 
	https://github.com/Significant-Gravitas/AutoGPT/pull/762 | null | 
	{'base_commit': 'bf895eb656dee9084273cd36395828bd06aa231d', 'files': [{'path': 'autogpt/chat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'chat_with_ai', 54)": {'add': [135]}}}, {'path': 'autogpt/config/ai_config.py', 'status': 'modified', 'Loc': {"('AIConfig', None, 21)": {'add': [28]}, "('AIConfig', '__init__', 31)": {'add': [40, 48], 'mod': [32]}, "('AIConfig', 'load', 53)": {'add': [75], 'mod': [55, 77]}, "('AIConfig', 'save', 79)": {'add': [94]}, "('AIConfig', 'construct_full_prompt', 99)": {'add': [149], 'mod': [110]}}}, {'path': 'autogpt/llm_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9]}, "(None, 'create_chat_completion', 56)": {'mod': [99, 107]}, "(None, 'create_embedding_with_ada', 156)": {'mod': [162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172]}}}, {'path': 'autogpt/memory/base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'get_ada_embedding', 11)": {'mod': [13, 14, 15, 16, 17, 18, 19, 20, 21]}}}, {'path': 'autogpt/prompts/prompt.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, 'construct_main_ai_config', 78)": {'add': [88, 100, 109]}}}, {'path': 'autogpt/setup.py', 'status': 'modified', 'Loc': {"(None, 'generate_aiconfig_automatic', 139)": {'add': [194], 'mod': [196]}, "(None, 'generate_aiconfig_manual', 70)": {'mod': [136]}}}, {'path': 'tests/unit/test_commands.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 10]}, "(None, 'test_make_agent', 11)": {'mod': [17, 20]}}}, {'path': 'tests/unit/test_setup.py', 'status': 'modified', 'Loc': {"('TestAutoGPT', 'test_generate_aiconfig_automatic_fallback', 39)": {'add': [46]}, "('TestAutoGPT', 'test_prompt_user_manual_mode', 57)": {'add': [64]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "autogpt/chat.py",
    "autogpt/prompts/prompt.py",
    "autogpt/config/ai_config.py",
    "autogpt/memory/base.py",
    "autogpt/setup.py",
    "autogpt/llm_utils.py"
  ],
  "doc": [],
  "test": [
    "tests/unit/test_commands.py",
    "tests/unit/test_setup.py"
  ],
  "config": [],
  "asset": []
} | 1 | 
| 
	yt-dlp | 
	yt-dlp | 
	3e01ce744a981d8f19ae77ec695005e7000f4703 | 
	https://github.com/yt-dlp/yt-dlp/issues/5855 | 
	bug | 
	Generic extractor can crash if Brotli is not available | 
	### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Testing #5851 in a configuration where no Brotli decoder was available showed the crash in the log.
The problem is this extractor code:
https://github.com/yt-dlp/yt-dlp/blob/1fc089143c79b02b8373ae1d785d5e3a68635d4d/yt_dlp/extractor/generic.py#L2306-L2318
Normally there is a check for a supported Brotli encoder (using `SUPPORTED_ENCODINGS`). Specifying `*` in the `Accept-encoding` header bypasses that check.
However, I don't think that `*` does what is wanted according to the comments in the above code. The code wants to get the resource with no decoding (because decoding in yt-dl[p] starts by reading the entire response), but `*` still allows the server to send a compressed response. What is wanted is the `identity` encoding which is the default if no other encoding is specified. Or, to re-cast the decoding process so that the whole response stream is not read before decoding, but that means creating stream decoding methods for Brotli and zlib.
Also, there could be a check for a supported encoding in `YoutubeDLHandler.http_response()`, perhaps synthesizing 416 or 406 id the server has sent an encoding that isn't supported, instead of the crash seen here.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-F', 'https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2022.11.11 [8b644025b] (source)
[debug] Lazy loading extractors is disabled
[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']
[debug] Git HEAD: c73355510
[debug] Python 3.9.15 (CPython i686 32bit) - Linux-4.4.0-210-generic-i686-with-glibc2.23 (OpenSSL 1.1.1s  1 Nov 2022, glibc 2.23)
[debug] exe versions: ffmpeg 4.3, ffprobe 4.3
[debug] Optional libraries: Cryptodome-3.11.0, certifi-2019.11.28, secretstorage-3.2.0, sqlite3-2.6.0
[debug] Proxy map: {}
[debug] Loaded 1735 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2022.11.11, Current version: 2022.11.11
yt-dlp is up to date (2022.11.11)
[generic] Extracting URL: https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867
[generic] cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867: Downloading webpage
ERROR: 'NoneType' object has no attribute 'decompress'
Traceback (most recent call last):
  File "/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py", line 1495, in wrapper
    return func(self, *args, **kwargs)
  File "/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py", line 1571, in __extract_info
    ie_result = ie.extract(url)
  File "/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py", line 680, in extract
    ie_result = self._real_extract(url)
  File "/home/df/Documents/src/yt-dlp/yt_dlp/extractor/generic.py", line 2314, in _real_extract
    full_response = self._request_webpage(url, video_id, headers={
  File "/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py", line 807, in _request_webpage
    return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
  File "/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py", line 3719, in urlopen
    return self._opener.open(req, timeout=self._socket_timeout)
  File "/usr/lib/python3.9/urllib/request.py", line 523, in open
    response = meth(req, response)
  File "/home/df/Documents/src/yt-dlp/yt_dlp/utils.py", line 1452, in http_response
    io.BytesIO(self.brotli(resp.read())), old_resp.headers, old_resp.url, old_resp.code)
  File "/home/df/Documents/src/yt-dlp/yt_dlp/utils.py", line 1389, in brotli
    return brotli.decompress(data)
AttributeError: 'NoneType' object has no attribute 'decompress'
```
 | null | null | null | 
	{'base_commit': '3e01ce744a981d8f19ae77ec695005e7000f4703', 'files': [{'path': 'yt_dlp/extractor/generic.py', 'status': 'modified', 'Loc': {"('GenericIE', None, 42)": {'add': [2156]}, "('GenericIE', '_real_extract', 2276)": {'mod': [2315]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "2",
  "loc_way": "commit",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "yt_dlp/extractor/generic.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | null | 
| 
	CorentinJ | 
	Real-Time-Voice-Cloning | 
	ded7b37234e229d9bde0a9a506f7c65605803731 | 
	https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/543 | 
	Lack of pre-compiled results in lost interest | 
	so I know the first thing people are going to say is, this isn't an issue. However, it is. by not having a precompiled version to download over half the people that find their way to this GitHub are going to lose interest.  Honestly, I'm one of them. I attempted to compile it but then I saw that I had to track down each module for this, yeah quickly drove me away from it.  all I wanted to do was mess around and see what it can do. even if the results arent mind-blowing the concept interests me.  but due to not having a ready to use executable I like many others I'm sure of, have decided it isn't even worth messing with.  | null | 
	https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/546 | null | 
	{'base_commit': 'ded7b37234e229d9bde0a9a506f7c65605803731', 'files': [{'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [11]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "toolbox/ui.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | |
| 
	scikit-learn | 
	scikit-learn | 
	96b5814de70ad2435b6db5f49b607b136921f701 | 
	https://github.com/scikit-learn/scikit-learn/issues/26948 | 
	Documentation | 
	The copy button on install copies an extensive comman including env activation | 
	### Describe the issue linked to the documentation
https://scikit-learn.org/stable/install.html
Above link will lead you to the sklearn downlanding for link . 
when you link copy link button it will copy 
`python3 -m venv sklearn-venvpython -m venv sklearn-venvpython -m venv sklearn-venvsource sklearn-venv/bin/activatesource sklearn-venv/bin/activatesklearn-venv\Scripts\activatepip install -U scikit-learnpip install -U scikit-learnpip install -U scikit-learnpip3 install -U scikit-learnconda create -n sklearn-env -c conda-forge scikit-learnconda activate sklearn-env`
instead of  `pip3 install -U scikit-learn`
if this is the issue so please issue i want to create a pull request for it and tell in which file this issue reside
Thanks
### Suggest a potential alternative/fix
By resoving above issue | null | 
	https://github.com/scikit-learn/scikit-learn/pull/27052 | null | 
	{'base_commit': '96b5814de70ad2435b6db5f49b607b136921f701', 'files': [{'path': 'doc/install.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107]}}}, {'path': 'doc/themes/scikit-learn-modern/static/css/theme.css', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1216, 1220, 1225, 1233, 1236, 1239, 1243, 1247], 'mod': [1208, 1209]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "2",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "doc/themes/scikit-learn-modern/static/css/theme.css"
  ],
  "doc": [
    "doc/install.rst"
  ],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	keras-team | 
	keras | 
	49b9682b3570211c7d8f619f8538c08fd5d8bdad | 
	https://github.com/keras-team/keras/issues/10036 | 
	[API DESIGN REVIEW] sample weight in ImageDataGenerator.flow | 
	https://docs.google.com/document/d/14anankKROhliJCpInQH-pITatdjO9UzSN6Iz0MwcDHw/edit?usp=sharing
Makes it easy to use data augmentation when sample weights are available.  | null | 
	https://github.com/keras-team/keras/pull/10092 | null | 
	{'base_commit': '49b9682b3570211c7d8f619f8538c08fd5d8bdad', 'files': [{'path': 'keras/preprocessing/image.py', 'status': 'modified', 'Loc': {"('ImageDataGenerator', 'flow', 715)": {'add': [734, 759], 'mod': [754]}, "('NumpyArrayIterator', None, 1188)": {'add': [1201]}, "('NumpyArrayIterator', '__init__', 1216)": {'add': [1241, 1278], 'mod': [1217, 1218]}, "('NumpyArrayIterator', '_get_batches_of_transformed_samples', 1289)": {'add': [1313]}, "('ImageDataGenerator', None, 443)": {'mod': [715]}}}, {'path': 'tests/keras/preprocessing/image_test.py', 'status': 'modified', 'Loc': {"('TestImage', 'test_image_data_generator', 32)": {'add': [64]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "tests/keras/preprocessing/image_test.py",
    "keras/preprocessing/image.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | |
| 
	scrapy | 
	scrapy | 
	efb53aafdcaae058962c6189ddecb3dc62b02c31 | 
	https://github.com/scrapy/scrapy/issues/6514 | 
	enhancement | 
	Migrate from setup.py to pyproject.toml | 
	We should migrate to the modern declarative setuptools metadata approach as discussed in https://setuptools.pypa.io/en/latest/userguide/quickstart.html and https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html, but only after the 2.12 release. | null | 
	https://github.com/scrapy/scrapy/pull/6547 | null | 
	{'base_commit': 'efb53aafdcaae058962c6189ddecb3dc62b02c31', 'files': [{'path': '.bandit.yml', 'status': 'removed', 'Loc': {}}, {'path': '.bumpversion.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.coveragerc', 'status': 'removed', 'Loc': {}}, {'path': '.isort.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.pre-commit-config.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}, {'path': 'MANIFEST.in', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [13]}}}, {'path': 'pylintrc', 'status': 'removed', 'Loc': {}}, {'path': 'pytest.ini', 'status': 'removed', 'Loc': {}}, {'path': 'setup.cfg', 'status': 'removed', 'Loc': {}}, {'path': 'setup.py', 'status': 'removed', 'Loc': {}}, {'path': 'tests/test_crawler.py', 'status': 'modified', 'Loc': {"('CrawlerProcessSubprocess', 'test_shutdown_forced', 890)": {'mod': [902]}}}, {'path': 'tests/test_spiderloader/__init__.py', 'status': 'modified', 'Loc': {"('SpiderLoaderTest', 'test_syntax_error_warning', 146)": {'mod': [147, 148, 149]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [82]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "tests/test_spiderloader/__init__.py",
    ".isort.cfg",
    ".coveragerc",
    "setup.cfg",
    "setup.py",
    ".bumpversion.cfg"
  ],
  "doc": [],
  "test": [
    "tests/test_crawler.py"
  ],
  "config": [
    "pytest.ini",
    ".pre-commit-config.yaml",
    "tox.ini",
    "pylintrc",
    ".bandit.yml",
    "MANIFEST.in"
  ],
  "asset": []
} | 1 | 
| 
	fastapi | 
	fastapi | 
	c6e950dc9cacefd692dbd8987a3acd12a44b506f | 
	https://github.com/fastapi/fastapi/issues/5859 | 
	question
question-migrate | 
	FastAPI==0.89.0 Cannot use `None` as a return type when `status_code` is set to 204 with `from __future__ import annotations` | 
	### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from __future__ import annotations 
from fastapi import FastAPI
app = FastAPI()
@app.get("/", status_code=204)
def read_root() -> None:
    return {"Hello": "World"}
```
### Description
If we add:
`from __future__ import annotations`
It changes the annotations structure and the response model is `NoneType` instead of `None`, which causes validation of the `statuc_code` vs `response_model` and raises an exception.
```python
  ...
  File ".../site-packages/fastapi/routing.py", line 635, in decorator
    self.add_api_route(
  File ".../site-packages/fastapi/routing.py", line 574, in add_api_route
    route = route_class(
  File ".../site-packages/fastapi/routing.py", line 398, in __init__
    assert is_body_allowed_for_status_code(
AssertionError: Status code 204 must not have a response body
```
I am working on a fix for it right now.
### Operating System
macOS
### Operating System Details
_No response_
### FastAPI Version
0.89.0
### Python Version
3.10
### Additional Context
_No response_ | null | 
	https://github.com/fastapi/fastapi/pull/2246 | null | 
	{'base_commit': 'c6e950dc9cacefd692dbd8987a3acd12a44b506f', 'files': [{'path': '.github/workflows/preview-docs.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [],
  "doc": [
    ".github/workflows/preview-docs.yml"
  ],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	3b1b | 
	manim | 
	3938f81c1b4a5ee81d5bfc6563c17a225f7e5068 | 
	https://github.com/3b1b/manim/issues/1330 | 
	Error after installing manim | 
	I installed all manim & dependecies, but when I ran `python -m manim example_scenes.py OpeningManimExample`, I got the following error:
`Traceback (most recent call last):
  File "c:\users\jm\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\users\jm\anaconda3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\jm\Documents\work\manim_new\manim\manim.py", line 5, in <module>
    manimlib.main()
  File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\__init__.py", line 9, in main
    scenes = manimlib.extract_scene.main(config)
  File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\extract_scene.py", line 113, in main
    scenes = get_scenes_to_render(all_scene_classes, scene_config, config)
  File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\extract_scene.py", line 74, in get_scenes_to_render
    scene = scene_class(**scene_config)
  File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\scene\scene.py", line 44, in __init__
    self.window = Window(self, **self.window_config)
  File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\window.py", line 19, in __init__
    super().__init__(**kwargs)
  File "C:\Users\jm\Envs\manim.new\lib\site-packages\moderngl_window\context\pyglet\window.py", line 51, in __init__
    self._window = PygletWrapper(
  File "C:\Users\jm\Envs\manim.new\lib\site-packages\pyglet\window\win32\__init__.py", line 134, in __init__
    super(Win32Window, self).__init__(*args, **kwargs)
  File "C:\Users\jm\Envs\manim.new\lib\site-packages\pyglet\window\__init__.py", line 603, in __init__
    config = screen.get_best_config(config)
  File "C:\Users\jm\Envs\manim.new\lib\site-packages\pyglet\canvas\base.py", line 194, in get_best_config
    raise window.NoSuchConfigException()
pyglet.window.NoSuchConfigException`.
Any advice? And thank you | null | 
	https://github.com/3b1b/manim/pull/1343 | null | 
	{'base_commit': '3938f81c1b4a5ee81d5bfc6563c17a225f7e5068', 'files': [{'path': 'manimlib/window.py', 'status': 'modified', 'Loc': {"('Window', None, 10)": {'mod': [15]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "manimlib/window.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | null | |
| 
	keras-team | 
	keras | 
	84b283e6200bcb051ed976782fbb2b123bf9b8fc | 
	https://github.com/keras-team/keras/issues/19793 | 
	type:bug/performance | 
	model.keras format much slower to load | 
	Anyone experiencing unreasonably slow load times when loading a keras-format saved model? I have noticed this repeated when working in ipython, where simply instantiating a model via `Model.from_config` then calling `model.load_weights` is much (several factors) faster than loading a `model.keras` file.
My understanding is the keras format is simply a zip file with the config.json file and weights h5 (iirc) but weirdly enough, there's something not right going on while loading. | null | 
	https://github.com/keras-team/keras/pull/19852 | null | 
	{'base_commit': '84b283e6200bcb051ed976782fbb2b123bf9b8fc', 'files': [{'path': 'keras/src/saving/saving_lib.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 34]}, "(None, '_save_model_to_fileobj', 95)": {'mod': [112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 127, 128, 129, 130, 131, 132, 133, 134, 135]}, "(None, '_load_model_from_fileobj', 157)": {'mod': [175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 186, 187, 188, 189, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204]}, "(None, 'load_weights_only', 239)": {'mod': [253, 254, 255]}}}, {'path': 'keras/src/saving/saving_lib_test.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [614]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "2",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "keras/src/saving/saving_lib_test.py",
    "keras/src/saving/saving_lib.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	ansible | 
	ansible | 
	4cdb266dac852859f695b0555cbe49e58343e69a | 
	https://github.com/ansible/ansible/issues/3539 | 
	bug | 
	Bug in Conditional Include | 
	Hi,
I know that when using conditionals on an include, 'All the tasks get evaluated, but the conditional is applied to each and every task'.  However this breaks when some of that tasks register variables and other tasks in the group use those variable.
Example:
main.yml:
```
- include: extra.yml
  when: do_extra is defined
```
extra.yml:
```
- name: check if we can do task A
  shell: check_if_task_A_possible
  register: A_possible
  ignore_errors: yes
- name: task A
  shell: run_task_A
  when: A_possible.rc == 0
```
Now if you run main.yml and 'do_extra' is not defined, the run will fail on 'task A'  because when the 'when' condition is evaluated, the variable A_possible will not exist.
It is not sufficient to just add the top-level include conditional above the other because right now it looks like the two conditions are compounded and tested together which will still fail because A_possible is not defined.  I think you would have to run the file level conditional before the task level ones to keep this from happening.
 | null | 
	https://github.com/ansible/ansible/pull/20158 | null | 
	{'base_commit': '4cdb266dac852859f695b0555cbe49e58343e69a', 'files': [{'path': 'lib/ansible/modules/windows/win_robocopy.ps1', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [25, 26, 27, 28, 73, 76, 93, 94, 95, 114, 115, 167, 168]}}}, {'path': 'lib/ansible/modules/windows/win_robocopy.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [132]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "lib/ansible/modules/windows/win_robocopy.ps1",
    "lib/ansible/modules/windows/win_robocopy.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	psf | 
	requests | 
	f5dacf84468ab7e0631cc61a3f1431a32e3e143c | 
	https://github.com/psf/requests/issues/2654 | 
	Feature Request
Contributor Friendly | 
	utils.get_netrc_auth silently fails when netrc exists but fails to parse | 
	My .netrc contains a line for the github auth, [like this](https://gist.github.com/wikimatze/9790374).
It turns out that `netrc.netrc()` doesn't like that:
```
>>> from netrc import netrc
>>> netrc()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py", line 35, in __init__
    self._parse(file, fp, default_netrc)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py", line 117, in _parse
    file, lexer.lineno)
netrc.NetrcParseError: bad follower token 'protocol' (/Users/david/.netrc, line 9)
```
`get_netrc_auth` catches the `NetrcParseError` [but just ignores it](https://github.com/kennethreitz/requests/blob/master/requests/utils.py#L106).
At least having it emit a warning would have saved some hair-pulling.
 | null | 
	https://github.com/psf/requests/pull/2656 | null | 
	{'base_commit': 'f5dacf84468ab7e0631cc61a3f1431a32e3e143c', 'files': [{'path': 'requests/utils.py', 'status': 'modified', 'Loc': {"(None, 'get_netrc_auth', 70)": {'mod': [70, 108, 109]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "requests/utils.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	oobabooga | 
	text-generation-webui | 
	0877741b0350d200be7f1e6cca2780a25ee29cd0 | 
	https://github.com/oobabooga/text-generation-webui/issues/5851 | 
	bug | 
	Inference failing using ExLlamav2 version 0.0.18 | 
	### Describe the bug
Since ExLlamav2 was upgraded to version 0.0.18 in the requirements.txt, inference using it is no longer working and fails with the error in the logs below.  Reverting to version 0.0.17 resolves the issue.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
1. Install latest main branch (current commit is `26d822f64f2a029306b250b69dc58468662a4fc6`)
2. Download `GPTQ` model
3. Use `ExLlamav2_HF` model loader
4. Go to `Chat` tab and ask the AI a question.
5. Observe error, even though the model loaded successfully.
### Screenshot
_No response_
### Logs
```shell
21:35:11-061459 INFO     Loading "TheBloke_dolphin-2.6-mistral-7B-GPTQ"
21:35:13-842112 INFO     LOADER: "ExLlamav2"
21:35:13-843422 INFO     TRUNCATION LENGTH: 32768
21:35:13-844234 INFO     INSTRUCTION TEMPLATE: "Alpaca"
21:35:13-845014 INFO     Loaded the model in 2.78 seconds.
Traceback (most recent call last):
  File "/workspace/text-generation-webui/modules/text_generation.py", line 429, in generate_reply_custom
    for reply in shared.model.generate_with_streaming(question, state):
  File "/workspace/text-generation-webui/modules/exllamav2.py", line 140, in generate_with_streaming
    self.generator.begin_stream(ids, settings, loras=self.loras)
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py", line 198, in begin_stream
    self.begin_stream_ex(input_ids,
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py", line 296, in begin_stream_ex
    self._gen_begin_reuse(input_ids, gen_settings)
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py", line 624, in _gen_begin_reuse
    self._gen_begin(in_tokens, gen_settings)
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py", line 586, in _gen_begin
    self.model.forward(self.sequence_ids[:, :-1],
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py", line 694, in forward
    r, ls = self._forward(input_ids = input_ids[:, chunk_begin : chunk_end],
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py", line 776, in _forward
    x = module.forward(x, cache = cache, attn_params = attn_params, past_len = past_len, loras = loras, **kwargs)
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/attn.py", line 596, in forward
    attn_output = flash_attn_func(q_states, k_states, v_states, causal = True)
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 825, in flash_attn_func
    return FlashAttnFunc.apply(
  File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 553, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 507, in forward
    out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_forward(
  File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 51, in _flash_attn_forward
    out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.fwd(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### System Info
* Ubuntu 22.04 LTS
* Nvidia A5000 GPU on Runpod
* CUDA 12.1
 | null | null | null | 
	{'base_commit': '0877741b0350d200be7f1e6cca2780a25ee29cd0', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}, {'path': 'requirements_amd.txt', 'status': 'modified', 'Loc': {'(None, None, 45)': {'mod': [45, 46, 47]}}}, {'path': 'requirements_amd_noavx2.txt', 'status': 'modified', 'Loc': {'(None, None, 43)': {'mod': [43, 44, 45]}}}, {'path': 'requirements_apple_intel.txt', 'status': 'modified', 'Loc': {'(None, None, 41)': {'mod': [41]}}}, {'path': 'requirements_apple_silicon.txt', 'status': 'modified', 'Loc': {'(None, None, 43)': {'mod': [43]}}}, {'path': 'requirements_noavx2.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "1",
  "loc_way": "commit",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [],
  "doc": [],
  "test": [],
  "config": [
    "requirements_apple_silicon.txt",
    "requirements_amd_noavx2.txt",
    "requirements_apple_intel.txt",
    "requirements_amd.txt",
    "requirements.txt",
    "requirements_noavx2.txt"
  ],
  "asset": []
} | null | 
| 
	zylon-ai | 
	private-gpt | 
	89477ea9d3a83181b0222b732a81c71db9edf142 | 
	https://github.com/zylon-ai/private-gpt/issues/2013 | 
	bug | 
	[BUG] Another permissions error when installing with docker-compose | 
	### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
This looks similar, but not the same as #1876
As for following the instructions, I've not seen any relevant guide to installing with Docker, hence working a bit blind. 
Background: I'm trying to run this on an Asustor NAS, which offers very little ability to customize the environment. Ideally, I'd just like to be able to run this by pasting a docker-compose file into Portainer, and having it work it's magic from there:
---
```
sal@halob:/volume1/home/sal/apps/private-gpt $ docker-compose up
[+] Running 3/3
 ✔ Network private-gpt_default          Created                                                                                                                                   0.1s
 ✔ Container private-gpt-ollama-1       Created                                                                                                                                   0.1s
 ✔ Container private-gpt-private-gpt-1  Created                                                                                                                                   0.1s
Attaching to ollama-1, private-gpt-1
ollama-1       | Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
ollama-1       | Your new public key is:
ollama-1       |
ollama-1       | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNQkShAIoUDyyueUTiCHM9/AZfZ+rxnUZgmh+YByBVB
ollama-1       |
ollama-1       | 2024/07/23 23:20:28 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
ollama-1       | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:778 msg="total blobs: 0"
ollama-1       | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:785 msg="total unused blobs removed: 0"
ollama-1       | time=2024-07-23T23:20:28.317Z level=INFO source=routes.go:1143 msg="Listening on [::]:11434 (version 0.2.6)"
ollama-1       | time=2024-07-23T23:20:28.318Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1112441504/runners
private-gpt-1  | 23:20:29.406 [INFO    ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'docker']
ollama-1       | time=2024-07-23T23:20:33.589Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60102 cpu]"
ollama-1       | time=2024-07-23T23:20:33.589Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
ollama-1       | time=2024-07-23T23:20:33.589Z level=WARN source=gpu.go:225 msg="CPU does not have minimum vector extensions, GPU inference disabled" required=avx detected="no vector extensions"
ollama-1       | time=2024-07-23T23:20:33.590Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="31.1 GiB" available="28.1 GiB"
private-gpt-1  | There was a problem when trying to write in your cache folder (/nonexistent/.cache/huggingface/hub). You should set the environment variable TRANSFORMERS_CACHE to a writable directory.
private-gpt-1  | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
private-gpt-1  | 23:20:40.419 [INFO    ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
private-gpt-1  | Traceback (most recent call last):
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-1  |     return self._context[key]
private-gpt-1  |            ~~~~~~~~~~~~~^^^^^
private-gpt-1  | KeyError: <class 'private_gpt.ui.ui.PrivateGptUi'>
private-gpt-1  |
private-gpt-1  | During handling of the above exception, another exception occurred:
private-gpt-1  |
private-gpt-1  | Traceback (most recent call last):
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-1  |     return self._context[key]
private-gpt-1  |            ~~~~~~~~~~~~~^^^^^
private-gpt-1  | KeyError: <class 'private_gpt.server.ingest.ingest_service.IngestService'>
private-gpt-1  |
private-gpt-1  | During handling of the above exception, another exception occurred:
private-gpt-1  |
private-gpt-1  | Traceback (most recent call last):
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-1  |     return self._context[key]
private-gpt-1  |            ~~~~~~~~~~~~~^^^^^
private-gpt-1  | KeyError: <class 'private_gpt.components.vector_store.vector_store_component.VectorStoreComponent'>
private-gpt-1  |
private-gpt-1  | During handling of the above exception, another exception occurred:
private-gpt-1  |
private-gpt-1  | Traceback (most recent call last):
private-gpt-1  |   File "<frozen runpy>", line 198, in _run_module_as_main
private-gpt-1  |   File "<frozen runpy>", line 88, in _run_code
private-gpt-1  |   File "/home/worker/app/private_gpt/__main__.py", line 5, in <module>
private-gpt-1  |     from private_gpt.main import app
private-gpt-1  |   File "/home/worker/app/private_gpt/main.py", line 6, in <module>
private-gpt-1  |     app = create_app(global_injector)
private-gpt-1  |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/private_gpt/launcher.py", line 63, in create_app
private-gpt-1  |     ui = root_injector.get(PrivateGptUi)
private-gpt-1  |          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
private-gpt-1  |     provider_instance = scope_instance.get(interface, binding.provider)
private-gpt-1  |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
private-gpt-1  |     instance = self._get_instance(key, provider, self.injector)
private-gpt-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
private-gpt-1  |     return provider.get(injector)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
private-gpt-1  |     return injector.create_object(self._cls)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
private-gpt-1  |     self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
private-gpt-1  |     dependencies = self.args_to_inject(
private-gpt-1  |                    ^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
private-gpt-1  |     instance: Any = self.get(interface)
private-gpt-1  |                     ^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
private-gpt-1  |     provider_instance = scope_instance.get(interface, binding.provider)
private-gpt-1  |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
private-gpt-1  |     instance = self._get_instance(key, provider, self.injector)
private-gpt-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
private-gpt-1  |     return provider.get(injector)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
private-gpt-1  |     return injector.create_object(self._cls)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
private-gpt-1  |     self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
private-gpt-1  |     dependencies = self.args_to_inject(
private-gpt-1  |                    ^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
private-gpt-1  |     instance: Any = self.get(interface)
private-gpt-1  |                     ^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
private-gpt-1  |     provider_instance = scope_instance.get(interface, binding.provider)
private-gpt-1  |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
private-gpt-1  |     instance = self._get_instance(key, provider, self.injector)
private-gpt-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
private-gpt-1  |     return provider.get(injector)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
private-gpt-1  |     return injector.create_object(self._cls)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
private-gpt-1  |     self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1040, in call_with_injection
private-gpt-1  |     return callable(*full_args, **dependencies)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/private_gpt/components/vector_store/vector_store_component.py", line 114, in __init__
private-gpt-1  |     client = QdrantClient(
private-gpt-1  |              ^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/qdrant_client.py", line 117, in __init__
private-gpt-1  |     self._client = QdrantLocal(
private-gpt-1  |                    ^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py", line 66, in __init__
private-gpt-1  |     self._load()
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py", line 97, in _load
private-gpt-1  |     os.makedirs(self.location, exist_ok=True)
private-gpt-1  |   File "<frozen os>", line 215, in makedirs
private-gpt-1  |   File "<frozen os>", line 225, in makedirs
private-gpt-1  | PermissionError: [Errno 13] Permission denied: 'local_data/private_gpt'
^CGracefully stopping... (press Ctrl+C again to force)
[+] Stopping 2/2
 ✔ Container private-gpt-private-gpt-1  Stopped                                                                                                                                   0.3s
 ✔ Container private-gpt-ollama-1       Stopped  
```
### Steps to Reproduce
1. Clone the repo
2. docker-compose build
3. docker-compose up
### Expected Behavior
It should just run
### Actual Behavior
Error, as reported above
### Environment
Running on an Asustor router, docker 25.0.5
### Additional Information
_No response_
### Version
latest
### Setup Checklist
- [X] Confirm that you have followed the installation instructions in the project’s documentation.
- [X] Check that you are using the latest version of the project.
- [X] Verify disk space availability for model storage and data processing.
- [X] Ensure that you have the necessary permissions to run the project.
### NVIDIA GPU Setup Checklist
- [ ] Check that the all CUDA dependencies are installed and are compatible with your GPU (refer to [CUDA's documentation](https://docs.nvidia.com/deploy/cuda-compatibility/#frequently-asked-questions))
- [ ] Ensure an NVIDIA GPU is installed and recognized by the system (run `nvidia-smi` to verify).
- [ ] Ensure proper permissions are set for accessing GPU resources.
- [ ] Docker users - Verify that the NVIDIA Container Toolkit is configured correctly (e.g. run `sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi`) | null | 
	https://github.com/zylon-ai/private-gpt/pull/2059 | null | 
	{'base_commit': '89477ea9d3a83181b0222b732a81c71db9edf142', 'files': [{'path': 'Dockerfile.llamacpp-cpu', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 23, 30]}}}, {'path': 'Dockerfile.ollama', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 13, 20]}}}, {'path': 'docker-compose.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 29, 34], 'mod': [15, 47, 60]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [],
  "doc": [
    "docker-compose.yaml"
  ],
  "test": [],
  "config": [
    "Dockerfile.ollama",
    "Dockerfile.llamacpp-cpu"
  ],
  "asset": []
} | 1 | 
| 
	scikit-learn | 
	scikit-learn | 
	e04b8e70e60df88751af5cd667cafb66dc32b397 | 
	https://github.com/scikit-learn/scikit-learn/issues/26590 | 
	Bug | 
	KNNImputer add_indicator fails to persist where missing data had been present in training | 
	### Describe the bug
Hello, I've encountered an issue where the KNNImputer fails to record the fields where there were missing data at the time when `.fit` is called, but not recognised if `.transform` is called on a dense matrix. I would have expected it to return a 2x3 matrix rather than 2x2, with `missingindicator_A = False` for all cases.
Reproduction steps below. Any help much appreciated :)
### Steps/Code to Reproduce
```python
>>> import pandas as pd
>>> from sklearn.impute import KNNImputer
>>> knn = KNNImputer(add_indicator=True)
>>> df = pd.DataFrame({'A': [0, None], 'B': [1, 2]})
>>> df
     A  B
0  0.0  1
1  NaN  2
>>> knn.fit(df)
KNNImputer(add_indicator=True)
>>> pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())
     A    B  missingindicator_A
0  0.0  1.0                 0.0
1  0.0  2.0                 1.0
>>> df['A'] = 0
>>> pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())
```
### Expected Results
```
     A    B  missingindicator_A
0  0.0  1.0                 0.0
1  0.0  2.0                 0.0
```
### Actual Results
```pytb
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[30], line 1
----> 1 pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())
File /opt/conda/lib/python3.10/site-packages/pandas/core/frame.py:694, in DataFrame.__init__(self, data, index, columns, dtype, copy)
    684         mgr = dict_to_mgr(
    685             # error: Item "ndarray" of "Union[ndarray, Series, Index]" has no
    686             # attribute "name"
   (...)
    691             typ=manager,
    692         )
    693     else:
--> 694         mgr = ndarray_to_mgr(
    695             data,
    696             index,
    697             columns,
    698             dtype=dtype,
    699             copy=copy,
    700             typ=manager,
    701         )
    703 # For data is list-like, or Iterable (will consume into list)
    704 elif is_list_like(data):
File /opt/conda/lib/python3.10/site-packages/pandas/core/internals/construction.py:351, in ndarray_to_mgr(values, index, columns, dtype, copy, typ)
    346 # _prep_ndarray ensures that values.ndim == 2 at this point
    347 index, columns = _get_axes(
    348     values.shape[0], values.shape[1], index=index, columns=columns
    349 )
--> 351 _check_values_indices_shape_match(values, index, columns)
    353 if typ == "array":
    355     if issubclass(values.dtype.type, str):
File /opt/conda/lib/python3.10/site-packages/pandas/core/internals/construction.py:422, in _check_values_indices_shape_match(values, index, columns)
    420 passed = values.shape
    421 implied = (len(index), len(columns))
--> 422 raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (2, 2), indices imply (2, 3)
```
### Versions
```shell
python3, sklearn = 1.2.1
```
 | null | 
	https://github.com/scikit-learn/scikit-learn/pull/26600 | null | 
	{'base_commit': 'e04b8e70e60df88751af5cd667cafb66dc32b397', 'files': [{'path': 'doc/whats_new/v1.3.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'sklearn/impute/_knn.py', 'status': 'modified', 'Loc': {"('KNNImputer', 'transform', 242)": {'mod': [285]}}}, {'path': 'sklearn/impute/tests/test_common.py', 'status': 'modified', 'Loc': {"(None, 'test_keep_empty_features', 171)": {'add': [183]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "sklearn/impute/_knn.py"
  ],
  "doc": [
    "doc/whats_new/v1.3.rst"
  ],
  "test": [
    "sklearn/impute/tests/test_common.py"
  ],
  "config": [],
  "asset": []
} | 1 | 
| 
	nvbn | 
	thefuck | 
	9660ec7813a0e77ec3411682b0084d07b540084e | 
	https://github.com/nvbn/thefuck/issues/543 | 
	Adding sudo works for `aura -Sy` but not `aura -Ay` | 
	`fuck` is unable to add `sudo` to an `aura -Ay` command:
```
$ aura -Ay foobar-beta-git  # from AUR
aura >>= You have to use `sudo` for that.
$ fuck
No fucks given
```
But works as expected for `aura -Sy`:
```
$ aura -Sy foobar  # pacman alias
error: you cannot perform this operation unless you are root.
aura >>= Please check your input.
$ fuck
sudo aura -Sy foobar [enter/↑/↓/ctrl+c]
```
It's slightly annoying anyway that the `aura` outut is different in these cases, but is it possible for `thefuck` to work-around? Or is the only way for `aura` to give a stderr message containing "root"?
 | null | 
	https://github.com/nvbn/thefuck/pull/557 | null | 
	{'base_commit': '9660ec7813a0e77ec3411682b0084d07b540084e', 'files': [{'path': 'thefuck/rules/sudo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "2",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "thefuck/rules/sudo.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | |
| 
	scikit-learn | 
	scikit-learn | 
	2707099b23a0a8580731553629566c1182d26f48 | 
	https://github.com/scikit-learn/scikit-learn/issues/29294 | 
	Moderate
help wanted | 
	ConvergenceWarnings cannot be turned off | 
	Hi, I'm unable to turn off convergence warnings from `GraphicalLassoCV`.
I've tried most of the solutions from, and none of them worked (see below for actual implementations):
https://stackoverflow.com/questions/879173/how-to-ignore-deprecation-warnings-in-python
https://stackoverflow.com/questions/32612180/eliminating-warnings-from-scikit-learn/33812427#33812427
https://stackoverflow.com/questions/53968004/how-to-silence-all-sklearn-warning
https://stackoverflow.com/questions/14463277/how-to-disable-python-warnings
Contrary to what the designers of the sklearn's exceptions must have thought when it was implemented, some of us actually use stdout to log important information of the host program for diagnostics purposes.  Flooding it with garbage that cannot be turned off, as is in the case with cross-validation, is not ok. 
To briefly speak to the severity of the issue, the above sklearn-specific questions relating to suppressing warnings have been viewed ~500K times with combined ~400 upvotes, and dates back 7 years. 
I've tried the following (`n_jobs` parameter does not appear to affect the result):
```py
from sklearn.covariance import GraphicalLassoCV
import warnings
warnings.filterwarnings("ignore", category=ConvergenceWarning)
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
```
```py
from sklearn.covariance import GraphicalLassoCV
import warnings
warnings.filterwarnings(action='ignore')
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
```
```py
import warnings
with warnings.catch_warnings():
    warnings.simplefilter("ignore", ConvergenceWarning)
    model = GraphicalLassoCV(n_jobs=4)
    model = model.fit(data)
```
```py
from sklearn.covariance import GraphicalLassoCV
def warn(*args, **kwargs):
    pass
import warnings
warnings.warn = warn
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
```
```py
import contextlib
import os, sys
@contextlib.contextmanager
def suppress_stdout():
    with open(os.devnull, 'w') as fnull:
        old_stdout = sys.stdout
        sys.stdout = fnull
        try:
            yield
        finally:
            sys.stdout = old_stdout
with suppress_stdout():
    model = GraphicalLassoCV(n_jobs=4)
    model = model.fit(data)
```
```py
import logging
logging.captureWarnings(True)
logging.getLogger("py.warnings").setLevel(logging.ERROR)
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
``` | null | 
	https://github.com/scikit-learn/scikit-learn/pull/30380 | null | 
	{'base_commit': '2707099b23a0a8580731553629566c1182d26f48', 'files': [{'path': 'sklearn/utils/parallel.py', 'status': 'modified', 'Loc': {"('_FuncWrapper', 'with_config', 121)": {'add': [122]}, "(None, '_with_config', 24)": {'mod': [24, 26, 27]}, "('Parallel', '__call__', 54)": {'mod': [73, 74, 77]}, "('_FuncWrapper', None, 114)": {'mod': [121]}, "('_FuncWrapper', '__call__', 125)": {'mod': [126, 127, 137, 138]}}}, {'path': 'sklearn/utils/tests/test_parallel.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 11]}, "(None, 'test_dispatch_config_parallel', 56)": {'add': [100]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "sklearn/utils/parallel.py"
  ],
  "doc": [],
  "test": [
    "sklearn/utils/tests/test_parallel.py"
  ],
  "config": [],
  "asset": []
} | 1 | 
| 
	All-Hands-AI | 
	OpenHands | 
	7b2b1eff57e41364b4b427e36e766607e7eed3a0 | 
	https://github.com/All-Hands-AI/OpenHands/issues/20 | 
	Control Loop: long term planning and execution | 
	The biggest, most complicated aspect of Devin is long-term planning and execution. I'd like to start a discussion about how this might work in OpenDevin.
There's some [recent prior work from Microsoft](https://arxiv.org/pdf/2403.08299.pdf) with some impressive results. I'll summarize here, with some commentary.
## Overall Flow
* User specifies objective and associated settings
* Conversation Manager kicks in
* Sends convo to Agent Scheduler
* Agents execute commands
* Output is placed back into the conversation
* Rinse and repeat
## Configuraiton 
* A YAML file defines a set of actions/commands the bot can take (e.g. `npm test`)
  * comment: why not just leave it open-ended?
* You can have different agents with different capabilities, e.g. a "dev agent" and a "reviewer agent", who work collaboratively
  * comment: this sounds like MetaGPT
 
## Components
### Conversation Manager
* maintains message history and command outputs
* decides when to interrupt the conversation
  * comment: for what? more info from the user?
* decides when the conversation is over, i.e. task has been completed
  * agent can send a "stop" command, max tokens can be reached, problems w/ execution environment
### Parser
* interprets agent output and turns it into commands, file edits, etc
* in case of parsing failure, a message is sent back to the agent to rewrite its command
### Output Organizer
* Takes command output and selectively places it into the conversation history
  * sometimes summarizes the content first
  * comment: why not just drop everything back into the conversation history (maybe truncating really long CLI output) 
### Agent Scheduler
* orchestrates different agents
* uses different algos for deciding who gets to go next
  * round-robin: everyone takes turns in order
  * token-based: agent gets to keep going until it says it's done
  * priority-based: agents go based on (user defined?) priority
### Tools Library
 * file editing (can edit entire file, or specify start line and end line)
 * retrieval (file contents, `ls`, `grep`). Seems to use vector search as well
 * build and execution: abstracts away the implementation in favor of simple commands like `build foo`
 * testing and validation: includes linters and bug-finding utils
 * git: can commit, push, merge
 * communication: can as human for input/feedback, can talk to other agents
### Evaluation Environment
* runs in Docker
 | null | 
	https://github.com/All-Hands-AI/OpenHands/pull/3771 | null | 
	{'base_commit': '7b2b1eff57e41364b4b427e36e766607e7eed3a0', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [230]}}}, {'path': 'containers/runtime/README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 3, 5, 9]}}}, {'path': 'frontend/src/components/AgentStatusBar.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [20, 92], 'mod': [94, 95, 96, 97, 98, 99, 100]}}}, {'path': 'frontend/src/i18n/translation.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [465, 482], 'mod': [75, 81, 87, 339, 344, 389, 392, 393, 397, 402, 407, 412, 417, 422, 427, 432, 437, 442, 447, 452, 457, 462, 467, 472, 478, 490, 496, 499, 502, 505, 508, 511, 514, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 536, 541, 546, 551, 556, 561, 566, 571, 576, 581, 586, 605, 610, 615, 620, 638, 643, 648, 653, 658, 690, 736, 741, 746, 751, 757, 763, 769, 775, 781, 786, 791, 794, 799, 805, 811, 816, 817, 822, 823]}}}, {'path': 'frontend/src/services/actions.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 140], 'mod': [12]}, "(None, 'handleAssistantMessage', 141)": {'add': [153], 'mod': [152]}}}, {'path': 'frontend/src/services/session.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "('Session', None, 11)": {'add': [15, 147, 148]}, "('Session', '_setupSocket', 76)": {'add': [85, 117], 'mod': [97]}, "('Session', 'send', 148)": {'mod': [150]}}}, {'path': 'frontend/src/store.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 21]}}}, {'path': 'frontend/src/types/Message.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33]}}}, {'path': 'frontend/src/types/ResponseType.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 3]}}}, {'path': 'openhands/core/main.py', 'status': 'modified', 'Loc': {"(None, 'create_runtime', 50)": {'mod': [58]}}}, {'path': 'openhands/runtime/client/client.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18, 20, 564]}}}, {'path': 'openhands/runtime/client/runtime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "('EventStreamRuntime', '__init__', 115)": {'add': [121, 132, 133, 159, 181], 'mod': [149, 172, 174]}, "('EventStreamRuntime', '_init_container', 197)": {'add': [283], 'mod': [204, 205, 206, 244, 248, 254]}, "('EventStreamRuntime', '_find_available_port', 534)": {'add': [541]}}}, {'path': 'openhands/runtime/e2b/runtime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('E2BRuntime', '__init__', 21)": {'add': [27], 'mod': [29]}}}, {'path': 'openhands/runtime/remote/runtime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "('RemoteRuntime', '__init__', 51)": {'add': [57], 'mod': [171]}}}, {'path': 'openhands/runtime/runtime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "('Runtime', '__init__', 54)": {'add': [60, 65]}}}, {'path': 'openhands/server/session/agent.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0]}, "('AgentSession', 'start', 40)": {'add': [48], 'mod': [67]}, "('AgentSession', '_create_security_analyzer', 92)": {'add': [100], 'mod': [99]}, "('AgentSession', '_create_runtime', 105)": {'add': [123], 'mod': [115, 119]}, "('AgentSession', None, 13)": {'add': [125], 'mod': [105]}, "('AgentSession', '_create_controller', 126)": {'mod': [181]}}}, {'path': 'openhands/server/session/manager.py', 'status': 'modified', 'Loc': {"('SessionManager', 'send', 36)": {'mod': [38, 40]}}}, {'path': 'openhands/server/session/session.py', 'status': 'modified', 'Loc': {"('Session', None, 30)": {'add': [35]}, "('Session', '__init__', 37)": {'add': [47]}, "('Session', '_initialize_agent', 71)": {'add': [115]}, "('Session', 'send', 167)": {'add': [174]}, "('Session', 'load_from_data', 192)": {'add': [197]}, '(None, None, None)': {'mod': [24]}, "('Session', 'on_event', 127)": {'mod': [128, 138]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "openhands/runtime/e2b/runtime.py",
    "frontend/src/types/Message.tsx",
    "frontend/src/types/ResponseType.tsx",
    "frontend/src/store.ts",
    "openhands/runtime/remote/runtime.py",
    "openhands/runtime/runtime.py",
    "frontend/src/services/session.ts",
    "openhands/server/session/agent.py",
    "openhands/core/main.py",
    "frontend/src/i18n/translation.json",
    "openhands/server/session/session.py",
    "openhands/runtime/client/client.py",
    "frontend/src/components/AgentStatusBar.tsx",
    "openhands/runtime/client/runtime.py",
    "frontend/src/services/actions.ts",
    "openhands/server/session/manager.py"
  ],
  "doc": [
    "containers/runtime/README.md"
  ],
  "test": [],
  "config": [
    ".gitignore"
  ],
  "asset": []
} | 1 | |
| 
	All-Hands-AI | 
	OpenHands | 
	2242702cf94eab7275f2cb148859135018d9b280 | 
	https://github.com/All-Hands-AI/OpenHands/issues/1251 | 
	enhancement | 
	Sandbox Capabilities Framework | 
	**Summary**
We have an existing use case for a Jupyter-aware agent, which always runs in a sandbox where Jupyter is available. There are some other scenarios I can think of where an agent might want some guarantees about what it can do with the sandbox:
* We might want a "postgres migration writer", which needs access to a postgres instance
* We might have a "cypress test creator" agent, which would need access to cypress
* Further down the road, we might want to have an [Open Interpreter](https://github.com/OpenInterpreter/open-interpreter) agent, which needs access to osascript
* etc etc
This proposal would allow agents to guarantee that certain programs are available in the sandbox, or that certain services are running in a predictable way.
What if we did something like this:
**Motivation**
We want agents to be able to have certain guarantees about the sandbox environment. But we also want our sandbox interface to be generic--something like "you have a bash terminal".
The latter is especially important, because we want users to be able to bring their own sandbox images. E.g. you might use an off-the-shelf haskell image if your project uses haskell--otherwise you'd need to go through the install process every time you start OpenDevin, or maintain a fork of the sandbox.
**Technical Design**
* For every requirement we support (e.g. jupyter, postgres, cypress), we have a bash script that
  * checks if it's installed
  * if not, installs it
  * maybe starts something in the background
* Let agents specify a list of requirements
  * e.g. CodeActAgent could say requirements: ['jupyter']
* When we start the Agent+Sandbox pair, we run the necessary bash scripts
  * should be pretty quick if the requirement is already built into the image
* Then the agent has some guarantees about the requirement being met, and how it's running
   * e.g. we can put in the prompt "there's a postgres server running on port 5432, user foo, password bar"
* If there are specific ways of interacting with that env (e.g. for jupyter, it seems we have to write to a websocket that's open in the sandbox?) the agent can implement custom Actions, like run_in_jupyter
**Alternatives to Consider**
* Building a bunch of stuff into one big sandbox
* Building special sandboxes that are required by certain agents (e.g. a JupyterSandbox)
**Additional context**
https://opendevin.slack.com/archives/C06QKSD9UBA/p1713552591042089
 | null | 
	https://github.com/All-Hands-AI/OpenHands/pull/1255 | null | 
	{'base_commit': '2242702cf94eab7275f2cb148859135018d9b280', 'files': [{'path': 'Makefile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [220]}}}, {'path': 'agenthub/codeact_agent/codeact_agent.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [17]}, "('CodeActAgent', None, 66)": {'add': [71]}}}, {'path': 'opendevin/agent.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8]}, "('Agent', None, 11)": {'add': [19]}}}, {'path': 'opendevin/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 20]}, "(None, 'get', 140)": {'add': [147]}}}, {'path': 'opendevin/controller/action_manager.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16]}, "('ActionManager', None, 19)": {'add': [43]}}}, {'path': 'opendevin/controller/agent_controller.py', 'status': 'modified', 'Loc': {"('AgentController', '__init__', 41)": {'add': [55]}}}, {'path': 'opendevin/sandbox/docker/exec_box.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6]}, "('DockerExecBox', None, 36)": {'add': [124]}}}, {'path': 'opendevin/sandbox/docker/local_box.py', 'status': 'modified', 'Loc': {"('LocalBox', None, 25)": {'add': [41]}}}, {'path': 'opendevin/sandbox/docker/ssh_box.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6, 17, 357], 'mod': [359]}, "('DockerSSHBox', 'setup_user', 95)": {'add': [139]}, "('DockerSSHBox', None, 46)": {'add': [210]}, "('DockerSSHBox', 'restart_docker_container', 271)": {'add': [309]}}}, {'path': 'opendevin/sandbox/e2b/sandbox.py', 'status': 'modified', 'Loc': {"('E2BBox', None, 14)": {'add': [63]}}}, {'path': 'opendevin/sandbox/sandbox.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "('Sandbox', 'close', 28)": {'add': [29]}, "('Sandbox', None, 8)": {'mod': [8]}}}, {'path': 'opendevin/schema/config.py', 'status': 'modified', 'Loc': {"('ConfigType', None, 4)": {'add': [10]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "opendevin/sandbox/docker/ssh_box.py",
    "opendevin/schema/config.py",
    "agenthub/codeact_agent/codeact_agent.py",
    "opendevin/controller/action_manager.py",
    "opendevin/sandbox/docker/local_box.py",
    "opendevin/sandbox/e2b/sandbox.py",
    "opendevin/sandbox/sandbox.py",
    "opendevin/sandbox/docker/exec_box.py",
    "opendevin/agent.py",
    "opendevin/config.py",
    "opendevin/controller/agent_controller.py"
  ],
  "doc": [],
  "test": [],
  "config": [
    "Makefile"
  ],
  "asset": []
} | 1 | 
| 
	deepfakes | 
	faceswap | 
	0ea743029db0d47f09d33ef90f50ad84c20b085f | 
	https://github.com/deepfakes/faceswap/issues/263 | 
	Very slow extraction with scripts vs fakeapp 1.1 | 
	1080ti + OC'd 2600k using winpython 3.6.2 cuda 9.0 and tensorflow 1.6
**Training** utilizes ~50% of the GPU now (which is better than the ~25% utilized with FA 1.1) but extraction doesn't seem to utilize the GPU at all (getting around 1.33it/s) wheras with FA 1.1 I get around 17it/s - tried CNN and it dropped down to taking nearly a minute per file. Although I say it doesn't utilize the GPU it still seems to use all 11GB of RAM on the GPU, just none of the compute cores or processor are in use. CPU is using about 17%.
Tried using extracted data from FA 1.1 with .py -convert but it just says 'no alignment found for file: x" for every file even tho --alignments points to the path with alignments.json 
I would've thought the alignments.json from FA 1.1 was compatible so I'm not sure if the above is a separate issue or not. | null | 
	https://github.com/deepfakes/faceswap/pull/259 | null | 
	{'base_commit': '0ea743029db0d47f09d33ef90f50ad84c20b085f', 'files': [{'path': 'lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py', 'status': 'modified', 'Loc': {"(None, 'initialize', 108)": {'add': [126], 'mod': [108, 117, 123, 124, 125]}, "(None, 'extract', 137)": {'mod': [137, 138, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 183]}}}, {'path': 'lib/cli.py', 'status': 'modified', 'Loc': {"('DirectoryProcessor', 'get_faces_alignments', 140)": {'mod': [149]}, "('DirectoryProcessor', 'get_faces', 159)": {'mod': [161, 165]}}}, {'path': 'lib/faces_detect.py', 'status': 'modified', 'Loc': {"(None, 'detect_faces', 3)": {'mod': [3, 4]}}}, {'path': 'scripts/extract.py', 'status': 'modified', 'Loc': {"('ExtractTrainingData', 'add_optional_arguments', 22)": {'mod': [25]}, "('ExtractTrainingData', 'process', 79)": {'mod': [95]}, "('ExtractTrainingData', 'processFiles', 100)": {'mod': [105]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "2",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "lib/faces_detect.py",
    "lib/cli.py",
    "lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py",
    "scripts/extract.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | |
| 
	fastapi | 
	fastapi | 
	ef176c663195489b44030bfe1fb94a317762c8d5 | 
	https://github.com/fastapi/fastapi/issues/3323 | 
	feature
reviewed | 
	Support PEP 593 `Annotated` for specifying dependencies and parameters | 
	### First check
* [x] I added a very descriptive title to this issue.
* [x] I used the GitHub search to find a similar issue and didn't find it.
* [x] I searched the FastAPI documentation, with the integrated search.
* [x] I already searched in Google "How to X in FastAPI" and didn't find any information.
* [x] I already read and followed all the tutorial in the docs and didn't find an answer.
* [x] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
* [x] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
* [x] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
* [x] After submitting this, I commit to:
    * Read open issues with questions until I find 2 issues where I can help someone and add a comment to help there.
    * Or, I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
    * Implement a Pull Request for a confirmed bug.
### Example
I propse to allow transforming:
<!-- Replace the code below with your own self-contained, minimal, reproducible, example -->
```Python
from typing import Optional
from fastapi import Depends, FastAPI
app = FastAPI()
async def common_parameters(q: Optional[str] = None, skip: int = 0, limit: int = 100):
    return {"q": q, "skip": skip, "limit": limit}
@app.get("/items/")
async def read_items(commons: dict = Depends(common_parameters)):
    return commons
```
to 
```Python
from typing import Annotated, Optional
from fastapi import Depends, FastAPI
app = FastAPI()
async def common_parameters(q: Optional[str] = None, skip: int = 0, limit: int = 100):
    return {"q": q, "skip": skip, "limit": limit}
@app.get("/items/")
async def read_items(commons: Annotated[dict, Depends(common_parameters)]):
    return commons
```
### Discussion
[PEP 593](https://www.python.org/dev/peps/pep-0593/) Added `Annotated` for adding additional annotations beyond type annotations. I think FastAPI's `Depends`, `Query`, `Body` and the likes fit well with the kind of additional annotations this supports.
This would also make default values less awkward:
```python
@app.get("/items/")
async def read_items(q: Optional[str] = Query(None, max_length=50)):
    pass
```
Could become
```python
@app.get("/items/")
async def read_items(q: Annotated[Optional[str], Query(max_length=50)] = None):
    pass
```
This will also solve the issue mentioned [in the docs](https://fastapi.tiangolo.com/tutorial/path-params-numeric-validations/#order-the-parameters-as-you-need) of parameter ordering.
Finally, it is sometimes convenient to use the same function as both a FastAPI dependency and a regular function. In these cases, because `= Depends(...)` is a default parameter value, if you forget to pass a parameter the error is not caught by your IDE. Worse, it is not caught at runtime because Python will just pass along the `Depends` object. This will probably cause an error down the road, but may silently succeed in some cases.
I'm willing to implement this if you think it's a good idea. | null | 
	https://github.com/fastapi/fastapi/pull/4871 | null | 
	{'base_commit': 'ef176c663195489b44030bfe1fb94a317762c8d5', 'files': [{'path': 'fastapi/dependencies/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [58], 'mod': [51]}, "(None, 'get_dependant', 282)": {'add': [336], 'mod': [301, 303, 307, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335]}, "(None, 'get_param_sub_dependant', 114)": {'mod': [115, 117, 118, 119, 120, 121, 124, 126]}, "(None, 'add_non_field_param_to_dependency', 340)": {'mod': [341, 343, 344, 346, 347, 349, 350, 352, 353, 355, 356, 358, 359]}, "(None, 'get_param_field', 364)": {'mod': [364, 366, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 416]}}}, {'path': 'fastapi/param_functions.py', 'status': 'modified', 'Loc': {"(None, 'Path', 7)": {'mod': [8]}}}, {'path': 'fastapi/params.py', 'status': 'modified', 'Loc': {"('Path', '__init__', 63)": {'add': [82], 'mod': [65, 85]}, "('Form', '__init__', 280)": {'mod': [282]}, "('File', '__init__', 320)": {'mod': [322]}}}, {'path': 'fastapi/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}, "(None, 'create_response_field', 60)": {'mod': [76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 88]}}}, {'path': 'tests/main.py', 'status': 'modified', 'Loc': {"(None, 'get_path_param_id', 52)": {'mod': [52, 53, 56, 57]}}}, {'path': 'tests/test_application.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257]}}}, {'path': 'tests/test_params_repr.py', 'status': 'modified', 'Loc': {"(None, 'test_path_repr', 22)": {'mod': [22, 23]}}}, {'path': 'tests/test_path.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [196]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "fastapi/dependencies/utils.py",
    "fastapi/utils.py",
    "fastapi/param_functions.py",
    "tests/main.py",
    "fastapi/params.py"
  ],
  "doc": [],
  "test": [
    "tests/test_params_repr.py",
    "tests/test_application.py",
    "tests/test_path.py"
  ],
  "config": [],
  "asset": []
} | 1 | 
| 
	python | 
	cpython | 
	e01eeb7b4b8d00b9f5c6acb48957f46ac4e252c0 | 
	https://github.com/python/cpython/issues/92417 | 
	docs | 
	Many references to unsupported Python versions in the stdlib docs | 
	**Documentation**
There are currently many places in the stdlib docs where there are needless comments about how to maintain compatibility with Python versions that are now end-of-life. Many of these can now be removed, to improve brevity and clarity in the documentation.
I plan to submit a number of PRs to fix these.
PRs:
 
- #92418 
- #92419 
- #92420 
- #92421 
- #92422 
- #92423 
- #92424 
- #92425
- https://github.com/python/cpython/pull/92502
- #92538
- #92539
- #92543
- #92544
- [More to come]
Backports:
- https://github.com/python/cpython/pull/92459
- https://github.com/python/cpython/pull/92460
- https://github.com/python/cpython/pull/92461
- https://github.com/python/cpython/pull/92462
- https://github.com/python/cpython/pull/92463
- https://github.com/python/cpython/pull/92491
- https://github.com/python/cpython/pull/92467
- https://github.com/python/cpython/pull/92468
- https://github.com/python/cpython/pull/92492
- https://github.com/python/cpython/pull/92464
- https://github.com/python/cpython/pull/92465
- https://github.com/python/cpython/pull/92466
- https://github.com/python/cpython/pull/92472
- https://github.com/python/cpython/pull/92473
- https://github.com/python/cpython/pull/92474
- https://github.com/python/cpython/pull/92485
- https://github.com/python/cpython/pull/92486
- https://github.com/python/cpython/pull/92487
- https://github.com/python/cpython/pull/92606
- https://github.com/python/cpython/pull/92607 | null | 
	https://github.com/python/cpython/pull/92539 | null | 
	{'base_commit': 'e01eeb7b4b8d00b9f5c6acb48957f46ac4e252c0', 'files': [{'path': 'Doc/library/unittest.mock-examples.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [663]}}}, {'path': 'Doc/library/unittest.mock.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2384]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "2",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": " ",
  "info_type": ""
} | 
	{
  "code": [],
  "doc": [
    "Doc/library/unittest.mock-examples.rst",
    "Doc/library/unittest.mock.rst"
  ],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	scikit-learn | 
	scikit-learn | 
	23d8761615d0417eef5f52cc796518e44d41ca2a | 
	https://github.com/scikit-learn/scikit-learn/issues/19248 | 
	Documentation
module:cluster | 
	Birch should be called BIRCH | 
	C.f. the original paper.
Zhang, T.; Ramakrishnan, R.; Livny, M. (1996). "BIRCH: an efficient data clustering method for very large databases". Proceedings of the 1996 ACM SIGMOD international conference on Management of data - SIGMOD '96. pp. 103–114. doi:10.1145/233269.233324 | null | 
	https://github.com/scikit-learn/scikit-learn/pull/19368 | null | 
	{'base_commit': '23d8761615d0417eef5f52cc796518e44d41ca2a', 'files': [{'path': 'doc/modules/clustering.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [106, 946, 965, 999, 1001, 1005]}}}, {'path': 'examples/cluster/plot_birch_vs_minibatchkmeans.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6, 39, 48, 58, 78]}}}, {'path': 'examples/cluster/plot_cluster_comparison.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [146]}}}, {'path': 'sklearn/cluster/_birch.py', 'status': 'modified', 'Loc': {"('Birch', None, 335)": {'mod': [336]}, "('Birch', '_global_clustering', 648)": {'mod': [677]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "2",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "examples/cluster/plot_birch_vs_minibatchkmeans.py",
    "sklearn/cluster/_birch.py",
    "examples/cluster/plot_cluster_comparison.py"
  ],
  "doc": [
    "doc/modules/clustering.rst"
  ],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	localstack | 
	localstack | 
	65b807e4e95fe6da3e30f13e4271dc9dcfaa334e | 
	https://github.com/localstack/localstack/issues/402 | 
	type: bug | 
	Dynamodbstreams Use Kinesis Shard Identifiers | 
	<!-- Love localstack? Please consider supporting our collective:
👉  https://opencollective.com/localstack/donate -->
Dynamodbstreams seem to be making use of Kinesis shard identifiers which are considered invalid by botocore request validators.
Error response from boto3 when attempting to `get_shard_iterator` from shard ids returned from `describe_stream`:
```
[test-integration:L51:27s] exception = ParamValidationError(u'Parameter validation failed:\nInvalid length for parameter ShardId, value: 20, valid range: 28-inf',)
[test-integration:L52:27s]
[test-integration:L53:27s]     def _reraise_exception(self, exception):
[test-integration:L54:27s]         if hasattr(exception, 'response'):
[test-integration:L55:27s]             code = exception.response['Error']['Code']
[test-integration:L56:27s]
[test-integration:L57:27s]             if code == 'TrimmedDataAccessException':
[test-integration:L58:27s]                 raise TrimmedRecordsException()
[test-integration:L59:27s]             elif code == 'ResourceNotFoundException':
[test-integration:L60:27s]                 raise ResourceDNEException()
[test-integration:L61:27s]
[test-integration:L62:27s] >       raise exception
[test-integration:L63:27s] E       ParamValidationError: Parameter validation failed:
[test-integration:L64:27s] E       Invalid length for parameter ShardId, value: 20, valid range: 28-inf
[test-integration:L65:27s]
[test-integration:L66:27s] .tox/py27/lib/python2.7/site-packages/pyrokinesis/dynamodbstreams_ingress_backend.py:111: ParamValidationError
```
The following is the response object I am getting back when I `describe_stream` on the stream's ARN:
```
[test-integration:L68:27s] {'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HTTPHeaders': {'content-length': '692', 'access-control-allow-origin': '*', 'date': 'Fri, 13 Oct 2017 12:47:00 GMT', 'server': 'Werkzeug/0.12.2 Python/2.7.13', 'content-type': 'application/json'}}, u'StreamDescription': {u'StreamLabel': u'TODO', u'StreamArn': u'arn:aws:dynamodb:us-east-1:000000000000:table/DynamoTest/stream/2017-10-13T12:47:00', u'Shards': [{u'ShardId': u'shardId-000000000000', u'SequenceNumberRange': {u'StartingSequenceNumber': u'49577893583130519883135457518096755974321873497073123330'}}], u'KeySchema': [{u'KeyType': u'HASH', u'AttributeName': u'ID'}], u'TableName': u'DynamoTest', u'StreamStatus': u'ENABLED'}}
```
My localstack setup:
```
localstack 0.7.3
[localstack:L2:1s] 2017-10-13 15:10:35,915 INFO spawned: 'dashboard' with pid 13
[localstack:L3:1s] 2017-10-13 15:10:35,917 INFO spawned: 'infra' with pid 14
[localstack:L4:1s] (. .venv/bin/activate; bin/localstack web --port=8080)
[localstack:L5:1s] (. .venv/bin/activate; exec bin/localstack start)
[localstack:L6:1s] Starting local dev environment. CTRL-C to quit.
[localstack:L7:1s]  * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
[localstack:L8:1s]  * Restarting with stat
[localstack:L9:1s] Starting mock Kinesis (http port 4568)...
[localstack:L10:1s] Starting mock S3 (http port 4572)...
[localstack:L11:1s] Starting mock DynamoDB (http port 4569)...
[localstack:L12:1s]  * Debugger is active!
[localstack:L13:2s]  * Debugger PIN: 281-540-735
[localstack:L14:2s] 2017-10-13 15:10:37,123 INFO success: dashboard entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[localstack:L15:2s] 2017-10-13 15:10:37,123 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[localstack:L16:2s] Starting mock DynamoDB Streams service (http port 4570)...
[localstack:L17:2s] Listening at http://:::4565
[localstack:L18:2s] Initializing DynamoDB Local with the following configuration:
[localstack:L19:2s] Port:	4564
[localstack:L20:2s] InMemory:	false
[localstack:L21:2s] DbPath:	/tmp/localstack/dynamodb
[localstack:L22:2s] SharedDb:	true
[localstack:L23:2s] shouldDelayTransientStatuses:	false
[localstack:L24:2s] CorsParams:	*
[localstack:L25:2s]
[localstack:L26:2s] * Running on http://0.0.0.0:4563/ (Press CTRL+C to quit)
```
 | null | 
	https://github.com/localstack/localstack/pull/403 | null | 
	{'base_commit': '65b807e4e95fe6da3e30f13e4271dc9dcfaa334e', 'files': [{'path': 'localstack/services/dynamodbstreams/dynamodbstreams_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 119]}, "(None, 'post_request', 47)": {'add': [76], 'mod': [70, 78]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "1",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "localstack/services/dynamodbstreams/dynamodbstreams_api.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	pallets | 
	flask | 
	ee76129812419d473eb62434051e81d5855255b6 | 
	https://github.com/pallets/flask/issues/602 | 
	Misspelling in docs @ flask.Flask.handle_exception | 
	`Default exception handling that kicks in when an exception occours that is not caught. In debug mode the exception will be re-raised immediately, otherwise it is logged and the handler for a 500 internal server error is used. If no such handler exists, a default 500 internal server error message is displayed.`
Occours should be occurs.
I looked around in the project code to see if i could update this, but it looks like the docs subdir is no longer used? I could be wrong, if you let me know where this is at I'll update it and send a PR :)
 | null | 
	https://github.com/pallets/flask/pull/603 | null | 
	{'base_commit': 'ee76129812419d473eb62434051e81d5855255b6', 'files': [{'path': 'flask/app.py', 'status': 'modified', 'Loc': {"('Flask', 'handle_exception', 1266)": {'mod': [1268]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "3",
  "iss_reason": "1",
  "loc_way": "pr",
  "loc_scope": "不太确定问题类别,因为是开发者询问typo error",
  "info_type": ""
} | 
	{
  "code": [
    "flask/app.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | |
| 
	ansible | 
	ansible | 
	79d00adc52a091d0ddd1d8a96b06adf2f67f161b | 
	https://github.com/ansible/ansible/issues/36378 | 
	cloud
aws
module
affects_2.4
support:certified
docs | 
	Documentation Error for ec2_vpc_nacl rules | 
	##### ISSUE TYPE
 - Documentation Report
##### COMPONENT NAME
ec2_vpc_nacl
##### ANSIBLE VERSION
```
ansible 2.4.3.0
  config file = None
  configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 2.7.12 (default, Dec  4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The example documentation is the wrong way round for ec2_vpc_nacl with respect to the icmp code and type.
##### STEPS TO REPRODUCE
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py#L87 has the order of the `icmp_code` and `icmp_type` inverted compared to the code that parses it https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py#L298
##### EXPECTED RESULTS
##### ACTUAL RESULTS
 | null | 
	https://github.com/ansible/ansible/pull/36380 | null | 
	{'base_commit': '79d00adc52a091d0ddd1d8a96b06adf2f67f161b', 'files': [{'path': 'lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [87]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "2",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py"
  ],
  "doc": [],
  "test": [],
  "config": [],
  "asset": []
} | 1 | 
| 
	geekan | 
	MetaGPT | 
	a32e238801d0a8f3c1bd97b98d038b40977a8cc6 | 
	https://github.com/geekan/MetaGPT/issues/1174 | 
	New provider: Amazon Bedrock (AWS) | 
	**Feature description**
Please include support for Amazon Bedrock models. These models can be from Amazon, Anthropic, AI21, Cohere, Mistral, or Meta Llama 2. 
**Your Feature**
1. Create a new LLM Provides under [metagpt/provider](https://github.com/geekan/MetaGPT/tree/db65554c4931d4a95e20331b770cf4f7e5202264/metagpt/provider) for Amazon Bedrock
2. Include it in the [LLMType](https://github.com/geekan/MetaGPT/blob/db65554c4931d4a95e20331b770cf4f7e5202264/metagpt/configs/llm_config.py#L17) available | null | 
	https://github.com/geekan/MetaGPT/pull/1231 | null | 
	{'base_commit': 'a32e238801d0a8f3c1bd97b98d038b40977a8cc6', 'files': [{'path': 'config/puppeteer-config.json', 'status': 'modified', 'Loc': {}}, {'path': 'metagpt/configs/llm_config.py', 'status': 'modified', 'Loc': {"('LLMType', None, 17)": {'add': [34]}, "('LLMConfig', None, 40)": {'add': [80], 'mod': [77]}}}, {'path': 'metagpt/provider/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19, 32]}}}, {'path': 'metagpt/utils/token_counter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [212]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [72]}}}, {'path': 'tests/metagpt/provider/mock_llm_config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [62]}}}, {'path': 'tests/metagpt/provider/req_resp_const.py', 'status': 'modified', 'Loc': {"(None, 'llm_general_chat_funcs_test', 174)": {'add': [185]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": null,
  "info_type": null
} | 
	{
  "code": [
    "metagpt/utils/token_counter.py",
    "metagpt/provider/__init__.py",
    "metagpt/configs/llm_config.py",
    "config/puppeteer-config.json",
    "tests/metagpt/provider/mock_llm_config.py",
    "tests/metagpt/provider/req_resp_const.py"
  ],
  "doc": [],
  "test": [],
  "config": [
    "requirements.txt"
  ],
  "asset": []
} | 1 | |
| 
	pandas-dev | 
	pandas | 
	862cd05df4452592a99dd1a4fa10ce8cfb3766f7 | 
	https://github.com/pandas-dev/pandas/issues/37494 | 
	Enhancement
Groupby
ExtensionArray
NA - MaskedArrays
Closing Candidate | 
	ENH: improve the resulting dtype for groupby operations on nullable dtypes | 
	Follow-up on https://github.com/pandas-dev/pandas/pull/37433, and partly related to https://github.com/pandas-dev/pandas/issues/37493
Currently, after groupby operations we try to cast back to the original dtype when possible (at least in case of extension arrays). But this is not always correct, and also not done consistently. Some examples using the test case from the mentioned PR using a nullable Int64 column as input:
```
In [1]: df = DataFrame(
   ...:     {
   ...:         "A": ["A", "B"] * 5,
   ...:         "B": pd.array([1, 2, 3, 4, 5, 6, 7, 8, 9, pd.NA], dtype="Int64"),
   ...:     }
   ...: )
In [2]: df.groupby("A")["B"].sum()
Out[2]: 
A
A    25
B    20
Name: B, dtype: Int64
In [3]: df.groupby("A")["B"].std()
Out[3]: 
A
A    3.162278
B    2.581989
Name: B, dtype: float64
In [4]: df.groupby("A")["B"].mean()
Out[4]: 
A
A    5
B    5
Name: B, dtype: Int64
In [5]: df.groupby("A")["B"].count()
Out[5]: 
A
A    5
B    4
Name: B, dtype: int64
```
So some observations:
* For `sum()`, we correctly have Int64 for the result
* For `std()`, we could use the nullable Float64 instead of float64 dtype
* For `mean()`, we incorrectly cast back to Int64 dtype, as the result of mean should always be floating (in this case the casting just happened to work because the means were rounded numbers)
* For `count()`, we did not create a nullable Int64 dtype for the result, while this could be done in the input is nullable | null | 
	https://github.com/pandas-dev/pandas/pull/38291 | null | 
	{'base_commit': '862cd05df4452592a99dd1a4fa10ce8cfb3766f7', 'files': [{'path': 'pandas/core/dtypes/cast.py', 'status': 'modified', 'Loc': {"(None, 'maybe_cast_result_dtype', 342)": {'mod': [360, 362, 363, 364, 365]}}}, {'path': 'pandas/core/groupby/ops.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [47]}, "('BaseGrouper', '_ea_wrap_cython_operation', 493)": {'mod': [524]}}}, {'path': 'pandas/tests/arrays/integer/test_arithmetic.py', 'status': 'modified', 'Loc': {"(None, 'test_reduce_to_float', 261)": {'mod': [280]}}}, {'path': 'pandas/tests/groupby/aggregate/test_cython.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, "(None, 'test_cython_agg_nullable_int', 297)": {'add': [314]}}}, {'path': 'pandas/tests/groupby/test_function.py', 'status': 'modified', 'Loc': {"(None, 'test_apply_to_nullable_integer_returns_float', 1091)": {'mod': [1096]}}}, {'path': 'pandas/tests/resample/test_datetime_index.py', 'status': 'modified', 'Loc': {"(None, 'test_resample_integerarray', 112)": {'mod': [127]}}}]} | 
	[] | 
	[] | 
	[] | 
	{
  "iss_type": "4",
  "iss_reason": "2",
  "loc_way": "pr",
  "loc_scope": "",
  "info_type": ""
} | 
	{
  "code": [
    "pandas/core/dtypes/cast.py",
    "pandas/core/groupby/ops.py"
  ],
  "doc": [],
  "test": [
    "pandas/tests/groupby/aggregate/test_cython.py",
    "pandas/tests/arrays/integer/test_arithmetic.py",
    "pandas/tests/resample/test_datetime_index.py",
    "pandas/tests/groupby/test_function.py"
  ],
  "config": [],
  "asset": []
} | 1 | 
This repository hosts MULocBench, a comprehensive dataset.
MULocBench addresses limitations in existing benchmarks by focusing on accurate project localization (e.g., files and functions) for issue resolution, which is a critical first step in software maintenance. It comprises 1,100 issues from 46 popular GitHub Python projects, offering greater diversity in issue types, root causes, location scopes, and file types compared to prior datasets. This dataset provides a more realistic testbed for evaluating and advancing methods for multi-faceted issue resolution.
1. Downloads
Please download the benchmark from https://huggingface.co/datasets/somethingone/MULocBench/blob/main/all_issues_with_pr_commit_comment_all_project_0922.pkl
If you’d like to view the data directly, you can download the JSON file and then open the json file using your browser
2. How to Use DataSet
import pickle
filepath = "input the data path, e.g., all_issues_with_pr_commit_comment_all_project_0922.pkl"
with open(filepath, 'rb') as file:
    iss_list = pickle.load(file)
    for ind, e_iss in enumerate(iss_list):
        print(f"{ind} issue info is {e_iss['iss_html_url']}") # print issue html url
        for featurename in e_iss:
            print(f"{featurename}: {e_iss[featurename]}") # print feature_name: value for each issue
3. Data Instances
An example of an issue is as follows:
organization: (str) - Organization name.
repo_name: (str) - Repository name.
iss_html_url: (str) - Issue html url.
title: (str) - Issue title.
label: (str) - Issue lable.
body: (str) - Issue body.
pr_html_url: (str) - Pull request html url corresponding to the issue.
commit_html_url: (str) - Commit html url corresponding to the issue.
file_loc: (dict) - Within-Project location information.
own_code_loc: (list) - User-Authored location information.
ass_file_loc: (list) - Runtime-file location information.
other_rep_loc: (list) - Third-Party-file location information.
analysis: (dict) - Issue type, reason, location scope and type.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the issue.
- Downloads last month
- 75