<issue_start><issue_comment>Title: cURL can't verify ssl certificates (possibly only on Yosemite)
username_0: I'm running Yosemite beta 2, I have PHP-5.6 installed via homebrew compiled against homebrew-curl and homebrew-openssl. Guzzle can't verify SSL certs. For example:
~~~
$client = new \GuzzleHttp\Client();
$response = $client->get('https://google.com');
return $response;
~~~
results in a RequestException with error message `cURL error 51: SSL: certificate verification failed (result: 5)`
<issue_comment>username_1: So, has anyone figured out how to solve this?
<issue_comment>username_2: @username_1 What isn't working for you? Did you see https://github.com/guzzle/guzzle/issues/819#issuecomment-59809599?
<issue_comment>username_0: @username_1 as @username_2 said, this should be working now? If you use homebrew make sure that the curl command points to the system version. You can check with the output from `which curl`.
<issue_comment>username_3: I was getting this error 60 with PHP 5.6.20 on 10.11.4 with guzzle 6.2.0, tracked it down to my libcurl (7.48 from homebrew) being compiled with libressl. I recompiled using the default openssl and the problem went away.
<issue_comment>username_4: Although this issue was closed, I hope my answer can help later googlers.
Use this command to check the **certificate chain**:
`openssl s_client -host {the-site-with-tls-problem} -port 443`
Then check whether the issuer's **root certificates** exist on your system; if not, add the root certificate to your system.
<issue_comment>username_5: check your ~/.gitconfig file
I fixed my problem by temporarily removing the lines mentioning sslCAInfo and sslVerify.
Then don't forget to add them back later, in case the certificate is one that you are using for something else.
<issue_comment>username_6: OSX EL Capitan (10.11.1)
PHP: 5.6.27
Guzzle: 6.2.2
I have the same issue:
Works: `curl https://google.com`
Fails: `curl --cacert /usr/local/etc/openssl/cert.pem https://google.com`
with error
```
curl: (51) SSL: certificate verification failed (result: 5)
```
Tried and *didn't work*:
1. `$client = new Client(['defaults' => ['verify' => true]]);`
2. `$client = new Client(['defaults' => ['verify' => false]]);`
3. `$client = new Client(['defaults' => ['verify' => '/usr/local/etc/openssl/cert.pem']]);`
4. `$client = new Client(['defaults' => ['verify' => {cert from http://curl.haxx.se/docs/caextract.html}]]);`
5. Latest curl installed with homebrew also fails
<issue_comment>username_7: Updating PHP did the trick
<issue_comment>username_8: @username_6 I have the same problem, did you manage to fix it?
<issue_comment>username_9: @username_7 From which php version to which did you update? |
<issue_start><issue_comment>Title: strip mux timestamps to parse logs
username_0: When starting the semtech mux, in order to properly parse log messages passed back over stdout from the port, we need to strip the timestamps generated by the mux so we can pattern-match the log level and dispatch accordingly.
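As a sketch of the kind of stripping involved — the timestamp format and function name here are assumptions for illustration, not necessarily the mux's actual format:

```python
# Sketch: drop an assumed leading ISO-8601-style timestamp so the log level
# becomes the first token and can be pattern-matched for dispatch.
import re

# Hypothetical prefix pattern, e.g. "2023-01-01T12:00:00Z " or "2023-01-01 12:00:00 "
TIMESTAMP_PREFIX = re.compile(r'^\d{4}-\d{2}-\d{2}[T ][\d:.]+Z?\s+')

def log_level(line):
    # Strip the timestamp (if present), then take the first remaining token.
    stripped = TIMESTAMP_PREFIX.sub('', line, count=1)
    return stripped.split(None, 1)[0]
```

Lines without a timestamp prefix pass through unchanged, so the same function handles both cases.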
<issue_comment>username_1: I copy-pasted the logging logic from gateway-rs so this should make the output match, as far as I know. |
<issue_start><issue_comment>Title: Ensure VirtualBox virtual disks are stored in the correct directory on Windows hosts
username_0: This change is a fix for https://github.com/mitchellh/vagrant/issues/3584.
<issue_comment>username_1: This bug still exists in vagrant version: 1.7.2
<issue_comment>username_2: If this was fixed in a 1.x.x version, then re-appears in a 2.x.x version (I am using Vagrant 2.2.3) would this be a regression issue? I will try using an older version of Vagrant to see if the VMDK path issue was fixed then re-added.
<issue_comment>username_3: Same problem here.
Vagrant 2.2.3 + VirtualBox 6.0.2 & 6.0.4 = PROBLEM OCCURS
Vagrant 2.2.3 + VirtualBox 5.2.22+ = OK |
<issue_start><issue_comment>Title: www.composeregistry.com search
username_0: Currently the compose search engine is fixed to https://www.composeregistry.com.
Could this be made configurable?
<issue_comment>username_1: Hi, there is no option to configure the docker-compose registry at the moment.
The problem is that there isn't a "standard" API, so you should implement an API like this: https://www.composeregistry.com/documentation
However, I plan to release the Docker Compose registry source code soon (I am the maintainer of Compose Registry), so it will be possible to have a self-hosted instance. Then I will add an option to configure the registry URL in docker-compose-ui.
<issue_comment>username_0: That would actually be great. Thanks.
<issue_comment>username_1: Hi @username_0 I've been working on a registry to be released as open source in the next days.
Would you like to help me test it? Thanks
<issue_comment>username_0: Yes, sure, no problem. Just send me the info.
<issue_comment>username_1: please send me an email so that I can add you as contributor
<issue_comment>username_0: My GitHub accountname is username_0.
Email address is on the cc.
<issue_comment>username_2: @username_1 any update(s) on this? I'd be OK with just a configuration option to set the registry URL. Then, I can just implement the API endpoints I need in a custom app.
<issue_comment>username_1: Hi @username_2
I didn't release the registry (yet) due to lack of time to work on it (I've made some tests with the help of @username_0 but I would like to improve the documentation before releasing it).
I would really appreciate it if you could contribute to the project.
Meanwhile, I've created an experimental branch: [custom-registry](https://github.com/username_1/docker-compose-ui/tree/custom-registry) to customize the docker-compose registry URL (you just have to add a `DOCKER_COMPOSE_REGISTRY` env). The feature is not finished yet but it should work.
<issue_comment>username_2: :+1:
That looks like exactly what I need for my use case. I'll check it out and let you know how it works.
<issue_comment>username_1: Hello @username_0 and @username_2
I've created a basic docker compose registry implementation: https://github.com/username_1/compose-registry and a swagger specification: https://raw.githubusercontent.com/username_1/compose-registry/master/api/swagger.yaml
Instructions:
```
docker run --rm \
--name compose-registry \
--volume $(pwd)/demo-projects/:/projects/:ro \
--publish 8080:8080 \
--read-only \
username_1/compose-registry:0.2
```
```
docker pull username_1/docker-compose-ui
```
```
docker run --name docker-compose-ui \
-p 5000:5000 \
-e DOCKER_COMPOSE_REGISTRY=http://my-compose-registry:8080 \
--link compose-registry:my-compose-registry \
-w /opt/docker-compose-projects/ \
-v /var/run/docker.sock:/var/run/docker.sock \
username_1/docker-compose-ui
```
You can customize the current Node.js registry implementation or create a new one using another language/framework from the Swagger specification, following this documentation: https://github.com/username_1/compose-registry
Any feedback is welcome.
<issue_comment>username_3: www.composeregistry.com is long down and parked at GoDaddy. This issue could be renamed to *custom compose registry* or alike.<issue_closed> |
<issue_start><issue_comment>Title: DBAL-968 - Rebuilt doModifyLimitQuery in SQLServerPlatform and fixed invalid test cases.
username_0: The recent change to SQLServerPlatform.php (https://github.com/doctrine/dbal/commit/17dad30dc9acd91a5cda0da2c5ce2c40d522f766) broke the ORM Paginator's queries on SQL server.
I investigated, and found that some of the test cases for the SQL Server platform weren't actually correct SQL. Also, there were no test cases that covered what the paginator is doing, so I've written test cases for those. I will open a pull request for this issue.
The modifyLimitQuery method in SQLServerPlatform.php should be fixed to pass the fixed old tests and the new tests.
My concern is that that method is becoming too complex, but that's an issue for another day.
<issue_comment>username_0: Not needed due to a better solution. See #818 |
<issue_start><issue_comment>Title: CentOS 6 build fails without perl-ExtUtils-MakeMaker RPM
username_0:
```
make[1]: Leaving directory `/var/cache/omnibus/src/collectd-5.2.1/bindings'
STDERR: Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 1.
BEGIN failed--compilation aborted at Makefile.PL line 1.
make[2]: *** [buildperl/Makefile] Error 2
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1
---- End output of make ----
Ran make returned 2
```
<issue_comment>username_1: If CentOS:
`yum install perl-devel perl-CPAN`
<issue_comment>username_2: @username_1 in CentOS6, `perl-devel` alone fixed cpanm local installation. Thank you. |
<issue_start><issue_comment>Title: [WIP] Refactor tf_keras_models
username_0: Refactors `deepchem/models/tf_keras_models`. Moves new layers to `dc.nn`. Attempts to minimize Keras dependencies (to fix #345 and #296) by limiting all Keras dependencies to `dc.nn` and explicitly copying Keras code that DeepChem depends on into `dc.nn`.
Will take a few iterations before tests pass.
<issue_comment>username_0: Changes in this PR:
- Removes `deepchem/deepchem/models/keras_models/`
- Moves `deepchem/deepchem/models/tensorflow_models/model_ops.py` to `deepchem/nn/model_ops.py`.
- Copies many Keras-TF functions from `keras/keras/backend/tensorflow_backend.py` to `deepchem/nn/model_ops.py`
- Copies many Keras layers from `keras/keras/engine/topology.py` and `keras/keras/layers/core.py` to new files `deepchem/nn/copy.py`. Have stripped out unneeded functionality, like support for masking and support for sparse tensors. Philosophy is to only support functionality needed for DeepChem models.
- Adds `deepchem/nn/activations.py`, `deepchem/nn/initializations.py`, `deepchem/nn/regularizers.py` and `deepchem/nn/constraints.py`. Not sure how much of this will be needed moving forward.
- Removes all Keras imports from files and the Keras install from `.travis.yml`.
- Unfortunately, this now pulls in a `six` dependency, but it is pretty lightweight. |
<issue_start><issue_comment>Title: How to build sdl for embedded linux environment
username_0: We have set up a Yocto environment on an i.MX6 evaluation board, and would like to run sdl on that board. Is there any guide or demo showing how to modify the CMake setup so as to build sdl on an embedded Linux board?
Thanks.
<issue_comment>username_1: We are working on getting sdl set up to run in a Yocto environment. Once we have it running I will let you know what changes were made in the CMake file.
What architecture type does your eval board use?
<issue_comment>username_0: Great thanks.
The main CPU is an i.MX6DL with dual ARM Cortex-A9 cores at 1/1.2 GHz.
We are also working on migrating sdl to this embedded Yocto environment but have met many problems.
Looking forward to your good news.
<issue_comment>username_2: PR for AGL Port: #1159<issue_closed> |
<issue_start><issue_comment>Title: Added description for bumblebee2 and flea3.
username_0: Common models with simulation and basic test to close #30.
<issue_comment>username_1: LGTM
<issue_comment>username_1: Looks awesome, thanks for contributing this!
<issue_comment>username_0: You saw the Gazebo world?
Regards,
Tony Baltovski, B.Eng
Email: [email protected]
Mobile: +16474016657 |
<issue_start><issue_comment>Title: stops after ICMP redirect
username_0: tracetcp, alas, ends after it receives an ICMP redirect. (tracert and tcptraceroute follow that redirect and give results).
I know I can work around that using -g, but that leaves me with the task of finding out where the hell the redirection went to.
So this would be the feature request:
- follow icmp redirects
- if not possible, give more info about target of redirect
Best regards, m.
<issue_comment>username_1: Hi, ICMP redirects have been on my todo list for a while, you are the first person to ask about them in over ten years :-)
I'll take a look at supporting them.
<issue_comment>username_0: Uh, sorry for having interrupted that blissful state :) I was probably naive thinking the tcp stack might handle that almost on its own. Yeah, would be a nice feature.
<issue_comment>username_2: Have had the same problem occur - will it be possible to address this issue? |
<issue_start><issue_comment>Title: abort if the status update thread has stopped running
username_0: @tpetr the addition to the status update changes we were talking about
- Abort if there isn't a status update thread running since status updates are such a core piece of things
- Update the InterruptedException on adding to the queue to just keep going. Since we won't have acked the status update, it will just be sent again and we'll get it on the next go.
<issue_comment>username_0: merging this into `status-update-thread` branch now that both are in hs_qa |
<issue_start><issue_comment>Title: ValueError: total size of new array must be unchanged - Using Mnist Dataset
username_0: The code from load.py is generating the error "total size of new array must be unchanged". It just loads the MNIST dataset into an array and then reshapes it. The error occurs while reshaping (in the 3rd line), as shown below:
```
fd = open('C:\\Users\\***\\Desktop\\MNIST Dataset\\train-images.idx3-ubyte.gz')
loaded = np.fromfile(file=fd, dtype=np.uint8)
---> trX = loaded[16:].reshape((60000, 28*28)).astype(float)
ValueError: total size of new array must be unchanged
```
I know what the function of reshape is doing. I just didn't figure it out that how to resolve this error. I have tried many things but didn't work in my favour. Can anyone suggest me any solution?<issue_closed>
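For later readers: a likely cause is that the `.gz` file is read without decompression, so `np.fromfile` sees gzip-compressed bytes and the element count no longer matches `60000 * 28 * 28`. A minimal sketch of that fix — the path handling and helper name are illustrative, and the standard MNIST IDX layout (16-byte header, then raw pixels) is assumed:

```python
# Sketch: decompress the gzip stream first, then skip the 16-byte IDX header
# before reshaping into (count, 28*28).
import gzip
import numpy as np

def load_mnist_images(path, count=60000, side=28):
    with gzip.open(path, 'rb') as fd:          # gzip.open decompresses on read
        data = np.frombuffer(fd.read(), dtype=np.uint8)
    return data[16:].reshape((count, side * side)).astype(float)
```

`np.frombuffer` is used instead of `np.fromfile` because the decompressed stream is read into memory first rather than parsed directly from the file handle.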
<issue_comment>username_1: how you fix this error? |
<issue_start><issue_comment>Title: Enter key when I write an invalid match
username_0: When I write an invalid match and press the `Enter` key, and after this I try to write again, there is an issue. I think that the `At` is not removing the selection of the previous node.
<issue_comment>username_1: :+1:
<issue_comment>username_2: What is the issue?
Please give me a demo.
<issue_comment>username_3: :+1:
Similarly, if you have the option `highlightFirst: false` and you begin a match then hit ENTER, the span `atwho-query` is not removed and causes bugs when you write again.
<issue_comment>username_0: @username_2 Did you understand?
I resolved this issue by not allowing ENTER inside a highlight, but it's not a good practice.
<issue_comment>username_4: Hi, I having the same problem, any solution?
<issue_comment>username_0: Of course @username_4,
But I'm using the [rangy](https://github.com/timdown/rangy) lib to work with the node in contenteditable.
Here are some methods I created to do it:
```
constructor() {
  // bind so `this` still refers to this class inside the handler
  this.contentEditor.on("keydown", this.onKeyDownEditor.bind(this));
}

getElementNodeSelected() {
  let sel = rangy.getSelection();
  let rangeSelection = sel.getRangeAt(0);
  return rangeSelection.commonAncestorContainer.parentNode;
}

isHighLightNode(node) {
  return node.tagName === 'SPAN';
}

onKeyDownEditor(event) {
  let selectedNode = this.getElementNodeSelected();
  // 13 is the ENTER key; block it while the caret is inside a highlight span
  if (event.keyCode === 13 && this.isHighLightNode(selectedNode)) {
    event.preventDefault();
  }
}
``` |
<issue_start><issue_comment>Title: mdCard: Flex and layout-wrap Causing Premature Wrapping
username_0: **Actual Behavior**:
- `What is the issue? *` When using multiple md-cards in a row layout combined with layout-wrap, the last card of the row is being wrapped prematurely.
- `What is the expected behavior?` As long as the combined flex values sum to 100 (or under), they should remain on the same line (eg 5x20, 4x25, 3x33).
**CodePen** (or steps to reproduce the issue): *
- `CodePen Demo which shows your issue:` http://codepen.io/anon/pen/pbaGqO
- `Details:` Experimenting with different flex values, it appears to work when the flex sum is 95 or less, but no combination greater. You can try any combination of flex values and see the issue occur.
**Angular Versions**: *
- `Angular Version:` 1.5.5
- `Angular Material Version:` 1.0.9 and 1.1.0-rc5
**Additional Information**:
- `Browser Type: *` Chrome
- `Browser Version: *` 50.0.2661.102 (64-bit)
- `OS: *` Mac OSX 10.11.5 (15F34)
Thanks.
<issue_comment>username_1: I believe this is a padding/margin issue.
<issue_comment>username_2: Flexbox does not take the padding of the child elements into consideration when calculating flex breakpoints. Decreasing `max-width` on each breakpoint alleviates this problem for me. See [this codepen](http://codepen.io/anon/pen/oYjqWy).
<issue_comment>username_3: I'm experiencing this issue. If you have multiple breakpoints then the above fix will simply use the last override. Would be good if `layout-wrap` accommodated cards (or vice-versa). |
<issue_start><issue_comment>Title: vinyl: vy_page_read must not re-read xlog meta every time
username_0: The xlog meta must be read only once, during recovery. vy_read_iterator() must not trigger xlog_meta_parse():
```
2 (00007f6748cbb180) :0:__GI__IO_sputbackc
(00007f6748c99978) :0:__GI__IO_vfscanf
(00007f6748caff26) :0:_IO_vsscanf
(00007f6748caa2f6) :0:__GI_sscanf
(0000000000418d4c) ??:0:xlog_meta_parse <!-- WTF
(000000000041bb10) ??:0:xlog_cursor_openfd
(00000000004640db) ??:0:vy_read_iterator_use_range
(0000000000473d8c) ??:0:vy_read_iterator_next
(0000000000474097) ??:0:vy_get
(0000000000451fa5) ??:0:vinyl_insert_secondary
(00000000004521b7) ??:0:vinyl_replace_all [clone .isra.6]
(000000000045258d) ??:0:VinylSpace::executeReplace
(000000000048ab32) ??:0:process_rw
(000000000048c6bc) ??:0:box_process1
(00000000004a0d27) ??:0:lbox_replace
(00000000004d86e6) ??:0:lj_BC_FUNCC
(000000000049e476) ??:0:execute_lua_call
(00000000004d86e6) ??:0:lj_BC_FUNCC
(00000000004e9028) ??:0:lua_cpcall
(00000000004c0d52) ??:0:lbox_cpcall
(000000000049e986) ??:0:box_lua_call
(000000000048e58d) ??:0:box_process_call
(0000000000410dd1) ??:0:tx_process_misc
(00000000004cd8c4) ??:0:fiber_pool_f
(000000000040ea0b) ??:0:fiber_cxx_invoke
(00000000004cafbf) ??:0:fiber_loop
(00000000005ef92e) ??:0:coro_init
(ffffffffffffffff) ??:0:0xffffffffffffffff
```
<issue_comment>username_0: Please ensure that:
- [ ] Metadata of the .index file is checked on recovery
- [ ] meta->filetype of .index and .run files is checked on recovery<issue_closed> |
<issue_start><issue_comment>Title: Can't run on last chrome
username_0: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36 | WebGL 1.0 (OpenGL ES 2.0 Chromium) | WebKit | WebKit WebGL | WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium) Three.js:264
WebGL: INVALID_OPERATION: getAttribLocation: program not linked Three.js:741
WebGL: INVALID_OPERATION: getAttribLocation: program not linked Three.js:741
WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:741
WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:741
WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:741
WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:741
WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:741
WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:742
WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:742
WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:742
DEPRECATED: Camera hasn't been added to a Scene. Adding it... Three.js:283
Could not initialise shader
VALIDATE_STATUS: false, gl error [0] Three.js:340initMaterial Three.js:340m Three.js:241renderBuffer Three.js:275update Three.js:752render Three.js:748h Three.js:239render Three.js:283animate main.js:302
9WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:341
18WebGL: INVALID_OPERATION: getAttribLocation: program not linked Three.js:342
WebGL: INVALID_OPERATION: useProgram: program not valid Three.js:242
2WebGL: INVALID_OPERATION: drawElements: no valid shader program in use Three.js:282
Could not initialise shader
VALIDATE_STATUS: false, gl error [1282] Three.js:340
38WebGL: INVALID_OPERATION: getUniformLocation: program not linked Three.js:341
18WebGL: INVALID_OPERATION: getAttribLocation: program not linked Three.js:342
WebGL: INVALID_OPERATION: useProgram: program not valid Three.js:242
WebGL: INVALID_OPERATION: drawElements: no valid shader program in use Three.js:281
[Truncated]
WebGL: too many errors, no more errors will be reported to the console for this context.
<issue_comment>username_1: The same here at: http://username_2.github.io/fractal-terrain-generator/demo/<issue_closed> |
<issue_start><issue_comment>Title: std.mem.zeroInit fails to initialize a field
username_0: ### Zig Version
0.9.0
### Steps to Reproduce
[This test case](https://gist.github.com/username_0/ceced9788167c13a32e1992306a19ad7) illustrates a field initialization failure.
### Expected Behavior
All fields should be initialized to initializer, default value, or zero.
### Actual Behavior
A field with a default value is not being set to that value.
<issue_comment>username_1: Intended behaviour. Please read doc comments.
https://github.com/ziglang/zig/blob/d15bbebe2e6b8fcbfcd730a6c0d1be621b27045d/lib/std/mem.zig#L423-L426
```
/// Initializes all fields of the struct with their default value, or zero values if no default value is present.
```
<issue_comment>username_1: Can reproduce this on master (0.10.0-dev.1618+7ae22813e)
<issue_comment>username_0: I happened to test with a packed struct, but same thing with a plain struct.
<issue_comment>username_1: @username_2 This should be tagged as a std bug, not a stage1 bug, right?
<issue_comment>username_2: Looks like a packed struct bug to me.
<issue_comment>username_1: Can't be, since it occurs with regular structs as well, as @username_0 stated. I can repro on my system regarding this. Unless it's a bug with `@typeInfo`?
<issue_comment>username_0: @username_2 nope, same bug with regular structs.
<issue_comment>username_1: I remember actually encountering but overlooking this bug. For some reason, there isn't actually a test case covering this at all.
<issue_comment>username_2: Well the code in `zeroInit` seems to be doing the correct thing so it is still a stage1 bug.
<issue_comment>username_1: Nope it's not -- I've found the issue. `.{}` is a tuple. This causes `zeroInit` to branch here: https://github.com/ziglang/zig/blob/d15bbebe2e6b8fcbfcd730a6c0d1be621b27045d/lib/std/mem.zig#L435-L440, and so never returns with default values. There should be a check for whether this tuple is zero-length.
<issue_comment>username_1: There should also arguably be a check to determine whether the tuple has the correct number of fields, which currently is not implemented.
<issue_comment>username_2: Ah, I see I was looking at the struct path.
<issue_comment>username_0: I haven't done any compile time programming yet, but that seems like an easy mistake to make.
<issue_comment>username_0: Yes, because that is the way to zero memory with just the defaults and no additional field initialization.
<issue_comment>username_1: No, I wasn't talking about your use case there. I was thinking more generally about the use of tuples there, but I see the use case for that too now.<issue_closed> |
<issue_start><issue_comment>Title: Baby steps mode
username_0: This turns js/id/actions into CommonJS modules and creates js/id/actions.js with browserify. Refs #891 - this is an approach that would let us gradually transition to modules, by porting them into `modules/`, including them from `dist/modules`, and gradually introducing `require()`
cc @username_2 for the review
<issue_comment>username_1: thats awesome @username_0 βοΈβοΈβοΈ
<issue_comment>username_0: Re-ran the tests and they pass now. @username_1 and @username_2 want to review, see if this approach will let us do the gradual path? Given that other PRs will land soon, I want to avoid conflict city as soon as someone PRs anything related to actions.
<issue_comment>username_2: I kind of feel weird generating the file into `dist/modules/actions.js`. Anything we put into `dist`, people might build on and expect to stick around.
Would it make more sense to put it under something like `js/lib/modules` instead? When this is done, most of `js/lib` will go away and be replaced by modules, right?
<issue_comment>username_2: Also, per chat with @username_1, any thoughts on using [rollup](http://rollupjs.org/) instead of browserify? It seems to support more kinds of module patterns, maybe could be useful if we ever want proper es6 modules. Downside is it's newish and doesn't have great documentation.
We aren't opinionated about this, just wondering if you have experience with it.
<issue_comment>username_0: I'm totally open to it as an experiment, but I have doubts. Rollup supports _only_ ES6 modules on the way in, and [ES6 modules, while great, have a very uncertain future](https://medium.com/@nodesource/es-modules-and-node-js-hard-choices-2b6995e4d491?utm_source=javascriptweekly&utm_medium=email). There's zero node.js support, and zero browser support, so any node-based or browser-based iD tests, even of pure, non-browser dependent code, would need transpilation. The same with if we ever wanted to provide iD components as standalone modules: they would always need to go through rollup in order to have a `main` entry that's loadable in node. And if we want to provide modules that provide 'es6 goodness' and let people take advantage of the tree-shaking abilities of rollup, we'd need both a transpiled `main` and non-transpiled `jsnext:main`.
The benefits of ES6 modules - slightly smaller filesize and enhanced export semantics - are real, but I'm not convinced that they outweigh the cost of integrating with any other JavaScript and the uncertainty around the standard's future.
But on the other hand ¯\_(ツ)_/¯ it could be fun
<issue_comment>username_2: Thanks for helping with this @username_0!
I moved the intermediate file from `dist/modules` to `js/lib/id`. For now this is less disruptive to downstream projects like openstreetmap-website.
Sticking with `browserify` for now - we can switch to something else later if it makes sense to do so. |
<issue_start><issue_comment>Title: 404 nothing found!
username_0: https://github.com/jsxc/jsxc/wiki/Install-sjsxc-(SOGo)
On this page when we try to click on the releases, this leads to 404 nothing found page.
<issue_comment>username_1: Sorry. I updated the article accordingly. The right url is https://github.com/jsxc/jsxc.sogo/releases<issue_closed> |
<issue_start><issue_comment>Title: Run in parallel mode with JUnit
username_0: See http://groups.google.com/group/cukes/msg/12ad0e5b2f2a846a
<issue_comment>username_1: Hi username_0, I have found a neater way of implementing Cucumber JVM parallel runs. I have a small POC ready. Let me know how I can send it for review?
<issue_comment>username_0: Just send a pull request or put it in a new repo on github.
<issue_comment>username_1: Hi username_0, I've forked the project, added the POC-related changes to the JUnit project, and committed them to the forked source here: https://github.com/username_1/cucumber-jvm.git. Look at the package cucumber.api.junit.parallel. Let me know your views on whether this approach is doable; if so, I can work on it further and get this feature implemented properly.
Regards, Mrunal
<issue_comment>username_0: Many people use Cucumber without using the JUnit runner. Core functionality (such as parallel execution) should not depend on JUnit.
Another thing we need to handle well is results - if there are many threads running scenarios, the results should still be collected and reported in the same way as if there were only one thread.
We'll implement this our own way, based on Gherkin3's pickles.
<issue_comment>username_1: OK, sounds fair enough. I'll probably keep it in some kind of util and share it if anyone is interested until Gherkin3 comes alive.
Regards
Mrunal
<issue_start><issue_comment>Title: Add Rocker template benchmark
username_0: Added a benchmark for Rocker templates. Based on my executions, Rocker is the *fastest* template engine per your benchmarks.
https://github.com/fizzed/rocker
<issue_comment>username_1: Oh man, it sucks getting bumped down to second but congratulations. I've been watching Rocker for awhile now, it looks really great. I'll have to step up my game.
Hopefully this weekend I'll have some time to review and merge this PR.
<issue_comment>username_1: Rocker.java doesn't compile for me? I get an error on line 30.
Rocker.java:[30,24] error: package templates does not exist
<issue_comment>username_1: Nevermind, I merged poorly |
<issue_start><issue_comment>Title: Add Style Guide
username_0: Since I know I'm a stickler about code style we should have a style guide that we can point to when asking people to clean up contributions. A PR can still be merged without following it, but only if someone on @TabletopAssistant/dicekit cleans it up right after.
<issue_comment>username_0: @TabletopAssistant/dicekit do we feel this is needed for release [0.1](https://github.com/TabletopAssistant/DiceKit/milestones/0.1:%20It%20Starts%20with%20a%20Single%20Die)?
<issue_comment>username_1: No, since 0.1 was originally defined as everything needed to be used by others. Contributing is not necessary for usage. :)
<issue_comment>username_0: Good call. This can get in whenever then, which is good, because I would rather get 0.1 and 0.2 out ASAP so I can start talking about the project more broadly... though I might get the style guide done before that because before people contribute I would want this in place (I think).
<issue_comment>username_2: Also agreed here. Not needed yet but will be valuable eventually. |
<issue_start><issue_comment>Title: False positive when var is used on lhs of regex match
username_0: This code fails its test with:
# $entaliasmappingidentifier is used once
```perl
my $entaliasmappingidentifier = $datahash->{entAliasMappingIdentifier};
# only match ifIndex mappings
if ( my ($snmpid_interface) = ( $entaliasmappingidentifier =~ m/^ifIndex\.(\d+)$/xms ) ) {
$entity_interface->{$entity_snmpid} = $snmpid_interface;
}
```
<issue_comment>username_1: This bug has also hit me today. This sub:
```perl
sub set_file_type
{
my ($self, $type) = @_;
if ($type =~ /^handle$/i) {
$self->{file_type} = 'handle';
} else {
$self->{file_type} = 'name';
}
}
```
fails with "$type is used once in &CGI::Lite::set_file_type at lib/CGI/Lite.pm line 723" where line 723 is the declaration of $type, so it seems to be ignoring the regex line. I'm using the latest version of Test::Vars (0.008) and this failure **only occurs when running under perl 5.22**. The test passes on 5.20, 5.18, 5.14 and 5.10.
Hope this extra info helps to narrow it down.
<issue_comment>username_2: I have the same issue in the Pod::Spell distribution, in sub `Pod::Wordlist::learn_stopwords` ([source](https://metacpan.org/source/DOLMEN/Pod-Spell-1.18/lib/Pod/Wordlist.pm#L28)): the `$text` variable is reported as being used only once with Test::Vars 0.08.
```perl
sub learn_stopwords {
my ( $self, $text ) = @_;
...
while ( $text =~ m<(\S+)>g ) {
...
}
}
```<issue_closed> |
<issue_start><issue_comment>Title: Support secret parameters feature
username_0: Fluentd 0.12.13 or later supports the secret parameter feature.
This feature works as below:
```log
<match mysql.input>
type mysql_bulk
host localhost
database test_app_development
username root
password xxxxxx
column_names id,user_name,created_at,updated_at
table users
flush_interval 10s
</match>
</ROOT>
```
<issue_comment>username_1: Hmm.. How does this feature work with old versions?
<issue_comment>username_0: When using an older version of fluentd, this secret parameter has no effect. It will simply be ignored.
<issue_comment>username_1: Ah, that's right.
No problem!
:+1:
<issue_comment>username_0: Thanks!
<issue_comment>username_1: Updated to version 0.0.7.
Pushed to rubygems.
https://rubygems.org/gems/fluent-plugin-mysql-bulk |
<issue_start><issue_comment>Title: the new rel-urls key is making mf2py fail all the tests
username_0: Should I add it to the results and send a pull request?
http://testrunner-47055.onmodulus.net/run/mf2py/all/
background: http://microformats.org/wiki/microformats2-parsing-brainstorming#more_information_for_rel-based_formats
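For reference, the brainstormed `rel-urls` key adds per-URL attributes alongside the existing top-level `rels` map. A rough sketch of the shape (the example URL and values are made up, and the exact attribute set was still being discussed on the brainstorming page at the time):

```json
{
  "items": [],
  "rels": {
    "me": ["https://example.com/"]
  },
  "rel-urls": {
    "https://example.com/": {
      "rels": ["me"],
      "text": "my homepage"
    }
  }
}
```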
<issue_comment>username_1: Hi Kevin
I am not sure if it should be part of the main microformats-v2 tests group at the moment.
I consider anything that makes it onto the parsing rules page http://microformats.org/wiki/microformats2-parsing to be the strict structure of the output. Once it's talked through with the community and gets agreement to move from brainstorming to the parsing rules, then fine, it should be in there.
For now we could add it as an experimental group to the tests, i.e. a new directory in 'tests' called 'experimental' with pairs of HTML and JSON files for each test.
A pull request with tests would help. If it's a new test group, can you also add a change-log.html file along the lines of what's already there? I am going to write up a little contribution guide soon.
I will also look at a way of allowing parsers to support new experimental top-level structures without all the tests throwing errors.
<issue_comment>username_2: Noting this issue on the brainstorming doc - so I can resolve it once I've made the edit to microformats2-parsing.
<issue_comment>username_1: Tantek moved rel-urls into the parsing rules and out of brainstorming, so it is in the latest tests<issue_closed>
<issue_start><issue_comment>Title: Application under sub path
username_0: I'm hitting a wall while trying to implement a new app under a sub path. I apologise for bringing this up as an issue, as I don't think it is an issue per se, but I couldn't see any other means of communicating.
I'd like to have my application completely handled under the path `/authorize`, yet I can't for the life of me figure out how to get this working. I understand that I'll need to update the location of the `src` files, and then the various configuration files in the gulp directory, but it feels like I can't quite get the right combination of configuration.
Is it possible to get a helping hand, or some guidance in the right direction?
<issue_comment>username_1: I have an app deployed under a context right now using the generator. The context or sub path is simply an alias in the HTTP server.
In dev, I keep the app in the root context, just being very careful that all my links are relative, and all is working fine.
But I must admit, I didn't succeed in configuring BrowserSync to simulate the context sub path in dev.
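For what it's worth, one way to simulate the sub path in dev is to remap a URL prefix onto the app's base directory; browser-sync's `server.routes` option does this kind of mapping. A minimal sketch of the lookup itself, in plain Node with made-up paths:

```javascript
// Sketch of the prefix-to-directory lookup that a `routes` mapping performs.
// The route table below is an example, not taken from the issue.
const routes = { '/authorize': 'app' };

function resolveRoute(routes, url) {
  for (const prefix of Object.keys(routes)) {
    if (url === prefix || url.startsWith(prefix + '/')) {
      // Serve the file from the mapped directory instead of the URL path.
      return routes[prefix] + url.slice(prefix.length);
    }
  }
  return null; // not under any mapped sub path
}

console.log(resolveRoute(routes, '/authorize/index.html')); // app/index.html
console.log(resolveRoute(routes, '/other/page.html'));      // null
```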
<issue_comment>username_0: Thanks for the response. It was the links to bower_components that I was finding hard to configure in dev. I know it's kind of an anti pattern, but I was trying to serve files through my own Go server to provide some consistency with the final URL structure & setup but I think after all this I might just have to try simplifying things a little bit.
Cheers<issue_closed> |
<issue_start><issue_comment>Title: add support for custom adapter registration
username_0: I would like to add functionality that allows camo to register custom adapters. That way I could develop my own adapter (such as CouchDB or PouchDB) without touching camo code.
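To make the idea concrete, here is a rough sketch of what such a registration API could look like. Camo does not expose anything like this today, and every name below is invented for illustration:

```javascript
// Hypothetical adapter registry: map a connection-string scheme to a factory.
const adapters = new Map();

function registerAdapter(scheme, factory) {
  adapters.set(scheme, factory);
}

function connect(url) {
  const scheme = url.split('://')[0];
  const factory = adapters.get(scheme);
  if (!factory) {
    throw new Error('No adapter registered for scheme "' + scheme + '"');
  }
  return factory(url);
}

// A third-party package could then register its own backend:
registerAdapter('couchdb', (url) => ({ kind: 'couchdb', url }));

console.log(connect('couchdb://localhost:5984/mydb').kind); // couchdb
```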
<issue_comment>username_1: Interesting, this would match our use case exactly. Would love to see this fleshed out a bit more. @username_0, did you make any additional progress on this? |
<issue_start><issue_comment>Title: Bisect output sometimes incomplete on failure
username_0: Example: https://bugs.chromium.org/p/chromium/issues/detail?id=635513#c3
```
===== BISECT JOB RESULTS =====
Status: failed
===== TESTED REVISIONS =====
Revision Mean Std Dev N Good?
chromium@409796 4.0431 0.0280426 5 good
chromium@409804 4.03511 0.109318 8 good
chromium@409805 4.06369 0.0826172 5 good
chromium@409806 3.60221 0.169349 5 bad
chromium@409808 3.75538 0.0575883 8 bad
chromium@409811 3.72238 0.0346838 5 bad
...
```
Problems with this output:
* It doesn't give any failure reason.
* It doesn't clarify that it may just have partial results
* It's pretty clear that r409806 is the culprit, but it doesn't cc the author, give confidence, print the CL info, etc.
<issue_comment>username_0: Another example: https://bugs.chromium.org/p/chromium/issues/detail?id=635464#c3
<issue_comment>username_1: I've seen this happen at least once when the bisect should have descended into a __Skia roll__: https://bugs.chromium.org/p/chromium/issues/detail?id=633941#c5
<issue_comment>username_2: The failure reason for both of the jobs mentioned in this issue is an exception in the remote_run_recipe step. https://build.chromium.org/p/tryserver.chromium.perf/builders/mac_10_10_perf_bisect/builds/2278
But I totally agree that there should be a detailed description of the bisect failure.
Also saw a remote_run_recipe step exception on an Android bot https://build.chromium.org/p/tryserver.chromium.perf/builders/android_s5_perf_bisect/builds/898
<issue_comment>username_3: Going to close this, output has changed significantly since this was filed.<issue_closed> |
<issue_start><issue_comment>Title: pull request for e2e test at 12/22/2016 08:26:14
username_0:
<issue_comment>username_0: ### :white_check_mark: Validation status: passed
For more details, please refer to the [build report](https://opbuildstoragesandbox2.blob.core.windows.net/report/2016%5C12%5C22%5C623dccef-8d46-20fb-c28a-9374b0737101%5CPullRequest%5C201612220826209005-16971%5Cworkflow_report.html).
**Note:** If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report. |
<issue_start><issue_comment>Title: don't mistake directories for executables
username_0: - add `assert` package and `rimraf` package for test
* * *
I ran into a problem wherein scripty thought a directory was a script. Unfortunately, I'm unable to reproduce this in the context of the integration tests, though I tried.
- I have a `scripts/test/browser/` dir. That dir has an `index.sh` and a `dev.sh`.
- The `scripts/test/` dir has an `index.sh` which invokes `test:browser`, `test:node`, `test:lint`, etc.
- Somewhere in there, scripty thinks `scripts/test/browser/` is a script to be executed:
```
Error: scripty - failed trying to read "/Volumes/alien/mochajs/mocha-core/scripts/test/browser":
EISDIR: illegal operation on a directory, read
internal/child_process.js:302
throw errnoException(err, 'spawn');
^
Error: spawn EACCES
at exports._errnoException (util.js:890:11)
at ChildProcess.spawn (internal/child_process.js:302:11)
at exports.spawn (child_process.js:367:9)
at /Volumes/alien/forked/scripty/lib/scripty.js:30:17
at /Volumes/alien/forked/scripty/node_modules/async/lib/async.js:718:13
at iterate (/Volumes/alien/forked/scripty/node_modules/async/lib/async.js:262:13)
at async.forEachOfSeries.async.eachOfSeries (/Volumes/alien/forked/scripty/node_modules/async/lib/async.js:281:9)
at _parallel (/Volumes/alien/forked/scripty/node_modules/async/lib/async.js:717:9)
at Object.async.series (/Volumes/alien/forked/scripty/node_modules/async/lib/async.js:739:9)
at /Volumes/alien/forked/scripty/lib/scripty.js:14:11
```
This change solves the issue.
Also, I included userland `assert`, as using the core `assert` is not recommended by the Node.js team.
<issue_comment>username_1: FWIW, I was actually planning on switching to power-assert tonight, so the second point may end up being moot, but that's not a big deal.
<issue_comment>username_1: LGTM, thanks! :sparkles:
Sorry about the ugliness of that particular test you had to update, btw. It does silly stuff like creating its own files and cleaning them up, which predates having some local fixture directories to try to avoid that. I'll try to clean that up later.
<issue_comment>username_1: Landed in 1.2.2 |
<issue_start><issue_comment>Title: TypeError: Cannot read property 'Observable' of undefined
username_0: Hi everyone,
I'm trying to create an example with ES6 syntax.
I've run into a problem that I don't understand:
ERROR:
```
angular.js:13920 TypeError: Cannot read property 'Observable' of undefined
at new RxjsService (http://localhost:3001/bundle.js:78661:35)
at Object.rxjsFactory (http://localhost:3001/bundle.js:78683:12)
at Object.invoke (http://localhost:3001/bundle.js:50825:20)
at Object.enforcedReturnValue [as $get] (http://localhost:3001/bundle.js:50664:38)
at Object.invoke (http://localhost:3001/bundle.js:50825:20)
at http://localhost:3001/bundle.js:50624:38
at getService (http://localhost:3001/bundle.js:50771:40)
at injectionArgs (http://localhost:3001/bundle.js:50795:59)
at Object.invoke (http://localhost:3001/bundle.js:50817:19)
at $controllerInit (http://localhost:3001/bundle.js:56461:35) <div ui-view="mainView" class="anim-in-out anim-slide-below-fade ng-scope" data-anim-speed="300">
```
This is my code:
```js
class RxjsService {
constructor(rx) {
this.rx = rx;
// this.rxjsFactoryObserver = rxjsFactoryObserver;
console.log(this.rx); // it's undefined
this.rxjsFactory$ = new this.rx.Observable.create((observer) => {
this.rxjsFactoryObserver = observer;
}).share();
this.data = {
users: [
{name: 'John Rambo', isVisible: true, age: 32},
{name: 'Pablo Picasso', isVisible: true, age: 64}
]
};
}
getUsers() {
this.rxjsFactoryObserver.next(this.data.users);
}
addUser() {
this.data.users.push({name: 'John Kennedy', isVisible: true, age: 87});
this.rxjsFactoryObserver.next(this.data.users);
}
static rxjsFactory() {
return new RxjsService();
}
}
RxjsService.$inject = ['rx'];
export default RxjsService.rxjsFactory;
```
And where I import files (index.js):
```js
import rx2 from 'rx/dist/rx.all.min';
import rx from 'rx-angular';
// Create our app module
export const demoModule = angular.module('demo', [
'rx'
]);
```
Any idea? Thank you! :)
<issue_comment>username_1: I wasn't able to inject rx either with a similar setup. I wanted to use observeOnScope. I was on 1.1.3, tried 1.0.4, and it didn't work, so I went back to watches.<issue_closed>
<issue_comment>username_0: OK, that works now. Someone helped me on the Angular Gitter.
It's because I hadn't declared the call to my service correctly.
Look at this little correction; it changes everything, because I didn't inject the rx library into my static function in the initial version.
```js
class RxjsService {
constructor(rx) {
'ngInject';
this.rx = rx;
this.rxjsFactoryObserver = null;
this.rxjsFactory$ = new this.rx.Observable.create(observer => {
this.rxjsFactoryObserver = observer;
}).share();
this.data = {
users: [
{name: 'John Rambo', isVisible: true, age: 32},
{name: 'Pablo Picasso', isVisible: true, age: 64}
]
};
}
getUsers() {
this.rxjsFactoryObserver.next(this.data.users);
}
addUser() {
this.data.users.push({name: 'John Kennedy', isVisible: true, age: 87});
this.rxjsFactoryObserver.next(this.data.users);
}
}
function Factory(rx) {
'ngInject';
return new RxjsService(rx);
}
export default Factory;
``` |
<issue_start><issue_comment>Title: Massive Overhaul and Improvements
username_0: I have been working for some time working on a massive set of improvements and updates to the library. I wanted to hold off on submitting a pull request until I stabilized things a little more and re-added some of the functionality I destroyed to make things easier, but it looks like you're back to working on the library in earnest, so I figured I'd open this to try to prevent wasted effort.
This pull request isn't really mergeable as-is: it conflicts with many of your most recent changes, some functionality like the network module and refcounted resources are unavailable, and there's still improvements to be made. It does partially or completely fix many open issues, however, so I would like to bring the changes up for discussion before too much effort is duplicated.
I can understand if some of these changes conflict with your vision for the binding, but I wanted to open things up to discussion. If you want to more easily see an overview of the structure as I have built, I recommend using `cargo doc`. If needed, I can host a copy of these docs.
Breakages:
* The `network` module is missing.
* Reference-counted versions of `Sprite`, `Sound`, etc. are missing.
* Backwards compatibility is completely shot.
Changes and fixes (see the commit list for specifics):
* #31 - view management returns properly borrow-checked references.
* #92 - bit flags are used for TextStyle and WindowStyle.
* #93 - IntRect/FloatRect are `Rect<T>`.
* #94 (partial) - entirely updated to CSFML 2.2, pending release of CSFML 2.3.
* Replaced `system` module with a native Rust implementation, to remove the
`libcsfml-system` dependency.
* Replaced VertexArray with a native Rust implementation wrapping a `Vec<Vertex>`.
* Removed re-exports of enum variants - they are instead scoped to their enum.
* Changed mouse and keyboard functions to be methods on `MouseButton` and `Key`.
* Refactored a lot of incredibly repetitious or otherwise ugly code (such as matches on `SfBool`).
* Reduced `Drawable` to one method (matching C++) and made it object-safe.
* Added `Foreign` wrapper type to simplify ownership and mutability correctness.
* Heavily tidied `ffi::` modules.
* Fixed broken Unicode handling on Window titles and Text strings.
* Added `Transformable`, `Shape`, `SoundSource`, other traits representing common functionality.
* Changed `rand` crate to only a dev-dependency, since it's only used by examples.
* Implemented `Default`, `Debug`, etc. on more applicable types.
* Added construction from memory and from input streams to all resources.
* Added `SoundRecorder` and `SoundStream` for custom audio input and output.
* Removed unsound `Events` iterator types.
* Other miscellaneous missing functionality.
* Updated all the examples to reflect the changes.
* Reviewed the entire documentation and fixed all problems found.
To do:
* Re-add the `network` module.
* Re-add ref-counted versions of things (possibly with generics).
* Improve ergonomics of Vector2 and Vector3 conversion and use.
* Potentially replace `*2f` methods with either the use of `(x, y).into()` or
generic methods taking `<T: Into<Vector2>>`.
* Add more checks to prevent undefined calls to SFML functions.
* Improve documentation regarding error conditions.
* Possible other internal improvements.
* Help investigate Cargo and Travis configuration.
* Implementation of `Clock` for OSX.
* Fix the incredibly broken indentation (sorry).
* Problems pending CSFML changes:
* Non-window-specific mouse and touch positioning?
* `keyboard::set_virtual_keyboard_visible`
* Missing methods provided only for `SoundRecorder` but not `SoundBufferRecorder`
* Missing license headers on some files.
* Update README.
Thanks for your time.
<issue_comment>username_1: Hey, thanks for taking the time to improve rust-sfml.
I will be happy to merge your improvements into the repository. But it will really improve the process if you can create a different pull request for each feature. For now it's really difficult to review the code.
Moreover, opening different pull requests will make it easier to talk about the changes, as there are some I'm not really okay to merge.
<issue_comment>username_0: Sounds fine. I can work on getting the smaller features or cleanup into separate pull requests, but are there any specifically you'd prefer to see first, or ones you're less interested in?
<issue_comment>username_1: Okay, I think we can begin with the code cleanup you've made, then the code refactoring (like Foreign, etc.), then continue with the features missing from the binding (SoundRecorder, etc.), and finally with the port to 2.3.
There are some things I'm really less interested in, but we can discuss them:
* Removing the libcsfml-system dependency: I think it's not a problem to depend on it, even for some features, and I'm not really okay with rewriting functionality in pure Rust. If we use the original lib we are sure users will get the expected behaviour, and we don't have to check at each update whether a change was made inside the original implementation.
* Same for VertexArray, same reason.
* Why would you remove the Events iterator type? It seems pretty useful, I think.
* I'm not sure we need methods for Key and Mouse; standalone functions seem sufficient.
<issue_comment>username_0: `system`: my main justifications are that aside from the one function acting as the `Clock` implementation, everything is trivial math and could benefit from optimization in the absence of an FFI call, and that on Windows it means one less .dll to deal with (admittedly it's a pretty small one).
`VertexArray`: the main argument is ergonomic. In C++ SFML, a `VertexArray` is a trivial wrapper around a `std::vector<Vertex>` and a `PrimitiveType` that exposes an annoyingly small interface instead of just allowing access to the `vector`. IMO, it's easier to work with a `VertexArray` when it's more directly usable like any old `Vec<Vertex>`, and a user of the library won't have to learn even more new API to make use of it. The same argument applies to `ConvexShape`.
The problem with the Events iterator type is more complex. As it stands (containing a `*mut sfWindow`), it's memory unsafe - the compiler will allow the Events to outlive the Window from which it was created, which means bad things if it's used after the Window is destroyed. If Events is instead changed to have a lifetime and refer to the Window from which it was created (making it memory-safe), it becomes less useful: it mutably borrows the Window, so code like this will fail to compile:
```rust
for event in window.events() {
match event {
Event::Closed => window.close(),
_ => {}
}
}
```
Without some other solution, the Events iterator is either unsafe or useless. The `poll_event` solution is less pretty, of course, but at least it works:
```rust
while let Some(event) = window.poll_event() {
match event {
Event::Closed => window.close(),
_ => {}
}
}
```
The Key/Mouse methods are mainly to prevent the need to separately import the free-standing functions, and for general ergonomics (`Key::X.is_pressed()` vs `keyboard::is_key_pressed(Key::X)`).
<issue_comment>username_1: Okay. I'm still not convinced about sfml-system. For sure, we have already rewritten the vector part; for the others, mainly the clock, I cannot test it on Windows, whereas I know that the SFML version works. Moreover, depending on this library doesn't seem to be a problem, as by default it is installed when you install SFML/CSFML.
For the VertexArray/ConvexShape I will look at your implementation before making up my mind.
For the Events iterator, I really like this, even if it's a bit unsafe; there is no real reason to destroy the window while you iterate the events, and we can check that the window is not null inside the iterator.
About the Key/Mouse methods, this usage does seem more ergonomic, for sure.
<issue_comment>username_0: I am willing to cede on sfml-system. As you say, it's a small and obtained-with-the-rest dependency, and it is a burden to make sure a reimplementation is correct.
As much as the Events iterator is nice, if it's unsafe it's fundamentally pointless. Rust as a language is all about maintaining memory safety - if the Events iterator breaks memory safety without being marked `unsafe` (and thus being quite unergonomic to use), it violates the expectation that the compiler has that `unsafe` blocks maintain Rust's invariants, and the expectation that the user has that the compiler will be able to catch all memory safety problems - even dumb ones, like using Events after the window is freed:
```rust
let events = Window::new(...).events();
let event = events.next();
```
Of course this is terrible code, but it will segfault if we let it, and the whole point of Rust is to catch this kind of thing at compile time.
<issue_comment>username_2: This pull request is too massive, and accumulated too much merge conflicts to be considered for merging.
However, there are a lot of great ideas here that we should consider individually. |
<issue_start><issue_comment>Title: Live reloading extension point for those who want JS hot loading today?
username_0: I've got a hot reloader for JS that uses this project at the core and I'm curious about how I can help provide an extension point of some kind. I realize not everyone has the ability to do hot reloading of JS and even if they could it's likely they all need something a little different to accommodate based on the frontend tech stack.
For example, in the gif below you will see I'm using ember and at this time (early days yet) I'm firing an event from an ember dev tool that will dispatch it back to my application to re-render the component. I couldn't possibly submit a generic pull request here that unlocks hot reloading for all but I could do everything I needed if a special hook was available.
One "creative" solution I came up with was to monkey patch the `reloadPage` function as it's called at the very end of the reload chain anyway.
```js
window.LiveReload.reloader.reloadPage = function() { //my hot reload JS code }
```
The one missing piece that would require a pull request/discussion/etc. is that I need the path when this is called, and today reloadPage doesn't take any arguments. That was just quick thinking on the subject, so it's worth mentioning that we could instead offer an official overridable hook that is called just before reloadPage and is provided `path`, so users could opt in to a custom hot reloading method if they wanted.
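As a sketch of that opt-in hook idea (the reloader object below is a stand-in, not the real `window.LiveReload.reloader`; the real `reloadPage` takes no arguments today, so the `path` parameter is the proposed addition):

```javascript
// Minimal stand-in for window.LiveReload.reloader.
const reloader = {
  reloadPage() { return 'full-reload'; },
};

// Proposed extension point: give a user-supplied hook first crack at the
// changed path; only fall back to a full page reload if it declines.
function installHotReloadHook(target, hook) {
  const fullReload = target.reloadPage.bind(target);
  target.reloadPage = function (path) {
    if (hook(path)) return 'hot-reload';
    return fullReload();
  };
}

// e.g. only JS changes are hot-swapped; everything else reloads the page.
installHotReloadHook(reloader, (path) => typeof path === 'string' && path.endsWith('.js'));

console.log(reloader.reloadPage('components/foo.js')); // hot-reload
console.log(reloader.reloadPage('styles/app.css'));    // full-reload
```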
I personally like the ability to have an escape valve like this so I can still leverage 99% of the great work in livereload-js while still supporting hot reload for my ember apps w/ ember-cli.
Thoughts? Questions? Next Steps?

https://github.com/username_0/hot-reload-example-app-spike
*keep in mind the above project is a spike to prove out hot reloading with ember-cli |
<issue_start><issue_comment>Title: Websocket for streams?
username_0: Are there any plans to develop a websocket facade for streams, similar to the API gateway for RPC?
So it could allow to expose via websocket
`rpc PingPong(stream Ping) returns (stream Pong) {}`
I would be happy to contribute, if we could agree on requirements...
Or do you have better ideas on how to stream data bidirectionally between the browser and go-micro services, brokers, and subscribers?
<issue_comment>username_1: There's no current plans to add websockets or streaming support to the API. The micro web proxy provides support for web sockets which can be useful for anyone who needs that support. On the API side I opted simply for standard http requests which can easily be converted to an RPC format or a reverse proxy is built in otherwise for those who want to deal with the http request. While stream support would be useful, it's not a simple feature and I haven't yet thought of an elegant way for it to be used.<issue_closed> |
<issue_start><issue_comment>Title: Empty required_css_class not handles well
username_0: Here is what I get for default login form:
```html
<label class="control-label {{ no such element: django.contrib.auth.forms.AuthenticationForm object['required_css_class'] }}" for="id_username">Username</label>
```
<issue_comment>username_1: Yeah, it's a bug that could do with fixing, but I think it will only appear with `DEBUG = True`; it should be blank in production.
<issue_comment>username_1: Fixed with [v4.1.0](https://github.com/username_1/django-jinja-bootstrap-form/releases/tag/v4.1.0)<issue_closed> |
<issue_start><issue_comment>Title: Configure 'Options' when using 'use_env_variable'
username_0: Hi
Can I configure the sequelize 'logging' option when using 'use_env_variable'?
something like this:
```
// file config/config.json
{
  "development": {
    "username": "username",
    "password": "password",
    "database": "db_name",
    "host": "127.0.0.1",
    "dialect": "mysql",
    "logging": false
  }
}
```
But using 'use_env_variable', it would look like:
```
// file config/config.json
{
  "development": {
    "use_env_variable": "DATABASE_URL",
    "logging": false
  }
}
```
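For what it's worth, recent sequelize-cli templates generate a `models/index.js` that passes the whole config object as the options argument when `use_env_variable` is set, so extra keys like `logging` do come through; if your generated file only passes the URL, you can update it the same way. A sketch of that selection logic (the helper name is mine; the real code calls `new Sequelize(...)` directly with these arguments):

```javascript
// Decide the constructor arguments the way models/index.js does:
// new Sequelize(process.env[config.use_env_variable], config) when the
// env-variable form is used, otherwise the explicit credential form.
function sequelizeArgs(config, env) {
  if (config.use_env_variable) {
    return [env[config.use_env_variable], config];
  }
  return [config.database, config.username, config.password, config];
}

const config = { use_env_variable: 'DATABASE_URL', logging: false };
const env = { DATABASE_URL: 'mysql://root:xxxxxx@127.0.0.1/db_name' };

const [url, options] = sequelizeArgs(config, env);
console.log(url);             // mysql://root:xxxxxx@127.0.0.1/db_name
console.log(options.logging); // false
```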
thank you<issue_closed> |
<issue_start><issue_comment>Title: add a 'glide.rm' command
username_0: Similar to how `glide get my/package/name` adds the package to the `glide.yaml`, computes dependencies, and adds everything to the `glide.lock`, it would be helpful to have a `glide.rm` do the opposite.
Currently, I'm getting around this issue by manually removing the dependency I don't need anymore and then running `glide up my/removed/dependency`
<issue_comment>username_1: Based on my discussions with @username_0, this would be distinct from `glide up` in that it would not update any of the pinned versions for existing non-removed dependencies.
<issue_comment>username_0: ah, right. I forgot that, but it's a very important clarification. :+1:
<issue_comment>username_1: I think this is all done now.<issue_closed>
<issue_comment>username_0: thanks @username_1 |
<issue_start><issue_comment>Title: Handling exception when SSH is not enabled on Cisco devices.
username_0: My code:
```python
try:
    net_connect = ConnectHandler(**cisco_1)
except SSHException:
    print('SSH is not enabled for this device.')
    sys.exit()
```
When a Cisco device does not have SSH enabled, an `SSHException` will be thrown by `Paramiko`, as shown in the stack trace below.
```
Exception: Error reading SSH protocol banner
Traceback (most recent call last):
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1867, in _check_banner
    buf = self.packetizer.readline(timeout)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\packet.py", line 327, in readline
    buf += self._read_timeout(timeout)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\packet.py", line 483, in _read_timeout
    raise EOFError()
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1723, in run
    self._check_banner()
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1871, in _check_banner
    raise SSHException('Error reading SSH protocol banner' + str(e))
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner

Traceback (most recent call last):
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1867, in _check_banner
    buf = self.packetizer.readline(timeout)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\packet.py", line 327, in readline
    buf += self._read_timeout(timeout)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\packet.py", line 483, in _read_timeout
    raise EOFError()
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\UserAdm\PycharmProjects\CiscoPL2\dirconnect.py", line 18, in <module>
    net_connect = ConnectHandler(**cisco_1)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\netmiko\ssh_dispatcher.py", line 88, in ConnectHandler
    return ConnectionClass(*args, **kwargs)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\netmiko\base_connection.py", line 68, in __init__
    self.establish_connection(verbose=verbose, use_keys=use_keys, key_file=key_file)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\netmiko\base_connection.py", line 165, in establish_connection
    self.remote_conn_pre.connect(**ssh_connect_params)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\client.py", line 338, in connect
    t.start_client()
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 493, in start_client
    raise e
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1723, in run
    self._check_banner()
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1871, in _check_banner
    raise SSHException('Error reading SSH protocol banner' + str(e))
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner
```
I tried catching the `SSHException`, but it will not catch all the exceptions, which I think is not the expected behavior ? Stacktrace below.
```
Exception: Error reading SSH protocol banner
Traceback (most recent call last):
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1867, in _check_banner
    buf = self.packetizer.readline(timeout)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\packet.py", line 327, in readline
    buf += self._read_timeout(timeout)
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\packet.py", line 483, in _read_timeout
    raise EOFError()
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1723, in run
    self._check_banner()
  File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1871, in _check_banner
    raise SSHException('Error reading SSH protocol banner' + str(e))
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner
SSH is not enabled for this device.
```
<issue_comment>username_0: @username_1 , I tried catching the `EOFError` exception as well, but the end result is the same as if I only caught the `SSHException` exception.
```
try:
    net_connect = ConnectHandler(**cisco_1)
except (EOFEerror, SSHException):
    print('SSH is not enabled for this device.')
    sys.exit()
```
<issue_comment>username_1: You misspelled EOFError in your except statement.
```
except (EOFEerror, SSHException):  # One too many 'Ee'
```
Kirk
<issue_comment>username_0: Oops that was a typo on my part, my actual code has the correct spelling.
```
try:
    net_connect = ConnectHandler(**cisco_1)
except (EOFError, SSHException):
    print('SSH is not enabled for this device.')
    sys.exit()
```
<issue_comment>username_1: @username_0 Can you show me the stack trace you get when using the above code?
<issue_comment>username_0: Exception: Error reading SSH protocol banner
Traceback (most recent call last):
File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1867, in _check_banner
buf = self.packetizer.readline(timeout)
File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\packet.py", line 327, in readline
buf += self._read_timeout(timeout)
File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\packet.py", line 483, in _read_timeout
raise EOFError()
EOFError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1723, in run
self._check_banner()
File "C:\Users\UserAdm\AppData\Local\Programs\Python\Python35\lib\site-packages\paramiko\transport.py", line 1871, in _check_banner
raise SSHException('Error reading SSH protocol banner' + str(e))
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner
SSH is not enabled for this device.
<issue_comment>username_0: @username_1 , Apologies, are there any updates on this? Cheers.
<issue_comment>username_1: @username_0 Here is what I see to SSH to an IP address that is not listening on SSH.
```
Password:
Traceback (most recent call last):
File "./test_cisco.py", line 22, in <module>
net_connect = ConnectHandler(**device)
File "build/bdist.linux-i686/egg/netmiko/ssh_dispatcher.py", line 84, in ConnectHandler
File "build/bdist.linux-i686/egg/netmiko/base_connection.py", line 68, in __init__
File "build/bdist.linux-i686/egg/netmiko/base_connection.py", line 181, in establish_connection
netmiko.ssh_exception.NetMikoTimeoutException: Connection to device timed-out: cisco_ios 50.76.53.27:22
```
Here is how I handled the exception:
```
try:
net_connect = ConnectHandler(**device)
except NetMikoTimeoutException:
sys.exit("Handled timeout exception")
```
What happens when you:
```
telnet router_ip 22
```
What is the output you get back?
This is probably a bug in your code (i.e. not handling the exception correctly).
You will also need to post your full code stripped of any password/IP.
Kirk
<issue_comment>username_0: @username_1
This is my exact code which I use for testing:
```
#! python3
from netmiko import ConnectHandler
from netmiko.ssh_exception import NetMikoTimeoutException

def sshconn(user_ip, user_pwd, en_pwd, usrn):
    cisco_1 = {'device_type': 'cisco_ios', 'ip': user_ip, 'password': user_pwd, 'secret': en_pwd, 'username': usrn}
    net_connect = ConnectHandler(**cisco_1)

try:
    sshconn('192.168.1.1','cisco2','cisco2','Admin')
except NetMikoTimeoutException:
    print('SSH not enabled.')
```
I did actually get the expected behavior as described (handled successfully by NetmikoTimeoutException), but it is not consistent. Out of the 10 times I tried running the same code, I got the expected behavior 4/10 times. The other 6 times resulted in the error I posted in https://github.com/username_1/netmiko/issues/228#issuecomment-228330622
<issue_comment>username_1: @username_0 This maybe should be in Netmiko, but you should be able to work around it pretty easily.
Something like the following:
```
from paramiko.ssh_exception import SSHException
try:
net_connect = ConnectHandler(**cisco_1)
except (EOFError, SSHException, NetMikoTimeoutException):
print('SSH is not enabled for this device.')
sys.exit()
```
If your code doesn't actually catch the exception, then there is very very likely something wrong with your exception handling code.
You could also do this, but it is not a very good practice in general.
```
try:
net_connect = ConnectHandler(**cisco_1)
except Exception:
print('SSH is not enabled for this device.')
sys.exit()
```
Kirk
<issue_comment>username_0: @username_1 , I tried `except (EOFError, SSHException, NetMikoTimeoutException):` and `except:`, but both strangely result in the same set of error messages: https://github.com/username_1/netmiko/issues/228#issuecomment-228330622.
Could it be due to the Cisco device configuration? I simply did a `transport input telnet` on `vty 0 15` to simulate a restriction of SSH connections.
<issue_comment>username_1: @username_0 No it is not a Cisco issue (at least as far as why you are unable to handle the exception). There could be a Cisco issue on why you can't login.
Can you post your entire code when using the bare 'except:' statement? You can remove passwords, but don't change any line numbers.
Also post the current exception you get while running that code.
<issue_comment>username_0: @username_1
[This](http://pastebin.com/XLbqqSw2) is the list of exceptions I get under different circumstances as described in the comments in the code. The python script was executed in command prompt.
My entire code for the above test:
```
#! python3
import sys
from netmiko import ConnectHandler
from netmiko.ssh_exception import NetMikoTimeoutException
from paramiko.ssh_exception import SSHException

try:
    cisco_1 = {'device_type': 'cisco_ios', 'ip': '192.168.1.1', 'password': 'ciscox', 'secret': 'ciscox', 'username': 'Admin'}
    net_connect = ConnectHandler(**cisco_1)
except:
    print('ssh not enabled.')

# def sshconnection(user_ip, user_pwd, en_pwd, usrn):
#     cisco_1 = {'device_type': 'cisco_ios', 'ip': user_ip, 'password': user_pwd, 'secret': en_pwd, 'username': usrn}
#
#     net_connect = ConnectHandler(**cisco_1)
# try:
#     sshconnection('192.168.1.1', 'ciscox', 'ciscox', 'Admin')
# except (EOFError, SSHException, NetMikoTimeoutException):
# except Exception:
#     print('SSH not enabled.')
#     sys.exit()
```
Really appreciate the replies so far, thanks for the effort :)
<issue_comment>username_1: '2.0.1'
```
Kirk
<issue_comment>username_0: @username_1 The referenced issue is strikingly similar to what I have faced so far.
What I am trying to accomplish is to gracefully handle the scenario when SSH isn't enabled for the Cisco device. However, it seems like from what was said in the referenced issue, there doesn't seem to be a good way to gracefully handle the scenario without printing any tracebacks.
I guess I will try to work around this by suppressing the output stream and see if it works. At least the tracebacks will be hidden from the user.
I am using Paramiko 2.0.0. I will update it and report my results :)
<issue_comment>username_1: @username_0 Yes, I think we can handle the exception, but perhaps not suppress the exception message. I assume the exception message is probably on standard error and not on standard output. Have you tried just redirecting standard error to a file?
<issue_comment>username_2: Could it be a logger output? What would be the output if you add the following before your code:
```
import logging
logging.basicConfig(format='%(name)s %(filename)s %(lineno)s %(levelname)s %(message)s')
```
<issue_comment>username_1: I don't think there is any action to take here on Netmiko...so I am going to close this.<issue_closed>
<issue_comment>username_3: @username_0
If you want to eliminate the traceback output, just set **tracebacklimit** to 0:
```
import sys
sys.tracebacklimit = 0
```
Normal Output
---------------------------------------------------------------------------------------------------------
```
Connecting to device" 192.168.200.201
Exception: Error reading SSH protocol banner
Traceback (most recent call last):
  File "/home/dev/devnet/lib/python3.8/site-packages/paramiko/transport.py", line 2211, in _check_banner
    buf = self.packetizer.readline(timeout)
  File "/home/dev/devnet/lib/python3.8/site-packages/paramiko/packet.py", line 380, in readline
    buf += self._read_timeout(timeout)
  File "/home/dev/devnet/lib/python3.8/site-packages/paramiko/packet.py", line 609, in _read_timeout
    raise EOFError()
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dev/devnet/lib/python3.8/site-packages/paramiko/transport.py", line 2039, in run
    self._check_banner()
  File "/home/dev/devnet/lib/python3.8/site-packages/paramiko/transport.py", line 2215, in _check_banner
    raise SSHException(
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner
SSH Issue. Are you sure SSH is enabled? 192.168.200.201
Connecting to device" 192.168.200.202
configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vios2(config)#ntp server 10.10.10.10
vios2(config)#ntp server 10.10.100.110
vios2(config)#end
vios2#
```
**after the tracebacklimit to 0**
-----------------------------------------------------------------------
```
Connecting to device" 192.168.200.201
Exception: Error reading SSH protocol banner
Traceback (most recent call last):
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner
SSH Issue. Are you sure SSH is enabled? 192.168.200.201
Connecting to device" 192.168.200.202
configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
vios2(config)#ntp server 10.10.10.10
vios2(config)#ntp server 10.10.100.110
vios2(config)#end
vios2#
```
<issue_start><issue_comment>Title: page unload should checkpoint
username_0: this is a regression. we used to checkpoint on page unload (eg close the tab or reload).
<issue_comment>username_0: reverted with https://github.com/twosigma/beaker-notebook/commit/b93e7a61f257b91c5c6dcf36c11041870f9bee12
but strangely, fixing the typo `window.unload` to `window.onunload` does not fix it.
<issue_comment>username_0: http://stackoverflow.com/questions/4945932/window-onbeforeunload-ajax-request-problem-with-chrome
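For context on that link: Chrome may drop asynchronous requests fired from unload handlers, so checkpoint-on-unload code usually goes through `navigator.sendBeacon` with a synchronous-XHR fallback. A hedged sketch — the sender functions are injected so the logic can run outside a browser, and the `/checkpoint` endpoint name is made up:

```javascript
// Sketch only: "beacon" and "syncPost" stand in for navigator.sendBeacon
// and a synchronous XMLHttpRequest POST, injected so this is testable
// outside a browser.
function checkpointOnUnload(beacon, syncPost) {
  const payload = JSON.stringify({ saved: true });
  if (beacon) {
    return beacon("/checkpoint", payload); // queued reliably even during unload
  }
  return syncPost("/checkpoint", payload); // blocking fallback for older browsers
}

// In a real page this would be wired up roughly as:
//   window.onbeforeunload = () => checkpointOnUnload(
//     navigator.sendBeacon && navigator.sendBeacon.bind(navigator),
//     syncXhrPost);
```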
<issue_comment>username_0: both reload and close work back at 2b5df57 on July 24<issue_closed> |
<issue_start><issue_comment>Title: Private mode in safari disables write to localStorage
username_0: If the user is in private browsing mode in safari, you can still write to the localStorage, but you doesnt get the data back.
The supported-test should not only check if you can write to localStorage, but if you get the written result back.
<issue_comment>username_0: Maybe we should try to fall back to sessionStorage if localStorage is not available, and then fall back to a simple in-memory object storage.<issue_closed>
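A sketch of the write-then-read-back detection described above, combined with the fallback chain from the follow-up comment. It assumes callers only need a minimal `setItem`/`getItem`/`removeItem` interface:

```javascript
// Detect a usable storage backend by actually writing a value and
// reading it back; Safari private mode either accepts the write and
// returns nothing, or throws a quota error — both paths are covered.
function isUsable(storage) {
  try {
    const key = "__storage_test__";
    storage.setItem(key, "ok");
    const usable = storage.getItem(key) === "ok";
    storage.removeItem(key);
    return usable;
  } catch (e) {
    return false; // quota errors / disabled storage land here
  }
}

function pickStorage() {
  if (typeof localStorage !== "undefined" && isUsable(localStorage)) return localStorage;
  if (typeof sessionStorage !== "undefined" && isUsable(sessionStorage)) return sessionStorage;
  // Last resort: in-memory object exposing the same minimal interface.
  const mem = {};
  return {
    setItem: (k, v) => { mem[k] = String(v); },
    getItem: (k) => (k in mem ? mem[k] : null),
    removeItem: (k) => { delete mem[k]; }
  };
}
```

Whichever backend wins, callers just use the returned object, so the fallback is transparent.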
<issue_start><issue_comment>Title: Handle baseline options + filtering using forms
username_0: Based off of the work in #137. More prerequisite work for adding chart filters to baseline terms.
Code diff until #137 is merged: https://github.com/rapidpro/tracpro/compare/reusable-filters...baseline-filter-form
<issue_comment>username_0: This could use some more work; closing for now. |
<issue_start><issue_comment>Title: fix(compiler): avoid evaluating arguments to unknown decorators
username_0: Fixes #13605
**What kind of change does this PR introduce?** (check one with "x")
```
[x] Bugfix
```
**What is the current behavior?** (You can also link to an open issue here)
The `ngc` compiler will emit an error if any decorator in a component does not conform to the AOT rules.
**What is the new behavior?**
The `ngc` compiler will only emit an error if a known Angular decorator does not conform to the AOT rules.
**Does this PR introduce a breaking change?** (check one with "x")
```
[ ] Yes
[x] No
```
<issue_comment>username_1: Landed as https://github.com/angular/angular/commit/d061adc02db9057d67de2f74b6e880005582e674 |
<issue_start><issue_comment>Title: This webpage is not available ERR_CONNECTION_TIMED_OUT
username_0: So I've been having this issue for a while now. I don't know how but sometimes after a while it would fix itself. At first I thought it was my host but my host is fine. Basically I called it rpgcms.dev and I'm pointing to the default IP.
If I ssh into the box and curl localhost, it works. If I curl from outside the box, I get a timed out error.
I'm on Windows running latest scotch box. I searched pretty much everywhere and everyone suggested the following:
- vagrant provision
- vagrant reload
- double check the host file
- empty your browser cache and try again
- destroy and vagrant up again
I tried everything. I have no clue what's going on. It was working fine but I restarted my PC and now just can't connect.
<issue_comment>username_1: Hey Patrick. What you mean cURL outside the box?
<issue_comment>username_2: I have the exact problem, no matter what I do, even changing the IP in the Vagrantfile, I keep getting "The connection has timed out The server at 192.168.33.10 is taking too long to respond."
<issue_comment>username_1: Hey guys - this makes total sense if we think this through. I understand what you're asking now.
On your **laptop**, you made a change to your host file. This change is saying: "Yo Laptop, all network requests to `rpgcms.dev` need to go to IP address `192.168.33.10`. So from your **laptop** if you visit the URL `rpgcms.dev` it will go to `192.168.33.10`.
Okay, cake.
But, if you're **from the server** and visit `rpgcms.dev` it's going to respond with "Yo WTF is that it's not a valid URL. The DNS lookup for this is failing". This is because the **server** / Virtual Machine (your Scotch Box) doesn't inherit the custom host files from your **laptop**.
For example, if you did cURL with the IP `192.168.33.10`, it would work.
If you get where I'm going. The easiest fix is to update your VM's host file as well:
```
vagrant ssh
vim /etc/hosts
# Add this to the bottom
192.168.33.10 rpgcms.dev
```
Then rerun the command and it will work.<issue_closed>
<issue_comment>username_2: I didn't change the hosts file on either my local machine or the virtual machine, and I can't access the IP directly at `192.168.33.10`.
I tried to add `192.168.33.10 whatever.dev` to the bottom for both hosts files and still the same problem.
<issue_comment>username_1: Okay - you have a different issue than @username_0.
Open VirtualBox. Is a VM running? Do you have more than one?
<issue_comment>username_2: @username_1 Yes the VM is running, and it's the only one.
@username_0 I added it to Vagrantfile and provisioned it, but still.
```
==> default: Forwarding ports...
default: 80 => 8080 (adapter 1)
default: 22 => 2222 (adapter 1)
```
<issue_comment>username_2: [The Problem](http://i.imgur.com/S73tvOS.png)
<issue_comment>username_3: I had this issue as well with Windows 10, Virtualbox 5.12, and Vagrant 1.8.1, very frustrating. It's not a scotchbox issue though, it's Virtualbox. I downgraded to Virtualbox 5.10 and everything works fine now with the exact same config files. I upgraded to Virtualbox 5.12, and networking was broken again.
The point being - if you can't ping the box IP from the command line, it's probably a Virtualbox networking issue. The issue was further confused by the fact that vagrant ssh worked fine, and `ifconfig` showed the proper IP address. If that is the case for anyone visiting this thread - try downgrading to a lower Virtualbox version temporarily to see if that easily fixes your issue.
<issue_comment>username_1: Okay - can you downgrade to Vagrant 1.7.x? I haven't had a chance to investigate, but I know it works with VirtualBox 5.0.
<issue_comment>username_3: Hi @username_1, now I have no idea what was causing the previous inaccessible problem. I downgraded to Vagrant 1.7.4 with VirtualBox 5.12, and everything worked fine. I was able to ping, and access via web browser, the local scotchbox.
Then I upgraded to Vagrant 1.8.1 with VirtualBox 5.12, and now the scotchbox is perfectly accessible, even though this exact configuration didn't work before.
<issue_comment>username_4: I wonder if this is the same thing that is happening at issue #169 ?
<issue_comment>username_5: @username_0 thank you so much for the `config.vm.network "forwarded_port", guest: 80, host: 8080` suggestion! That, in addition to including `192.168.33.10 lazy.dev` in my `hosts` file on my Windows host finally fixed it for me (after many hours struggling).
<issue_comment>username_6: @username_5 @whatnickcodes @username_0
OK im having the same problem, and unfortunately I'm pretty new to VM/ScotchBox stuff, so I apologize in advance for anything stupid I'm doing. Right now when I try and access the project via: http://192.168.33.10/ I get the timed out error like everyone else.
This is what my windows host file looks like. I just followed the instructions found here:
https://support.rackspace.com/how-to/modify-your-hosts-file/
So I hope I did it correctly. What you see in the image I just copied from an entry already in there and added the IP: http://192.168.33.10/

This is what my Vagrantfile currently looks like, after adding in:
`config.vm.network "forwarded_port", guest: 80, host: 8080`

Does anyone have any idea what might still be wrong? Could this be a problem with my virtual box?
Thanks in advance for any help you can offer, I greatly appreciate it!
<issue_comment>username_5: @username_6 also new to VM/Scotch Box myself, I changed my `synced_folder` line to read:
`config.vm.synced_folder "./public", "/var/www", :mount_options => ["dmode=777", "fmode=666"]`
I think that was a fix for a different issue (where the `Index Of /` page was showing up instead of my wordpress content). Also did you try `vagrant halt` and then `vagrant up` again after you made that addition to your Vagrantfile?
Other than that not sure I can be of much help.
<issue_comment>username_7: Similar issue for me here. same "connection time out". tried @username_5 solution but doesn't seem to do much. Running on el captan. I hope there will be a solution soon
<issue_comment>username_8: I'm running into this same issue on Ubuntu 17.04. Vagrant Up throws no errors but I can never access 192.168.33.10. Just loads for forever in the browser.
<issue_comment>username_9: This is my novice solution. If there is a better way, I'm all ears :-)
I was having the same issue and what I had to do was modify my Vagrantfile:
following this answer: [http://stackoverflow.com/a/43213075/3187749](http://stackoverflow.com/a/43213075/3187749)
```
config.vm.network :forwarded_port, guest: 80, host: 4567, host_ip: "127.0.0.1", auto_correct: true
```
then i just ran `vagrant reload` and everything seemed to be good for me on `http://127.0.0.1:4567/`
<issue_comment>username_10: @username_9 just used your solution.
That did it for me, thank you! |
<issue_start><issue_comment>Title: Improve Dockerfile.run image
username_0: This dockerfile generates a more lightweight image that works with the
current official dynamically generated binaries
<issue_comment>username_0: @aanand the version is hardcoded in the URL; you'd probably want to change it so the binary is taken from the Python builds. I'm not familiar with the compose build process, but I guess it should be easy to change.
<issue_comment>username_1: @username_2: What the status on this?
<issue_comment>username_2: The PR needs to be updated to copy the local file, not download a file that wont exist until after the release is published.
<issue_comment>username_0: @username_2 can you please let me know exactly what needs to be changed so I can update the PR? I'm not familiar with the compose build process, so I don't know exactly what needs to be changed.
<issue_comment>username_2: I did in: https://github.com/docker/compose/pull/3856#discussion_r79014587
* in `script/build/image` copy the binary out of `dist/` (which is ignored by `.dockerignore`) and move it to some other path, so it can be added with `COPY`
* remove the `curl` replace it with a `COPY`
Possibly also cleanup the temp file that was copied in the first step
<issue_comment>username_0: @username_2 thx for the clarification. I don't see that .dockerignore is actually ignoring the `dist` directory. Is there something I'm not seeing?
https://github.com/docker/compose/blob/master/.dockerignore
<issue_comment>username_0: @username_2 taking a closer look and AFAICS seems like `dist/` doesn't have the compose binary. If I run the `image` script, it just bundles the compose source in a tar.gz and just calls `docker build` with the Dockerfile.run dockerfile.
For what I understand, the binary needs to the built using the standard Dockerfile and then just send it to the Dockerfile.run target so it just can be copied there. This is what I meant when I said before that I don't understand compose build process. The previous Dockerfile.run actually built the binary using python, but in this case to keep the image small we just need to copy it inside there.
As there's no Makefile or anything, it's difficult for me to understand the actual build process.
<issue_comment>username_2: Oh apparently I removed it when I set this up: https://github.com/docker/compose/commit/39cea970b8d161ce6986d5ad2f14b63cb3ff3094#diff-f7c5b4068637e2def526f9bbc7200c4e
The binary is built by `script/build/linux` you should be able to call that from `script/build/image`
<issue_comment>username_3: Please sign your commits following these rules:
https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work
The easiest way to do this is to amend the last commit:
~~~console
$ git clone -b "master" [email protected]:username_0/compose.git somewhere
$ cd somewhere
$ git rebase -i HEAD~2
editor opens
change each 'pick' to 'edit'
save the file and quit
$ git commit --amend -s --no-edit
$ git rebase --continue # and repeat the amend for each commit
$ git push -f
~~~
Amending updates the existing PR. You **DO NOT** need to open a new one.
<issue_comment>username_0: @username_2 PR updated, I think we have a proper build pipeline now.
<issue_comment>username_0: @username_2 applied suggestions and squashed commits.
<issue_comment>username_0: @username_2 done. Hopefully we're good to go now!
<issue_comment>username_0: @username_2 @username_4 ping
<issue_comment>username_4: @username_0 This is on my radar! Thank you for your patience.
<issue_comment>username_4: Rebased and merged in #4571
Thank you!
<issue_comment>username_0: @username_4 :tada: WOHO! |
<issue_start><issue_comment>Title: NoUselessReturnFixer - Do not remove return if last statement in short if statement
username_0: Fixes https://github.com/FriendsOfPHP/PHP-CS-Fixer/issues/1830
bug @ 1.12
<issue_comment>username_1: :+1:
<issue_comment>username_1: Is this good to merge?
<issue_comment>username_0: (FYI I have/see no todo's on this one)
<issue_comment>username_1: :+1:
<issue_comment>username_0: AppVeyor is failing for no reason
<issue_comment>username_2: yep, good to merge
<issue_comment>username_2: Thank you @username_0.
<issue_comment>username_3: @username_2 Any chance to have a new 2.0.0 alpha tag?
This fixer is very nice but can't be used without any risk with the actual alpha.
Thanks.
<issue_comment>username_2: this is 1.12, not 2.x
<issue_comment>username_2: merging 1.x into 2.x is not a backport.
<issue_comment>username_2: this is 1.12@dev, please wait for its release first ;)
<issue_start><issue_comment>Title: tab options plugin broken
username_0: http://5digits.org/pentadactyl/plugins#tab-options-plugin
I really appreciated this- it seems very logical to me that the previously active tab is made active again upon closing the current tab. Particularly with buffers. The default firefox behavior, select the tab to the side, is insane. At the time of writing, the tab options plugin doesn't do its job, and prevents firefox from being able to close the currently open tab.
Please fix, or integrate into pentadactyl.
<issue_comment>username_1: +1. This seems like a reasonable feature to have in core Pentadactyl.
<issue_comment>username_2: FWIW, if I change this line:
```
for (let val in values(options["tabclose"])) {
```
to:
```
for (let val of options["tabclose"]) {
```
then the plugin works just fine for me (though I may not be using all of its functionality). |
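For anyone hitting the same error: the change works because `for...of` iterates an array's values directly (what the legacy `values()` iterator helper used to provide), whereas `for...in` yields string indices. A small illustration with made-up option values:

```javascript
const tabcloseValues = ["lastaccessed", "left"]; // hypothetical 'tabclose' values

const viaOf = [];
for (const val of tabcloseValues) viaOf.push(val); // the values themselves

const viaIn = [];
for (const key in tabcloseValues) viaIn.push(key); // string indices "0", "1"

console.log(viaOf.join(",")); // lastaccessed,left
console.log(viaIn.join(","));  // 0,1
```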
<issue_start><issue_comment>Title: Improve performance using tricks found in Leaflet
username_0: Openlayers may be able to benefit from a number of the rendering optimizations in Leaflet. See if anyone in OpenLayers has looked into these before taking this task on.
<issue_comment>username_1: Is it ok to close this issue?
<issue_comment>username_0: I believe @jpfiset pilfered what he could from Leaflet a while back.<issue_closed> |
<issue_start><issue_comment>Title: Make sure you can test view scoped beans
username_0: Add test cases to spring-vaadin-test that use view scoped beans.
<issue_comment>username_0: This will require some changes to the code, since the ViewScope currently relies on UI.getCurrent() returning a non-null value. Currently, UI.getCurrent() returns null when running tests.
<issue_comment>username_0: Either change the ViewScope so that it also works with a pluggable strategy pattern, or change the test classes so that UI.getCurrent() is populated properly.<issue_closed> |
<issue_start><issue_comment>Title: Uncaught TypeError: (intermediate value)(intermediate value)(intermediate value).elementFromPoint is not a function
username_0: [Enter steps to reproduce below:]
1. ...
2. ...
**Atom Version**: 1.13.0
**Electron Version**: 1.3.13
**System**: Mac OS X 10.12
**Thrown From**: [activate-power-mode](https://github.com/JoelBesada/activate-power-mode) package, v1.2.0
### Stack Trace
Uncaught TypeError: (intermediate value)(intermediate value)(intermediate value).elementFromPoint is not a function
```
At /Applications/Atom.app/Contents/Resources/app.asar/node_modules/text-buffer/lib/text-buffer.js:872
TypeError: (intermediate value)(intermediate value)(intermediate value).elementFromPoint is not a function
at Object.getColorAtPosition (/Users/fuxin/.atom/packages/activate-power-mode/lib/power-canvas.coffee:51:49)
at Object.spawnParticles (/Users/fuxin/.atom/packages/activate-power-mode/lib/power-canvas.coffee:43:14)
at invokeFunc (/Users/fuxin/.atom/packages/activate-power-mode/node_modules/lodash.throttle/index.js:149:19)
at leadingEdge (/Users/fuxin/.atom/packages/activate-power-mode/node_modules/lodash.throttle/index.js:159:22)
at Cursor.debounced [as throttleSpawnParticles] (/Users/fuxin/.atom/packages/activate-power-mode/node_modules/lodash.throttle/index.js:224:16)
at Object.onChange (/Users/fuxin/.atom/packages/activate-power-mode/lib/power-editor.coffee:57:14)
at Function.module.exports.Emitter.simpleDispatch (/Applications/Atom.app/Contents/Resources/app.asar/node_modules/event-kit/lib/emitter.js:25:14)
at Emitter.module.exports.Emitter.emit (/Applications/Atom.app/Contents/Resources/app.asar/node_modules/event-kit/lib/emitter.js:129:28)
at TextBuffer.module.exports.TextBuffer.emitDidChangeEvent (/Applications/Atom.app/Contents/Resources/app.asar/node_modules/text-buffer/lib/text-buffer.js:728:20)
at TextBuffer.module.exports.TextBuffer.applyChange (/Applications/Atom.app/Contents/Resources/app.asar/node_modules/text-buffer/lib/text-buffer.js:713:19)
at TextBuffer.module.exports.TextBuffer.setTextInRange (/Applications/Atom.app/Contents/Resources/app.asar/node_modules/text-buffer/lib/text-buffer.js:622:12)
at Selection.module.exports.Selection.insertText (/Applications/Atom.app/Contents/Resources/app.asar/src/selection.js:480:43)
at /Applications/Atom.app/Contents/Resources/app.asar/src/text-editor.js:1110:29
at /Applications/Atom.app/Contents/Resources/app.asar/src/text-editor.js:1149:28
at TextBuffer.module.exports.TextBuffer.transact (/Applications/Atom.app/Contents/Resources/app.asar/node_modules/text-buffer/lib/text-buffer.js:867:18)
at TextEditor.module.exports.TextEditor.transact (/Applications/Atom.app/Contents/Resources/app.asar/src/text-editor.js:1566:26)
at /Applications/Atom.app/Contents/Resources/app.asar/src/text-editor.js:1143:24
at TextEditor.module.exports.TextEditor.mergeSelections (/Applications/Atom.app/Contents/Resources/app.asar/src/text-editor.js:2540:43)
at TextEditor.module.exports.TextEditor.mergeIntersectingSelections (/Applications/Atom.app/Contents/Resources/app.asar/src/text-editor.js:2506:35)
at TextEditor.module.exports.TextEditor.mutateSelectedText (/Applications/Atom.app/Contents/Resources/app.asar/src/text-editor.js:1141:19)
at TextEditor.module.exports.TextEditor.insertText (/Applications/Atom.app/Contents/Resources/app.asar/src/text-editor.js:1107:19)
at TextEditor.object.(anonymous function) (/Users/fuxin/.atom/packages/language-babel/lib/insert-nl-jsx.coffee:31:18)
at TextEditor.object.(anonymous function) [as insertText] (/Applications/Atom.app/Contents/Resources/app.asar/node_modules/underscore-plus/lib/underscore-plus.js:77:27)
at TextEditorComponent.module.exports.TextEditorComponent.onTextInput (/Applications/Atom.app/Contents/Resources/app.asar/src/text-editor-component.js:478:26)
at HTMLDivElement.<anonymous> (/Applications/Atom.app/Contents/Resources/app.asar/src/text-editor-component.js:3:59)
```
### Commands
```
-7:48.1.0 activate-power-mode:toggle (atom-notification.fatal.icon.icon-bug.native-key-bindings.has-detail.has-close.has-stack)
2x -7:45.5.0 core:backspace (input.hidden-input)
-7:35.3.0 core:copy (atom-notification.fatal.icon.icon-bug.native-key-bindings.has-detail.has-close.has-stack)
2x -5:49.5.0 core:paste (atom-notification.fatal.icon.icon-bug.native-key-bindings.has-detail.has-close.has-stack)
-2:23.9.0 core:copy (atom-notification.fatal.icon.icon-bug.native-key-bindings.has-detail.has-close.has-stack)
-1:38.5.0 settings-view:open (atom-notification.fatal.icon.icon-bug.native-key-bindings.has-detail.has-close.has-stack)
-1:35.2.0 core:cancel (div.package-detail.panels-item)
-1:10.8.0 core:close (input#activate-power-mode.screenShake.enabled.)
-0:54.5.0 activate-power-mode:toggle (input.hidden-input)
-0:47.3.0 editor:consolidate-selections (input.hidden-input)
-0:47.3.0 core:cancel (input.hidden-input)
```
### Config
```json
{
"core": {
"disabledPackages": [
"linter-flake8",
"autocomplete-python"
],
"telemetryConsent": "no",
"themes": [
"seti-ui",
"atom-monokai"
]
},
"activate-power-mode": {
"screenShake": {}
}
[Truncated]
language-objective-c, v0.15.1 (active)
language-perl, v0.37.0 (active)
language-php, v0.37.3 (active)
language-property-list, v0.8.0 (active)
language-python, v0.45.1 (active)
language-ruby, v0.70.2 (active)
language-ruby-on-rails, v0.25.1 (active)
language-sass, v0.57.0 (active)
language-shellscript, v0.23.0 (active)
language-source, v0.9.0 (active)
language-sql, v0.25.0 (active)
language-text, v0.7.1 (active)
language-todo, v0.29.1 (active)
language-toml, v0.18.1 (active)
language-xml, v0.34.12 (active)
language-yaml, v0.27.1 (active)
# Dev
No dev packages
```
<issue_comment>username_1: How do you reproduce this?
<issue_comment>username_0: Hmm... now it can be used again. I don't know what happened; maybe it was caused by the operating system?<issue_closed>
<issue_start><issue_comment>Title: Update to 1.3.3.1 ...
username_0: Any help?
$ meteor
[[[[[ ~\E\000nodejs\app112 ]]]]]
=> Started proxy.
=> Started MongoDB.
=> Errors prevented startup:
While processing files with ecmascript (for target web.browser):
module.js:338:15: Cannot find module 'babel-helper-function-name'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\Workstation1\AppData\Local.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\loose.js:17:56)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\Workstation1\AppData\Local.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\Workstation1\AppData\Local.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\index.js:56:38)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\Workstation1\AppData\Local.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\Workstation1\AppData\Local.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\index.js:12:6)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\Workstation1\AppData\Local.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.getDefaults
(C:\Users\Workstation1\AppData\Local.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\options.js:30:15)
at Object.getDefaultOptions (packages/babel-compiler/babel.js:9:1)
at BabelCompiler.BCp.processOneFileForTarget
(packages/babel-compiler/babel-compiler.js:95:1)
at BabelCompiler.<anonymous>
(packages/babel-compiler/babel-compiler.js:21:1)
at Array.forEach (native)
at BabelCompiler.BCp.processFilesForTarget
(packages/babel-compiler/babel-compiler.js:20:1)
While processing files with ecmascript (for target os.windows.x86_32):
module.js:338:15: Cannot find module 'babel-helper-function-name'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\Workstation1\AppData\Local.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\loose.js:17:56)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\Workstation1\AppData\Local.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\Workstation1\AppData\Local.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\index.js:56:38)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\Workstation1\AppData\Local.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
[Truncated]
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\Workstation1\AppData\Local.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.getDefaults
(C:\Users\Workstation1\AppData\Local.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\options.js:30:15)
at Object.getDefaultOptions (packages/babel-compiler/babel.js:9:1)
at BabelCompiler.BCp.processOneFileForTarget
(packages/babel-compiler/babel-compiler.js:95:1)
at BabelCompiler.<anonymous>
(packages/babel-compiler/babel-compiler.js:21:1)
at Array.forEach (native)
at BabelCompiler.BCp.processFilesForTarget
(packages/babel-compiler/babel-compiler.js:20:1)
=> Your application has errors. Waiting for file change.
<issue_comment>username_1: Me too! On Windows 10 x64.
<issue_comment>username_2: Same on Mac OS X. Compiled fine with 1.3.3.
[[[[[ ~/nfc-meteor ]]]]]
=> Started proxy.
=> Started MongoDB.
=> Errors prevented startup:
While processing files with ecmascript (for target web.browser):
/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/lib/compiler.js:102:23:
Object [object Object] has no method 'BlockStatement'
at Context.types.PathVisitor.fromMethodsObject.getBlockBodyPath
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/lib/compiler.js:102:23)
at Context.types.PathVisitor.fromMethodsObject.hoistImports
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/lib/compiler.js:147:25)
at Context.types.PathVisitor.fromMethodsObject.visitImportDeclaration
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/lib/compiler.js:229:10)
at Context.invokeVisitorMethod
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:342:43)
at Visitor.PVp.visitWithoutReset
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:194:28)
at NodePath.each
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path.js:99:22)
at visitChildren
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:217:14)
at Visitor.PVp.visitWithoutReset
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:202:16)
at visitChildren
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:244:21)
at Visitor.PVp.visitWithoutReset
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:202:16)
at NodePath.each
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path.js:99:22)
at visitChildren
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:217:14)
at Visitor.PVp.visitWithoutReset
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:202:16)
at visitChildren
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:244:21)
at Visitor.PVp.visitWithoutReset
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:202:16)
at NodePath.each
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path.js:99:22)
at visitChildren
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:217:14)
at Visitor.PVp.visitWithoutReset
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:202:16)
at visitChildren
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:244:21)
at Visitor.PVp.visitWithoutReset
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:202:16)
at Visitor.PVp.visit
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/node_modules/ast-types/lib/path-visitor.js:131:25)
at Object.compile
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/node_modules/reify/lib/compiler.js:23:23)
at /Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/index.js:61:33
at Cache.Cp.get
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/cache.js:94:19)
at Object.compile
(/Users/NFC/.meteor/packages/ecmascript/.0.4.5.xf6k52++os+web.browser+web.cordova/plugin.compile-ecmascript.os/npm/node_modules/meteor/babel-compiler/node_modules/meteor-babel/index.js:47:23)
at Object.Babel.compile (packages/babel-compiler/babel.js:26:1)
at packages/babel-compiler/babel-compiler.js:109:1
at Function.time (/tools/tool-env/profile.js:305:10)
at profile (packages/babel-compiler/babel-compiler.js:139:1)
at BabelCompiler.BCp.processOneFileForTarget (packages/babel-compiler/babel-compiler.js:108:1)
at BabelCompiler.<anonymous> (packages/babel-compiler/babel-compiler.js:21:1)
at Array.forEach (native)
at BabelCompiler.BCp.processFilesForTarget (packages/babel-compiler/babel-compiler.js:20:1)
=> Your application has errors. Waiting for file change.
<issue_comment>username_3: Same, 1.3.3.1 is still broken on Windows 10 x64.
<issue_comment>username_4: How did you manage to update to 1.3.3.1 on Windows? It takes an eternity for me; still downloading :(
<issue_comment>username_5: Started MongoDB.
Errors prevented startup:
While processing files with ecmascript (for target web.browser):
module.js:338:15: Cannot find module 'babel-helper-function-name'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (C:\Users\Brandon\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\loose.js:17:56)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load (C:\Users\Brandon\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (C:\Users\Brandon\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\index.js:56:38)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load (C:\Users\Brandon\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (C:\Users\Brandon\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\index.js:12:6)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load (C:\Users\Brandon\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.getDefaults (C:\Users\Brandon\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\options.js:30:15)
at Object.getDefaultOptions (packages/babel-compiler/babel.js:9:1)
at BabelCompiler.BCp.processOneFileForTarget (packages/babel-compiler/babel-compiler.js:95:1)
at BabelCompiler.<anonymous> (packages/babel-compiler/babel-compiler.js:21:1)
at Array.forEach (native)
at BabelCompiler.BCp.processFilesForTarget (packages/babel-compiler/babel-compiler.js:20:1)
While processing files with ecmascript (for target os.windows.x86_32):
module.js:338:15: Cannot find module 'babel-helper-function-name'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (C:\Users\Brandon\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\loose.js:17:56)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load (C:\Users\Brandon\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (C:\Users\Brandon\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\index.js:56:38)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load (C:\Users\Brandon\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (C:\Users\Brandon\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\index.js:12:6)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load (C:\Users\Brandon\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.getDefaults (C:\Users\Brandon\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\options.js:30:15)
at Object.getDefaultOptions (packages/babel-compiler/babel.js:9:1)
at BabelCompiler.BCp.processOneFileForTarget (packages/babel-compiler/babel-compiler.js:95:1)
at BabelCompiler.<anonymous> (packages/babel-compiler/babel-compiler.js:21:1)
at Array.forEach (native)
at BabelCompiler.BCp.processFilesForTarget (packages/babel-compiler/babel-compiler.js:20:1)
Your application has errors. Waiting for file change.
<issue_comment>username_6: Getting the same error... A nuisance, really; looks like I'll have to install Linux.
<issue_comment>username_7: Looks like a problem across platforms. Can you try to reproduce in a default app? My app with 1.3.3.1 is running fine on OS X El Capitan.
<issue_comment>username_7: (My fault for the Windows label)
<issue_comment>username_7: Since @username_2's problem turned out to be different, this looks like a Windows-specific problem.
<issue_comment>username_6: @username_7 Yes it is, because I have run it on Cloud9 just fine.
<issue_comment>username_8: What dependencies are you using? I have some dependencies from Babel, but I'm running them fine.
<issue_comment>username_9: Am I correct in assuming I have the same problem? I just installed Meteor for the first time today and want to make sure I'm not just doing something stupid.
"C:\Program Files (x86)\JetBrains\WebStorm 2016.1.2\bin\runnerw.exe" C:\Users\jwslu\AppData\Local\.meteor\meteor.bat
[[[[[ C:\Users\jwslu\SnapStrat\meteor ]]]]]
=> Started proxy.
=> Started MongoDB.
=> Errors prevented startup:
While processing files with ecmascript (for target web.browser):
module.js:338:15: Cannot find module 'babel-helper-function-name'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\jwslu\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\loose.js:17:56)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\jwslu\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\jwslu\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\index.js:56:38)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\jwslu\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\jwslu\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\index.js:12:6)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\jwslu\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.getDefaults
(C:\Users\jwslu\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\options.js:30:15)
at Object.getDefaultOptions (packages/babel-compiler/babel.js:9:1)
at BabelCompiler.BCp.processOneFileForTarget
(packages/babel-compiler/babel-compiler.js:95:1)
at BabelCompiler.<anonymous>
(packages/babel-compiler/babel-compiler.js:21:1)
at Array.forEach (native)
at BabelCompiler.BCp.processFilesForTarget
(packages/babel-compiler/babel-compiler.js:20:1)
While processing files with ecmascript (for target os.windows.x86_32):
module.js:338:15: Cannot find module 'babel-helper-function-name'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\jwslu\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\loose.js:17:56)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\jwslu\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous>
(C:\Users\jwslu\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\node_modules\babel-preset-meteor\node_modules\babel-plugin-transform-es2015-classes\lib\index.js:56:38)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
[Truncated]
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Module.Mp.load
(C:\Users\jwslu\AppData\Local\.meteor\packages\meteor-tool\1.3.3_1\mt-os.windows.x86_32\dev_bundle\lib\node_modules\reify\node\runtime.js:16:23)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.getDefaults
(C:\Users\jwslu\AppData\Local\.meteor\packages\ecmascript\0.4.5\plugin.compile-ecmascript.os\npm\node_modules\meteor\babel-compiler\node_modules\meteor-babel\options.js:30:15)
at Object.getDefaultOptions (packages/babel-compiler/babel.js:9:1)
at BabelCompiler.BCp.processOneFileForTarget
(packages/babel-compiler/babel-compiler.js:95:1)
at BabelCompiler.<anonymous>
(packages/babel-compiler/babel-compiler.js:21:1)
at Array.forEach (native)
at BabelCompiler.BCp.processFilesForTarget
(packages/babel-compiler/babel-compiler.js:20:1)
=> Your application has errors. Waiting for file change.
<issue_comment>username_6: @username_9 yes that's the one we are getting
<issue_comment>username_5: Anyone have any ideas on how to solve it? It seems like a Windows-only issue; on my Linux partition it runs perfectly fine.
<issue_comment>username_10: After an eternal uninstall on Win10 to solve an installation problem, now this. Argh.
<issue_comment>username_6: @username_10 I understand your pain! I was planning on spending Fri - Sun building in meteor... Now I have to try that social life stuff :(
<issue_comment>username_10: @username_6 This confirms my love/hate relationship with Meteor.
<issue_comment>username_11: @username_4 Yes, this is the third update this week for me. Updating the Meteor tool gets stuck on Windows; I waited hours for it to download, without success, until I uninstalled Meteor completely and downloaded the Meteor installer again.
Just to get Meteor Tool 1.3.3.1 with the babel-helper-function-name error too.
<issue_comment>username_6: @username_10 This was going to be my first time building with it to fill out my portfolio. I was so excited, and now I'm disappointed... I guess it's true what they say: "if it seems too good to be true..."
<issue_comment>username_10: @username_6 Meteor is awesome when it works.
<issue_comment>username_6: @username_10 It looks like it! If I can get building with it, it seems to cut out a lot of BS.
<issue_comment>username_5: Anyone have some info?
<issue_comment>username_12: Try looking at which package is involved, remove it, and start Meteor again; the missing package will be freshly downloaded. In my case, and possibly for the people with the logs above, deleting this folder fixed it (Windows 10 x64):
C:\Users\<username>\AppData\Local\.meteor\packages\ecmascript
<issue_comment>username_10: @username_12 Worked for me! Thanks!
<issue_comment>username_13: @username_12 Worked well! Thanks
<issue_comment>username_14: Did the trick! Thanks.
<issue_comment>username_15: @username_12 that worked for me! Thank you!
<issue_comment>username_16: @username_12 I confirm the fix, it works for me on Windows 10 x64. The problem should come from the Windows .exe installer, somethings should be broken when you install meteor via this executable.
<issue_comment>username_12: @username_16 Aye, I don't think the package itself is the problem either. My wild guess is, it's related to the (sometimes huge) length of file paths emerging.
<issue_comment>username_17: @username_12 I can confirm the fix too and it worked for me on Windows 10 64-bit
<issue_comment>username_18: @username_12 Worked well! Windows 10 x64.
<issue_comment>username_19: @username_12 Thank you!
<issue_comment>username_20: @username_12: thank you so much!!!! You saved my day after a weekend of struggling!
<issue_comment>username_21: @username_12 my thanks also!! - managed to get my project updated to the latest Meteor on Win 10 x64. Had to use 7-Zip to delete the ecmascript folder (in case anyone else has the path too long error).
<issue_comment>username_22: Thanks for providing a workaround that works, @username_12!
After digging into this problem and other similar path length problems, I think it's finally time to take the plunge and upgrade `meteor npm` to ~3.6.9 (currently 2.15.1). That upgrade most likely calls for a Meteor 1.3.4 release, rather than 1.3.3.2.
One way or another, this should be fixed in the next release.
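The path-length problem mentioned above comes from npm 2's nested `node_modules` layout. A toy sketch (all package names and the base path are made up for illustration) of how quickly nesting exceeds Windows' historical 260-character `MAX_PATH`, and why npm 3's flat layout avoids it:

```python
MAX_PATH = 260  # historical Windows path-length limit

base = r"C:\Users\someuser\AppData\Local\.meteor\packages\ecmascript\node_modules"
deps = ["babel-core", "babel-helper-function-name", "babel-types", "babel-runtime"]

# npm 2 style: every dependency nested inside its dependent
nested = base
for dep in deps * 3:  # three levels of the same chain, for illustration
    nested += "\\" + dep + "\\node_modules"
print(len(nested) > MAX_PATH)  # True: the nested layout blows past the limit

# npm 3 style: the same packages hoisted flat under a single node_modules
flat = max(len(base + "\\" + dep) for dep in deps)
print(flat > MAX_PATH)  # False: flat paths stay short
```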
<issue_comment>username_12: @username_22 Nice, looking forward to it. Thanks for the feedback & good work!
<issue_comment>username_22: Ok folks, thanks for your patience. Please try updating to the new release candidate:
```sh
meteor update --release 1.3.4-rc.0
```
The big difference is that Meteor 1.3.4 uses `npm@3` instead of `npm@2`, so problems with path length limits should mostly disappear (after you update successfully, at least).
<issue_comment>username_23: same issue here
<issue_comment>username_24: 'meteor update --release 1.3.4-rc.0' fixes it for my project, thanks @username_22
<issue_comment>username_25: Sorry to bother, would someone mind telling me how to update this? I am an extreme newbie. Thanks!
<issue_comment>username_26: Just run the meteor update from prompt,
$meteor update --release 1.3.4-rc.0
<issue_comment>username_25: @username_26 Thanks! I did it right.
<issue_comment>username_25: So, I did the upgrade and then received the below error: => Exited with code: 8
=> Your application is crashing. Waiting for file change.
Any ideas on what I am doing wrong?
Thanks!
Microsoft Windows [Version 10.0.10586]
(c) 2015 Microsoft Corporation. All rights reserved.
C:\Users\user> cd dev
C:\Users\user\dev> meteor update --release 1.3.4-rc.0
Installed. Run 'meteor update --release 1.3.4-rc.0' inside of a particular
project directory to update that project to Meteor 1.3.4-rc.0.
C:\Users\user\dev> cd image_share
C:\Users\user\dev\image_share> meteor update --release 1.3.4-rc.0
Changes to your project's package version selections from updating the
release:
babel-compiler upgraded from 6.8.2 to 6.8.3-rc.0
ecmascript upgraded from 0.4.5 to 0.4.6-rc.0
http upgraded from 1.1.6 to 1.1.7-rc.0
modules upgraded from 0.6.3 to 0.6.4-rc.0
templating upgraded from 1.1.11 to 1.1.12-rc.0
image_share: updated to Meteor 1.3.4-rc.0.
C:\Users\user\dev\image_share> meteor
[[[[[ C:\Users\user\dev\image_share ]]]]]
=> Started proxy.
=> Started MongoDB.
W20160621-00:47:36.293(-7)? (STDERR)
W20160621-00:47:36.544(-7)? (STDERR) app\server\main.js:1
W20160621-00:47:36.544(-7)? (STDERR) (function(Npm,Assets){(function(){import { Meteor } from 'meteor/meteor';
W20160621-00:47:36.559(-7)? (STDERR) ^^^^^^
W20160621-00:47:36.559(-7)? (STDERR) SyntaxError: Unexpected reserved word
W20160621-00:47:36.559(-7)? (STDERR) at C:\Users\user\dev\image_share\.meteor\local\build\programs\server\boot.js:292:30
W20160621-00:47:36.559(-7)? (STDERR) at Array.forEach (native)
W20160621-00:47:36.559(-7)? (STDERR) at Function._.each._.forEach (C:\Users\user\AppData\Local\.meteor\packages\meteor-tool\1.3.4-rc.0\mt-os.windows.x86_32\dev_bundle\server-lib\node_modules\underscore\underscore.js:79:11)
W20160621-00:47:36.575(-7)? (STDERR) at C:\Users\user\dev\image_share\.meteor\local\build\programs\server\boot.js:133:5
=> Exited with code: 8
W20160621-00:47:51.513(-7)? (STDERR)
W20160621-00:47:51.513(-7)? (STDERR) app\server\main.js:1
W20160621-00:47:51.513(-7)? (STDERR) (function(Npm,Assets){(function(){import { Meteor } from 'meteor/meteor';
W20160621-00:47:51.513(-7)? (STDERR) ^^^^^^
W20160621-00:47:51.513(-7)? (STDERR) SyntaxError: Unexpected reserved word
W20160621-00:47:51.528(-7)? (STDERR) at C:\Users\user\dev\image_share\.meteor\local\build\programs\server\boot.js:292:30
W20160621-00:47:51.528(-7)? (STDERR) at Array.forEach (native)
W20160621-00:47:51.528(-7)? (STDERR) at Function._.each._.forEach (C:\Users\user\AppData\Local\.meteor\packages\meteor-tool\1.3.4-rc.0\mt-os.windows.x86_32\dev_bundle\server-lib\node_modules\underscore\underscore.js:79:11)
W20160621-00:47:51.544(-7)? (STDERR) at C:\Users\user\dev\image_share\.meteor\local\build\programs\server\boot.js:133:5
=> Exited with code: 8
W20160621-00:48:02.262(-7)? (STDERR)
W20160621-00:48:02.264(-7)? (STDERR) app\server\main.js:1
W20160621-00:48:02.266(-7)? (STDERR) (function(Npm,Assets){(function(){import { Meteor } from 'meteor/meteor';
W20160621-00:48:02.269(-7)? (STDERR) ^^^^^^
W20160621-00:48:02.271(-7)? (STDERR) SyntaxError: Unexpected reserved word
W20160621-00:48:02.274(-7)? (STDERR) at C:\Users\user\dev\image_share\.meteor\local\build\programs\server\boot.js:292:30
W20160621-00:48:02.278(-7)? (STDERR) at Array.forEach (native)
W20160621-00:48:02.279(-7)? (STDERR) at Function._.each._.forEach (C:\Users\user\AppData\Local\.meteor\packages\meteor-tool\1.3.4-rc.0\mt-os.windows.x86_32\dev_bundle\server-lib\node_modules\underscore\underscore.js:79:11)
W20160621-00:48:02.296(-7)? (STDERR) at C:\Users\user\dev\image_share\.meteor\local\build\programs\server\boot.js:133:5
=> Exited with code: 8
=> Your application is crashing. Waiting for file change.
<issue_comment>username_22: Closing as fixed now; please reopen if the issue persists after `meteor update --release 1.3.4-rc.2`.<issue_closed> |
<issue_start><issue_comment>Title: No 'close' event when piped
username_0: When piping multiple source streams to a single through stream for the purposes of combining them, the source streams never close. The result is an "EventEmitter memory leak" for the `close`, `drain` and `finish` events. You can verify by adding the following code to through:
```node
stream.on('pipe', function(src) {
console.log('on pipe')
src.on('end', function() {
console.log('on end')
})
src.on('finish', function() {
console.log('on finish')
})
src.on('close', function() {
console.log('on close')
})
})
```
Here's the code to reproduce:
```node
var through = require('through')
var http = require('http')
var throughStream = through(null, function(){}) // Don't close the stream
throughStream.pipe(process.stdout)
http.createServer(function(req, res) {
req.pipe(throughStream)
res.end('hello world')
}).listen(8001)
```
I'm assuming this is an issue with the destination (through) stream staying open, but I'm not sure what specifically about pipe is causing it or how to get around it ATM.
<issue_comment>username_1: @username_0 process.stdout is a bad example since its a writable stream that never ends.<issue_closed>
<issue_comment>username_0: I suppose this doesn't matter. I didn't realize `.pipe` supports this already:
```node
source.pipe(dest, {end: false})
```
FWIW, looks like `_stream_readable.pipe` adds a [`close` listener] (https://github.com/joyent/node/blob/master/lib/_stream_readable.js#L571-L582) on the destination stream for [cleanup purposes](https://github.com/joyent/node/blob/master/lib/_stream_readable.js#L487-L537). Further, since I wasn't passing `end: true`, `dest.end()` is called instead of `cleanup` directly. In through, `autoDestroy` is irrelevant since `end` checks `stream.readable` before destroying. Since destination is still readable it doesn't destroy and thus never closes.
This seems like a leaky abstraction and extremely backwards on core's part since the source stream leaks dependent on the destination stream implementation.
If you want this functionality with through, you have 2 options:
**Option 1: Use pipe options**
```node
let throughStream = through(null, ()=>{})
src.pipe(throughStream, {end: false})
```
**Option 2: Add the following to `through`**
```node
stream.on('pipe', src => {
// Unfortunately there's not a better way to do this
src.removeAllListeners('end')
src.once('end', ()=> src.unpipe(stream))
})
```
So, ya, just use `pipeOptions` I suppose.
<issue_comment>username_0: @username_1 The example / issue is more about multiple source streams than the destination stream. I've updated the example to eliminate the confusion you mention, but the issue still exists.
<issue_comment>username_2: @username_0 this is not really a case that node streams (classic or new) handles very well.
I mostly use [pull-streams](https://github.com/username_2/pull-stream) which have much more well defined behavior around closing and errors, and are much simpler.
Probably the simplest way to make this work with node streams is to use `setMaxListeners` https://nodejs.org/api/events.html#events_emitter_setmaxlisteners_n
<issue_comment>username_0: Thanks. I'll checkout `pull-streams`. The `end` option does solve this at
least from the source perspective, which shifts the conversation from "is
it possible" to "would it be better to be able to specify this in the
destination stream". |
<issue_start><issue_comment>Title: Fix path for tslintConfig in package.json
username_0: Do we want some test?
<issue_comment>username_1: This is fine if `tslintConfig` is a `string`, but what if it's a configuration object?
In all reality though, nobody should be using `tslintConfig` anymore, everyone should use a `tslint.json` file
<issue_comment>username_1: Interesting, although the way the code works now is that it loads an object specified there and not a path. I'd rather not change this code - although hopefully nobody is using it, if they are might as well not change things. Sorry! |
<issue_start><issue_comment>Title: [WIP] Fixes #8289 added get_max_squared_sum
username_0:
#### Reference Issue
#8289
#### What does this implement/fix? Explain your changes.
This adds the missing helper that squares each element, sums the squares along every row, and returns the maximum of those row sums.
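In plain Python terms, the helper computes the maximum over rows of the sum of squared entries (the real implementation presumably uses NumPy/Cython for speed; this sketch only shows the semantics):

```python
def get_max_squared_sum(rows):
    """Max over all rows of the sum of squared entries (max squared L2 row norm)."""
    return max(sum(x * x for x in row) for row in rows)

X = [[1.0, 2.0], [3.0, 4.0]]
print(get_max_squared_sum(X))  # 25.0  (row [3, 4]: 9 + 16)
```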
#### Any other comments?
<issue_comment>username_1: Btw, you should only mark as WIP something which you expect to do more work upon before a full review.
<issue_comment>username_2: Not sure this deserves its own function but whatever..
<issue_comment>username_2: thx.
<issue_comment>username_1: that's how I felt the second time I looked at it too :) |
<issue_start><issue_comment>Title: [Linux] Process.memory_maps() should remove `delete` from path
username_0: ```
======================================================================
FAIL: test_memory_maps (test_process.TestProcess)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/username_0/svn/psutil/psutil/tests/test_process.py", line 601, in test_memory_maps
os.path.islink(nt.path), nt.path
AssertionError: /run/shm/sem.SiOnY9 (deleted)
```<issue_closed>
<issue_comment>username_0: Fixed in cc16e96f18c9cce8258db9d40a16426e7268f7f9. |
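The normalization presumably amounts to stripping the ` (deleted)` suffix that Linux appends in `/proc/<pid>/maps` when the backing file has been unlinked (a minimal sketch, not the actual psutil code):

```python
def strip_deleted_suffix(path):
    # Linux appends " (deleted)" in /proc/<pid>/maps when the backing
    # file has been unlinked; drop it so callers get a clean path
    suffix = " (deleted)"
    return path[:-len(suffix)] if path.endswith(suffix) else path

print(strip_deleted_suffix("/run/shm/sem.SiOnY9 (deleted)"))  # /run/shm/sem.SiOnY9
```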
<issue_start><issue_comment>Title: add Stanford talk
username_0:
<issue_comment>username_1: Its also on https://mvideos.stanford.edu/Seminar/582 and some people claim it hashes to
/ipfs/QmPkPwNx4rr9X9oma5BsMev6YT2GF9u6y1gDtLxG7NyPJq.
<issue_comment>username_2: We should make a "Media" page that has a big timeline with every single talk, paper, and links to press events. let's discuss it
<issue_comment>username_3: @username_2 I made https://github.com/ipfs/website/pull/71 , something like that, right? |
<issue_start><issue_comment>Title: decorator signature error using version >=3.4.0 on my Pi (Py3.2)
username_0: ## Issue Description
I went back to 3.3.0 now as I can not find anything about problems using new versions. Might my problem be related to python 3.2?
```
pi@raspberrypi ~/zeugs/bot $ sudo pip-3.2 install --upgrade 'praw==3.4.0'
Downloading/unpacking praw==3.4.0
Downloading praw-3.4.0.tar.gz (4.3Mb): 4.3Mb downloaded
Running setup.py egg_info for package praw
Downloading/unpacking decorator>=4.0.9,<4.1 (from praw==3.4.0)
Downloading decorator-4.0.10.tar.gz (68Kb): 68Kb downloaded
Running setup.py egg_info for package decorator
Downloading/unpacking requests>=2.3.0 (from praw==3.4.0)
Downloading requests-2.10.0.tar.gz (477Kb): 477Kb downloaded
Running setup.py egg_info for package requests
warning: no files found matching 'test_requests.py'
Downloading/unpacking six==1.10 (from praw==3.4.0)
Downloading six-1.10.0.tar.gz
Running setup.py egg_info for package six
no previously-included directories found matching 'documentation/_build'
Downloading/unpacking update-checker==0.11 (from praw==3.4.0)
Downloading update_checker-0.11.tar.gz
Running setup.py egg_info for package update-checker
Installing collected packages: praw, decorator, requests, six, update-checker
Running setup.py install for praw
Installing praw-multiprocess script to /usr/local/bin
Running setup.py install for decorator
Running setup.py install for requests
warning: no files found matching 'test_requests.py'
Running setup.py install for six
no previously-included directories found matching 'documentation/_build'
Running setup.py install for update-checker
Successfully installed praw decorator requests six update-checker
Cleaning up...
```
The new error:
```
comments = r.get_subreddit(SUBS_STRING).get_comments(limit=250)
File "/usr/local/lib/python3.2/dist-packages/praw/decorators.py", line 54, in wrapped
func_args = _make_func_args(function)
File "/usr/local/lib/python3.2/dist-packages/praw/decorator_helpers.py", line 33, in _make_func_args
func_items = inspect.signature(function).parameters.items()
AttributeError: 'module' object has no attribute 'signature'
```
lines are the same on praw 3.5.0
I uninstalled all dependencies and force installed 3.30 and it is working again.
## System Information
PRAW Version: 3.4 and 3.5
Python Version: 3.2
Operating System:
pi@raspberrypi ~ $ cat /etc/issue
Raspbian GNU/Linux 7 \n \l
pi@raspberrypi ~ $ cat /etc/debian_version
7.10
<issue_comment>username_1: Python 3.2 isn't an officially supported version anymore. Requests dropped support for it, and so has PRAW. https://github.com/kennethreitz/requests/pull/1326
It seems unrelated, but it's possible we're taking advantage of python 3.3+ features in the newer version of PRAWs. If PRAW3.3 works, then I say you should probably stick with that.
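For reference, `inspect.signature()` only exists on Python 3.3+, which is exactly what the traceback shows. A version-guarded shim (hypothetical — not PRAW's actual code) would look like:

```python
import inspect

def make_func_args(function):
    # inspect.signature() was added in Python 3.3; on older interpreters
    # fall back to the legacy getargspec()
    if hasattr(inspect, "signature"):
        return list(inspect.signature(function).parameters)
    return inspect.getargspec(function).args

def example(a, b, c=1):
    pass

print(make_func_args(example))  # ['a', 'b', 'c']
```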
I'm going to close the issue due to the unsupported version. Please reopen if you experience the same thing with a newer version of python (or python 2.7).<issue_closed> |
<issue_start><issue_comment>Title: Make Clone a lang-item
username_0: moved from rust-lang/rust#23501
<issue_comment>username_1: Could this be implemented using default impls of `Clone` and `Copy`? I know that would be a pretty serious breaking change, because there could be types that are only sound because they don't implement `Clone`. But in principle, wouldn't closures be `Clone` if their environment is `Clone` if that were the case (or do default impls not apply to closures)?
<issue_comment>username_2: This is also needed to fix rust-lang/rust#28229
<issue_comment>username_3: Just thought I'd :+1: this and mention a couple use-cases I come across quite frequently:
- Iterator ergonomics - being able to clone Iterators with some `F: Fn + Clone` field
- real-time thread synchronisation - being able to clone a `F: Fn(&mut State) + Clone` and send it to multiple real-time threads across via channels without requiring `Arc`
<issue_comment>username_4: Hit the same issue; here is an example:
```rust
#[derive(Debug)]
struct Take(String);
fn make<F>(func: F) -> Box<Fn(&str) -> Take>
where F: FnOnce(String) -> Take + Copy + 'static
{
Box::new(move |s| {
func(s.to_string())
})
}
fn main() {
let m = make(|s|Take(s));
println!("{:?}", m("abc"));
}
```
should compile.
<issue_comment>username_5: I've started hitting this a lot, mostly in the "iterator ergonomics" that @username_3 mentioned. I'm not sure why `Clone` needs to be a lang item (maybe there's something I'm missing), but having appropriate closures implement `Clone` sure would be convenient.
<issue_comment>username_6: I've written this up as https://github.com/rust-lang/rfcs/pull/2132.<issue_closed>
<issue_comment>username_7: Since the RFC was accepted, closing in favour of the tracking issue https://github.com/rust-lang/rust/issues/44490 |
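For anyone landing here later: with RFC 2132 merged, closures whose captures are all `Clone` are themselves `Clone` on modern Rust, so the use cases above work without `Arc`. A minimal sketch:

```rust
// A closure capturing only Clone types is itself Clone (RFC 2132 / modern Rust).
fn make_greeter(prefix: String) -> impl Fn(&str) -> String + Clone {
    move |s: &str| format!("{}{}", prefix, s)
}

fn main() {
    let f = make_greeter("hi ".to_string());
    let g = f.clone(); // cloning the closure: this is what the issue asked for
    assert_eq!(f("a"), "hi a");
    assert_eq!(g("b"), "hi b");
    println!("both clones work");
}
```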
<issue_start><issue_comment>Title: useForeignObject
username_0: How do I set useForeignObject to false in Chartist?
I want to transform the foreignObject elements in the SVG to canvas, but that failed,
so I want to customise this option.
<issue_comment>username_1: Hi,
Do you have a code example with just Chartist for what you trying to do ?
<issue_comment>username_0: Finally, I found a way to transform the foreignObject in the SVG to canvas:
remove the 'xmlns' attribute from the 'foreignObject span' elements, to avoid the error about reusing the namespace.
before svg2canvas:
```javascript
$(svg).find('foreignObject span').each(function () {
$(this).removeAttr('xmlns')
})
```<issue_closed> |
<issue_start><issue_comment>Title: Export http payload
username_0: This PR is trying to solve https://github.com/elastic/beats/issues/2143
With this change, the user doesn't need to export the raw request and response in order to index the HTTP body in Elasticsearch. With this PR, the user can specify a list of content types in the `include_body_for` configuration option, and for matching HTTP requests/responses the HTTP body is exported under `http.request.body` or `http.response.body`.
The PR includes the following changes in the `http` module of Packetbeat:
- [x] group fields by request and response, so `http.request` and `http.response` are created
- [x] add `headers` to `http.request` and `http.response`. If `send_all_headers` is enabled, then all HTTP headers are exported, if it's disabled then only the `Content-Type` (if it's not empty) and `Content-Length` are exported.
- [x] add `body` to the `http.request` or `http.response` if the content-type is part of the `include_body_for` configuration option.
- [x] export `params` under `http.request`
- [x] export `code` and `phrase` under `http.response`
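As a configuration sketch, the options described above might be used like this in `packetbeat.yml` (the exact top-level key layout is an assumption; the option names are from this PR):

```yaml
http:
  ports: [80, 8080, 9200]
  send_all_headers: true
  # export http.request.body / http.response.body for these content types
  include_body_for:
    - "application/json"
    - "application/x-www-form-urlencoded"
```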
Here is what an HTTP event looks like:
```
{
"@timestamp": "2016-08-03T11:47:53.404Z",
"beat": {
"hostname": "mar.local",
"name": "mar.local"
},
"bytes_in": 1431,
"bytes_out": 1997,
"client_ip": "192.168.0.86",
"client_port": 50752,
"client_proc": "",
"client_server": "",
"http": {
"request": {
"body": "...",
"headers": {
"accept": "application/json, text/javascript, */*; q=0.01",
"accept-encoding": "gzip, deflate",
"accept-language": "en-US,en;q=0.8",
"connection": "keep-alive",
"content-length": "943",
"content-type": "application/x-www-form-urlencoded; charset=UTF-8",
"host": "192.168.0.86:9200",
"origin": "chrome-extension://lhjgkmllcaadmopgmanpapmpjgmfcfig",
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"
},
"params": "..."
},
"response": {
"body": "...",
"code": 200,
"headers": {
"content-length": "1909",
"content-type": "application/json; charset=UTF-8"
},
"phrase": "OK"
}
},
"ip": "192.168.0.86",
"method": "POST",
"path": "/logstash-*/_search",
"port": 9200,
"proc": "",
"query": "POST /logstash-*/_search",
"request": "...",
"response": "...",
"responsetime": 362,
"server": "",
"status": "OK",
"type": "http"
}
```
NOTE: the Packetbeat dashboards need to be updated in a separate PR
<issue_comment>username_1: We should mention the new `body` field in the Changelog.
<issue_comment>username_0: All the comments were addressed.
<issue_comment>username_0: The PR is ready for the final review. I added more tests to cover more use cases.
<issue_comment>username_0: I added a test with a wrong Content-Type value. The POST request generated by Console/Sense/curl contains `form-urlencoded` in the Content-Type, but the attachment is a JSON object. I opened an [issue](https://github.com/elastic/kibana/issues/7963) in Kibana.
<issue_start><issue_comment>Title: Query by relations, yet another version
username_0: 
Models:
```
Customer = ds.createModel('customer', {
name: {
id: true,
type: String
},
vip: {
type: Boolean
},
address: String
});
Store = ds.createModel('store', {
id: {id: true, type: String},
state: String
});
Retailer = ds.createModel('retailer', {
id: {id: true, type: String},
name: String
});
// Relations
Customer.belongsTo(Store, {as: 'store', foreignKey: 'store_id'});
Store.belongsTo(Retailer, {as: 'retailer', foreignKey: 'retailer_id'});
```
Query by retailer name brings 2 INNER JOINS:
```
it('builds SELECT with deep inner join', function() {
var sql = connector.buildSelect('customer',
{order: 'name', limit: 5, where: {store: {retailer: {name: 'oxxo'}}}});
expect(sql.toJSON()).to.eql({
sql: 'SELECT `CUSTOMER`.`NAME`,`CUSTOMER`.`VIP`,`CUSTOMER`.`ADDRESS`,' +
'`CUSTOMER`.`STORE_ID` FROM `CUSTOMER`' +
' INNER JOIN `STORE` ON `CUSTOMER`.`STORE_ID`=`STORE`.`ID`' +
' INNER JOIN `RETAILER` ON `STORE`.`RETAILER_ID`=`RETAILER`.`ID`' +
' WHERE `RETAILER`.`NAME`=$1 ORDER BY `CUSTOMER`.`NAME` LIMIT 5',
params: ['oxxo']
});
});
```
Order clause brings LEFT JOIN:
```
it('builds SELECT with inner and left outer joins', function() {
var sql = connector.buildSelect('customer',
{where: {store: {state: 'NY'}}, order: {store: {retailer: 'name'}}, limit: 5});
expect(sql.toJSON()).to.eql({
sql: 'SELECT `CUSTOMER`.`NAME`,`CUSTOMER`.`VIP`,`CUSTOMER`.`ADDRESS`,' +
'`CUSTOMER`.`STORE_ID` FROM `CUSTOMER`' +
' INNER JOIN `STORE` ON `CUSTOMER`.`STORE_ID`=`STORE`.`ID`' +
' LEFT JOIN `RETAILER` ON `STORE`.`RETAILER_ID`=`RETAILER`.`ID`' +
' WHERE `STORE`.`STATE`=$1 ORDER BY `RETAILER`.`NAME` LIMIT 5',
params: ['NY']
});
});
```
To make it work for REST HTTP requests I had to modify loopback-datasource-juggler. Method:
```
DataAccessObject._normalize = function (filter) {
```
I commented out all the logic about ```order``` parameter processing. As I need it to accept more complex objects. (But string and array are also supported)
<issue_comment>username_1: Can one of the admins verify this patch? To accept patch and trigger a build add comment ".*ok\W+to\W+test.*"
<issue_comment>username_2: Compare these where the first finds records where a customer field name matches a regexp and the second finds records where a joined table's field matches a value.
`Customers.find({"where": {"name": {"regexp": '^T'}}});`
`Customers.find({"where": {"store": {"state": 'NY'}}});`
I think table names as properties introduces ambiguity with existing conventions of filtering. Or is it a flexibility we should embrace, being backed by the model that will decide whether a table name or field name was meant?
How about this convention:
`Customers.find({"where": {"store.state": 'NY'}});`
<issue_comment>username_3: @username_2 `store` should be the name of relation.
<issue_comment>username_3: @username_0 Thank you for the patch. Have you incorporated/merged changes from https://github.com/strongloop/loopback-connector/pull/41?
<issue_comment>username_2: @username_3 Ah yes. Thanks. After giving it a second thought, the proposed convention might be just fine. Thanks for your contribution @username_0 . I'll probably be using it soon.
<issue_comment>username_0: @username_3 Sorry, I didn't have a chance to use the code from #41, as I was playing with my fork in parallel with you.
My PR is only about inner (for where clause) and left outer (for order clause) joins of belongsTo relations.
There are now three PR about the same subject. So I really hope the loopback team can choose the best options and let us finally have that feature in near future ;)
<issue_comment>username_3: @username_0 Two comments:
- Does your PR only handle `belongsTo` relation?
- I'm not sure if the `limit` property inside `order` is good idea. Is it different to have a top-level `limit` under `filter`?
<issue_comment>username_0: @username_3
1) ```belongsTo``` only. As I do not have practical application of joining to ```hasMany``` right now. Can you think of something?
2) About limit: I'm not sure that I understand. ```limit``` and ```offset``` are params that have nothing to do with order; those params are added after the ORDER BY clause. Why do you ask?
<issue_comment>username_3: @username_0 Please ignore 2. I misread your example and thought the `limit` property is inside `order`.
<issue_comment>username_0: Hi @username_3! Is anything that I can do to speed up the process?
<issue_comment>username_4: Connect to : https://github.com/strongloop/loopback-connector/pull/31
<issue_comment>username_3: @username_0 Sorry for the delay. I want to go slowly here as it introduces a new programming model.
I also want to finish https://github.com/strongloop/loopback-connector/pull/44 1st.
<issue_comment>username_0: Hi @username_3, do you still want to finish #44 first? With all respect... I do not any see any progress for 8 days.
<issue_comment>username_5: @username_3 The Loopback issue this PR fixes (https://github.com/strongloop/loopback/issues/517) is extremely popular, is there any chance you could give @username_0 feedback on what needs to be done to merge this?
<issue_comment>username_6: @username_0 @username_3 are you guys still considering merging in this PR? We need this functionality in a project of ours using loopback and I'd be happy to help.
In fact, I forked @username_0's branch and added a couple fixes: https://github.com/username_6/loopback-connector/tree/query-relation
- Synced up branch with master
- Changing comparison operators from `==` to `===` broke `find` in MySQL connector
- `count` was broken querying related models because it was missing the `INNER JOIN` statement
Lastly, does anyone have ideas how this might be amended to include support for `hasMany` relations?
Thanks!
<issue_comment>username_1: Can one of the admins verify this patch?
<issue_comment>username_0: @username_6: that's cool, can you send me PR to my username_0 branch please? I'm not a part of the team so it's not up to me to merge it to loopback. Joins are not the only thing that is not implemented, DISTINCT, GROUP BY are also missing. So @username_6 you need to think carefully before going forward with loopback in production.
<issue_comment>username_1: Can one of the admins verify this patch?
<issue_comment>username_0: It's now funny to read my more than one year old comments: _...I do not any see any progress for 8 days.._
<issue_comment>username_1: Can one of the admins verify this patch? |
<issue_start><issue_comment>Title: Changes to use httpxx in a CMake super project
username_0:
<issue_comment>username_1: HI @username_0 , thanks for taking the time to contribute this! Looks like your build is failing, can you check the Travis-CI log? If it passes, I'll merge it :-)
<issue_comment>username_0: Working on it. Seems due to an old cmake version on the Travis node - any chance you upgrade to Ubuntu 14.04 from the current 12.04?
<issue_comment>username_0: Btw, in https://github.com/username_0/CMake/blob/master/Findhttpxx.cmake is a find script for downstreams. The right solution would be to use the cmake EXPORT magic, but for this to work smoothly cmake 3 is required, and both httpxx and http-parser have to be tweaked, which I could not be bothered with.
<issue_comment>username_0: ping
<issue_comment>username_1: Thanks for contributing this :-)
<issue_comment>username_0: Thanks - FYI we will start using it soon for HBPVIS/zeq#115 |
<issue_start><issue_comment>Title: Copy changes per maria's request
username_0: #### What's this PR do?
Marketing changes requested by Maria, so she can send the live site around for final copy approval.
See: https://docs.google.com/document/d/1qvuiTVEOR-sYMOpvDocQKTs5H5Aym1aSekewAQO_ma4/edit
<issue_comment>username_1: We're spelling it "Master's" in some places and "master's" in others. Can we pick one? |
<issue_start><issue_comment>Title: backspace key is the worst
username_0: Please provide a succinct description of the issue.
#### Repro steps
Provide the steps required to reproduce the problem
Type some code.
Click logs
Decide to delete some code and press backspace.
#### Expected behavior
Nothing happens.
#### Actual behavior
Code changes all lost as browser navigates back to previous page.
#### Known workarounds
Dig out your backspace key and throw it into Mt Doom.
#### Related information
Chrome has signalled they will remove backspace as a shortcut for back navigation.
But Azure Functions shouldn't let me navigate back, or shouldn't let me navigate without a dialog that says I have unsaved content.<issue_closed>
<issue_comment>username_1: Moved this issue to the portal repository. |
<issue_start><issue_comment>Title: Minimal Loader Build
username_0: This creates a minimal loader build of 4.6KB minified and gzipped.
Next step is to add System.register extension to that to give us the minimal production loader necessary.
<issue_comment>username_0: This has now been merged. |
<issue_start><issue_comment>Title: Nested messages with `oneof` lose field name
username_0: protobuf.js version: 4.6.5
Normally, if `message` has a oneof field named `foo`, it has a string located at `message.foo` which indicates the field name of the oneof value. This breaks when creating a message with a nested oneof from a pure JS object:
```js
'use strict';
const Protobuf = require('protobufjs');
const prototext = `
message Foo {
oneof a {
string b = 1;
string c = 2;
}
}
message Bar {
optional Foo foo = 1;
}
`;
const p = Protobuf.parse(prototext).root.nested;
let foo = p.Foo.create({ b: 'hi' });
console.log(foo.a); // "b"
let works = p.Bar.create({ foo: foo });
console.log(works.foo.a); // "b"
let broken = p.Bar.create({ foo: { b: 'hi' }});
console.log(broken.foo.a); // "undefined"
```
<issue_comment>username_1: This "breaks" because `.create` just sets the properties. You have two options there:
```js
p.Bar.create({ foo: p.Foo.create({ b: 'hi' })}) // recommended
// or
p.Bar.from({ foo: { b: 'hi' }}) // uses converters, but is slower
```
<issue_comment>username_1: Closing this issue for now as it hasn't received any replies recently. Feel free to reopen it if necessary!<issue_closed> |
<issue_start><issue_comment>Title: Selective mocking with fallback example needs to specify `fallThrough: true`
username_0: https://github.com/Workiva/w_transport/blob/master/docs/guides/MockSelectiveMockingWithFallback.md
<issue_comment>username_1: It looks like you have not included any tickets in the title of this issue. Please create a ticket for this issue, or click <a href='https://w-rmconsole.appspot.com/api/v1/issues/Workiva/w_transport/242/addTicketToTitle'>here</a> to have Rosie create one for you. |
<issue_start><issue_comment>Title: Inability to match HTML files should give an error
username_0: I had a configuration of useminPrepare that didn't match any HTML files. It would be great if the useminPrepare task would give an error / warning / log message that let me know this is why the downstream task configuration is undefined.
```
Running "useminPrepare:html" (useminPrepare) task
Going through to update the config
Looking for build script HTML comment blocks
Configuration is now:
concat:
  { dist: {} }
uglify:
  undefined
cssmin:
  undefined
```
<issue_comment>username_1: Should be covered by #518 |
<issue_start><issue_comment>Title: Go through the docs and add better examples
username_0: It's good to have one basic example that demonstrates how the function works (`point = 1,2,5; set.contains(point);`), but then maybe also write one that's more realistic. You know, like "if a fireball is shot at this spot, will the player be in range?" and stuff like that.
<issue_comment>username_0: Explanation of flood_with_strength(): "A goblin is standing at point X. A fireball explodes at point Y. Are there any walls blocking the fireball from burning the goblin?"
Write good examples with good descriptions and variable names... like a better version of this pseudocode:
```
bad_guys = set_of_bad_guys()
target = fireball_coordinates();
explosion_range = flood_with_strength(target, args, args);
hurt_guys = explosion_range.intersect(bad_guys);
```
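A runnable sketch of the kind of example the issue is asking for; `flood_with_strength` and all names here are hypothetical stand-ins for the real API, implemented as a plain breadth-first flood purely so the example runs on its own:

```python
# Hypothetical sketch: flood_with_strength and the set operations are
# stand-ins for the real library's API, implemented here as a plain
# breadth-first flood so the example is self-contained.
def flood_with_strength(origin, strength, blocked):
    """All cells reachable from origin within `strength` steps,
    never expanding through blocked cells."""
    frontier = {origin}
    reachable = {origin}
    for _ in range(strength):
        next_frontier = set()
        for (x, y) in frontier:
            for neighbor in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if neighbor not in reachable and neighbor not in blocked:
                    next_frontier.add(neighbor)
        reachable |= next_frontier
        frontier = next_frontier
    return reachable

# "A fireball explodes at (0, 0). Which bad guys get hurt?"
bad_guys = {(2, 0), (5, 5)}
explosion = flood_with_strength(origin=(0, 0), strength=2, blocked=set())
hurt_guys = explosion & bad_guys  # -> {(2, 0)}
```

The intersection at the end answers the fireball question directly, which is the kind of story a docs example should tell.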
<issue_comment>username_0: Similarly, go through the code and use better names. Declare some variables before doing if statements so that the intent of the if statement becomes clearer. Stuff like that:
```
// bad
if point == other {
do(point);
}
// good
duplicated = old == new;
if duplicated {
delete(old);
}
``` |
<issue_start><issue_comment>Title: feat(apps/whale): improve getsubgraphswaps query perf with inmemory indexer
username_0: <!-- Thanks for sending a pull request! -->
#### What this PR does / why we need it:
- Drastically improves performance of `/getsubgraphswaps`
#### Which issue(s) does this PR fix?:
Part of #1024
#### Additional comments?:
- When the inmemory indexer is catching up, controller will rely on the old logic of brute force search for swaps starting from the chain tip
- Indexer will take over all requests when it's caught up with chain tip
<issue_comment>username_1: Just throwing out my thoughts - I was thinking whether this might be too complex and about whether we can reduce the complexity.
Currently, the implementation is somewhat of a push and pull design both running at the same time. Pulling when the indexer hasn't caught up. Pushing it so that it catches up. This implementation creates cognitive complexity since the implementation is convoluted to understand.
The other way we could do it is having it as "pull" only, where you cache "Most Recent Swap" via a timestamp. Using `txid` as the key since `txid` lookup is the most expensive part of the logic (Block -> Txn in Block -> Vout -> History). You could effectively cut down on the lookup cost by caching "Txn in Block" -> onwards. So we cache last 10k records of the most expensive part?
I am not sure if this makes sense, but might reduce complexity since you remove the "indexer" part of the equation by relying on a pull design when controller load. We can also run the pull logic on a interval to keep caching.
<issue_comment>username_0: Yea agree that it's somewhat convoluted, although it's following whale's indexer design with the concurrent requests tweak for performance.
Your approach sounds a lot simpler - lemme try it out and see if we can achieve similar performance gain |
<issue_start><issue_comment>Title: Scan vs Query based on params
username_0: At the moment, get_query just returns scan. The normal params (first, after, last, etc.) are not translated to the dynamodb equivalents.<issue_closed>
<issue_comment>username_0: Relationships are now fully supported and tested. |
<issue_start><issue_comment>Title: Add support for python 3.5
username_0: I added python3.5 to the `envlist` in `tox.ini` and updated the project so that all the tests continued to pass under all the configurations in the `envlist`.
```
$ tox
... snip ...
OK
summary
py27-django15: commands succeeded
py27-django16: commands succeeded
py27-django17: commands succeeded
py27-django18: commands succeeded
py27-djangorq: commands succeeded
py35-django18: commands succeeded
congratulations :)
```
<issue_comment>username_1: Can you update the classifiers in setup.py to have:
`Programming Language :: Python :: 2.7`
`Programming Language :: Python :: 3.5`
<issue_comment>username_0: I think I hit all the comments from review. Let me know if there is anything else.
<issue_comment>username_2: π |
<issue_start><issue_comment>Title: Question on loading files
username_0: ```javascript
gulp.src([file1path, file2path])
.pipe(concat(fileName)) //file1content + file2content
```
Does gulp ensure that the files are processed in the order in which they are specified in the src array?
<issue_comment>username_1: ```
var concat = require('gulp-concat');
gulp.task('scripts', function() {
gulp.src(['./lib/file3.js', './lib/file1.js', './lib/file2.js'])
.pipe(concat('all.js'))
.pipe(gulp.dest('./dist/'))
});
```
https://github.com/wearefractal/gulp-concat#usage
Anyway, this is not the best place to ask support questions, especially when the answer is readily available in the docs or can be quickly verified :-)
<issue_comment>username_2: Yes, but plugins are not ordered. So if you put anything between gulp.src and the concat plugin it might lose the order. You really shouldn't structure your app like this. Use a module loader.<issue_closed> |
<issue_start><issue_comment>Title: Rate Limiting
username_0: Reworks how rate limits are handled by the API and resolves a couple of issues in how checks against rate limits happen and where they occur in loops.
The idea is that we attach a `RateLimit` object to the Api at `Api.rate_limit`. This then stores and retrieves information from Twitter as to the limits in place for endpoints. Since `_RequestUrl` already knows what URL it's going to be hitting, it should be the function to check for the limit status of that URL. This saves everybody the trouble of:
* figuring out what endpoint you're working with (see the non-standard endpoints in ``twitter/ratelimit.py`` for reference);
* manually calling a check on a rate limit within a method (this was pretty inconsistent anyway); and
* figuring out where in loops to call the check on the rate limit (this was brought up in #158, #274, and #71).
The `rate_limit` object is responsible for parsing the URL from `_RequestUrl` and divining an endpoint from that. It'd be super nice if Twitter just told you what endpoint you just requested, but they don't, hence the ugly regex in ``twitter/api.py`` to figure it out. After that, the RateLimit object returns the limit as a named tuple with attributes of ``limit``, ``remaining``, and ``reset``, and if ``remaining`` is 0 and ``reset`` is in the future, the Api will sleep the appropriate amount of time before trying to make the GET/POST request.
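A minimal sketch of the sleep-on-limit behavior described above; the named tuple mirrors the attributes mentioned in the PR, but `maybe_sleep` and its signature are hypothetical, not the actual python-twitter code:

```python
import time
from collections import namedtuple

# Attribute names mirror the named tuple described above; the class and
# function names here are hypothetical illustrations.
EndpointRateLimit = namedtuple("EndpointRateLimit",
                               ["limit", "remaining", "reset"])

def maybe_sleep(status, now=None):
    """Sleep until the reset time when no calls remain; returns the
    number of seconds slept (0 if no sleep was needed)."""
    now = time.time() if now is None else now
    if status.remaining == 0 and status.reset > now:
        wait = status.reset - now
        time.sleep(wait)
        return wait
    return 0
```

Passing `now` explicitly makes the check easy to test without actually sleeping.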
Per #274 I'd considered adding a ``break_on_rate_limit`` kwarg to the API, but that turns out to be a ton of work on every API method to figure out when and how to break out and return the objects, so I'm leaving that off of this PR.
@username_1 let me know what you think of this approach; it's a lot to parse through, but I think it addresses most of the issues people were having. I'm happy to continue work on it and it'd be nice to have another set of eyes.
<!-- Reviewable:start -->
[<img src="https://reviewable.io/review_button.svg" height="40" alt="Review on Reviewable"/>](https://reviewable.io/reviews/username_1/python-twitter/299)
<!-- Reviewable:end -->
<issue_comment>username_1: that is a thing of beauty - well done sir!
+1
<issue_comment>username_0: Wow, thanks! Do you think we should close those issues since I think the problems encountered are mostly fixed? I forgot to mention that this kind of changes the behavior of the API a bit in that you have to specifically request that the API sleep when it hits a rate limit. I can write up some documentation for the change, since I think it might catch folks off-guard.
<issue_comment>username_1: +1 to closing the issue |
<issue_start><issue_comment>Title: Rule to limit variable statements to a single statement
username_0: It would be nice if we could disallow the following in our codebase:
```TypeScript
var x: string, y: number;
```
<issue_comment>username_1: to clarify: do you mean disallow multiple variable definitions in the same statement?
<issue_comment>username_0: Yup, sorry about that. It was fairly late.
<issue_comment>username_2: Looks like the corresponding eslint rule for this is called `one-var`: http://eslint.org/docs/rules/one-var
<issue_comment>username_1: :+1:
<issue_comment>username_3: related PR: #1194
<issue_comment>username_4: Decided not to go with `one-var` as it's a somewhat brief name, instead we'll use `no-multiple-variable-declaration`, which fits TSLint's style better hopefully. Thanks to @username_3 for implementing this! It'll be around in the next release. (Still up for renaming things if people feel strongly about it one way or the other)<issue_closed>
<issue_comment>username_4: Actually, I think `no-multiple-variable-declaration` is ambiguous in which style it's actually disallowing. Now thinking about `one-variable-per-declaration` instead |
<issue_start><issue_comment>Title: py3: __getslice__() is not in py3
username_0: The `__getslice__` method has been deprecated since 2.6 and removed in py3; it should be replaced with `__getitem__` taking a slice index.
https://docs.python.org/2/library/operator.html#operator.__getslice__
http://stackoverflow.com/questions/22278457/list-delslice-python-3<issue_closed>
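For reference, a minimal sketch of the py3-compatible pattern: `__getitem__` receives a `slice` object where py2's `__getslice__` used to receive start and stop (class and method bodies here are illustrative, not wiredata's code):

```python
class Window:
    """Minimal illustration: py2 code often implemented __getslice__,
    but in py3 only __getitem__ exists, and obj[1:3] arrives there as
    a slice object."""

    def __init__(self, items):
        self._items = list(items)

    def __getitem__(self, index):
        if isinstance(index, slice):
            # py2's __getslice__(self, i, j) maps to this branch;
            # index.start / index.stop / index.step replace i and j.
            return self._items[index.start:index.stop:index.step]
        return self._items[index]
```

`w[1:3]` hits the slice branch, `w[2]` hits plain indexing.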
<issue_comment>username_1: I'm closing this issue as the only `__getslice__` uses that are left are in wiredata, and we want them
<issue_start><issue_comment>Title: Covers Not Updating
username_0: The covers of mangas will not update; I have to go through each manga and manually refresh it to get the covers to update and appear in my library and catalogue.
Version: r000 or v2.2.0
<issue_closed>
<issue_comment>username_1: #370 and #334 |
<issue_start><issue_comment>Title: Add the precision option.
username_0: All: I wanted to make sure I tested both `ruby-sass` and `libsass`; hence the refactoring of the tests. It's probably a little controversial, so I'll defer to your style demands.
I also added a guard on the `ruby-sass` tests because they run long and, on Travis, I'm sure they'll run for incredibly long.
<issue_comment>username_1: Looks pretty good, but please recombine the test file. Wrapping the suite in a function within the same module will work just as well (see chokidar's tests for an example of this being done to run the same tests with different settings). |
<issue_start><issue_comment>Title: ENH: Added matrix_normal method to return equivalent multivariate_normal
username_0: For #5478
- A method which returns the equivalent multivariate normal distribution
- Some additions/improvements to the documentation
<issue_comment>username_0: Could still use some more documentation. I've just lazily linked the Wikipedia article for now. |
<issue_start><issue_comment>Title: Deleting a page which is linked with a smartcontent breaks the homepage
username_0: | Q | A
| --- | ---
| Bug? | yes
| New Feature? | no
| Sulu Version | 1.4.4
| Browser Version | any
#### Actual Behavior
I have set a smartcontent block on the homepage. I set the source of the smartcontent to specific content page. Then I deleted that specific page.
By requesting the homepage I got following error:
Item b8d3f9e6-b6dc-4b6f-a9fd-04a582a1c3ca not found in workspace default_live.
Everything worked again once I deleted the smartcontent block on the homepage. (Resetting the source of the SC would probably have worked too.)
#### Expected Behavior
Since the smartcontent was still looking for a non-existing page, we might just remove its source when deleting the page the SC is pointing to.<issue_closed>
<issue_comment>username_1: fixed by #3170 |
<issue_start><issue_comment>Title: Problem with React Native and react-dom
username_0: When I try to use react-redux-form in my React Native application I get the following error:
`EventPluginRegistry: Cannot inject event plugin ordering more than once. You are likely trying to load more than one copy of React.`
This is the same issue as [this one](https://github.com/facebook/react-native/issues/8007).
This problem makes react-redux-form unusable with React Native.
I have to admit that I am relatively new to developing with React and React Native, so maybe I am missing something simple?
If you need some more information just let me know what you need.<issue_closed>
<issue_comment>username_1: @username_0 I found the issue, was able to replicate it in a React Native test project, and went ahead and fixed it. Thanks for the report! |
<issue_start><issue_comment>Title: Passing the pubProfile path needs to be optional
username_0: Right now the script fails when the publishProperties is populated but the pubProfile path is not provided. pubProfilePath should be optional.
<issue_comment>username_1: How are tests passing then?
<issue_comment>username_0: Invoking the scripts from VS was failing. Not the tests. I have made the changes in VS to pass the pubProfilePath for now. Once this issue is fixed, will revert the change in VS.
<issue_comment>username_1: So it's the bug in vs or in the script/module?
<issue_comment>username_0: The bug is in the module. PubProfilePath has a notnull check. This check needs to be removed. I can make this change. Will also add a test to test this scenario.
<issue_comment>username_1: @username_0 OK thanks, i see it now. I was just trying to make sure that I understood why it's failing in VS but not the command line. From the command line the validate checks are only invoked if the `-pubProfilePath` parameter is passed. From VS I'm guessing that you always pass it and sometimes the value is null.
<issue_comment>username_0: From VS it is failing even if I am not passing the parameter at all -pubProfilePath.
Cannot validate argument on parameter 'pubProfilePath'. The argument is null, empty, or an element of the argument collection contains a null value. Supply a collection that does not contain any null values and then try the command again.
If I remove the validation on pubProfilePath it works.
```
[Parameter(Position=2,ValueFromPipelineByPropertyName=$true)]
[ValidateNotNull()]
[ValidateScript({Test-Path $_})]
[System.IO.FileInfo]$pubProfilePath
```
<issue_comment>username_1: Go ahead and remove it.<issue_closed> |
<issue_start><issue_comment>Title: [Finishes #88467502] fix jigsaw notches
username_0: 
instead of

(where notches are both smaller and not properly lined up).
This isn't necessarily the cleanest way to fix this, but it is the easiest, and I'd rather not spend a lot of time on it.
A while back, I moved the logic for drawing notches into connection. This was done so that we could toggle the notch drawing based on input type. Doing that, I forgot that Jigsaw set some of our notch drawing constants differently. If we had the eye tests in place, they would likely have caught this.
<issue_comment>username_1: Nice, I'll watch for the eyes test baseline change when we DTT
<issue_comment>username_1: LGTM |
<issue_start><issue_comment>Title: Handle misuse of kind
username_0: Have the generator detect the misuse of kind, generate current sources.
Fixes #63 & #92.
<issue_comment>username_1: Excellent! May I just suggest you to split commit 4f0604bc9c03b77c412aaba0cca376f26d0e03fd into two commits: a first one that just updates the generated files and a second one with the changes to the generator and the generated files. This would make the new `isKindValidForClassRegistry` methods added in the generated classes stand out.
<issue_comment>username_0: Will do, just threw this together sorta quickly to let you see it while testing things out, I'll reslice things in a moment here. |
<issue_start><issue_comment>Title: What is the equivalent of fetch in gitless?
username_0: Someone updated the remote git repository with a new branch, with git I'd do a git fetch and then I could git checkout that branch. How do I access that new branch with gitless? Thanks
<issue_comment>username_1: Let's say your remote is `origin` and the new branch is `foo`. Then, to create a local branch from a remote branch, you would do `gl branch -c foo -dp origin/foo`. This command is saying "create a local branch of name `foo` with a head equal to the head of the remote branch `origin/foo`". This will take care of downloading any necessary commits (i.e., doing a `git fetch` for you)
Let me know if it worked, thanks
<issue_comment>username_0: Works, thanks.
How do I get a list of all the branches in the remote repository; that may not exist locally?
<issue_comment>username_1: `gl branch -r` will list remote branches |
<issue_start><issue_comment>Title: esp8266 network run wlan.connect esp8266 restart !
username_0: The firmware version: MicroPython v1.6-312-gb4070ee-dirty on 2016-03-29; ESP module with ESP8266
When I run the following code, the esp8266-12f restarts:
```python
import network
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.isconnected()      # returns False
wlan.connect('essid', 'password')  # essid and password from router configuration
```
The esp8266-12f restarts every time.
How can I fix this issue? Thanks!
<issue_comment>username_1: Ack, we'll be looking what went wrong.
<issue_comment>username_1: Ok, this was worked around in master. We'll approach fixing it better soon.<issue_closed> |
<issue_start><issue_comment>Title: Implement Windows support (issue #53)
username_0: I tried to implement Windows support #53
* It works for me.
* I tested it on Win7 / Python 3.4.3
* All tests pass for me on Windows and two are skipped because are not relevant for Windows.
<issue_comment>username_1: Hi Martin and thanks for the effort you've put into this pull request!
It's funny, I was recently looking into AppVeyor for testing several of my Python projects on Windows (pip-accel being one of those projects) and here you are wanting to run pip-accel itself on AppVeyor just a few weeks later :-).
I'm planning to get this pull request merged this week. I've already started but won't finish it tonight and I thought I should let you know instead of leaving you waiting without feedback.
<issue_comment>username_1: For posterity: I'm working on setting up [AppVeyor CI for pip-accel](https://ci.appveyor.com/project/username_1/pip-accel/history) because I don't want to claim support for a platform I can't actually test, I don't have access to any Windows systems and I don't feel like paying for a Windows license just so I can test an open source project. Fortunately AppVeyor is free for open source projects which is pretty cool :-).
<issue_comment>username_0: @username_1 This is appveyor project where I use `pip-accel`: https://ci.appveyor.com/project/username_0/pyinstaller
<issue_comment>username_1: Hi again Martin,
After some intense Windows testing and debugging (last time I did that was quite a few years ago :-) I released pip-accel 0.33 to PyPI and GitHub. If you're interested in the additional changes I made (quite a few) you can take a look at pull request #61 (which included the changes from this pull request). If you encounter any further Windows issues feel free to open another issue.
Thanks for your contribution!
<issue_comment>username_0: @username_1 Thanks for integrating and cleaning up windows support. |
<issue_start><issue_comment>Title: [WIP] Make presenters work with the api
username_0: This is just a proof of concept of how can we make the presenters work with the API responses. What do you think @username_1 @GrahamCampbell ?
Also this fixes two bugs on this PR https://github.com/cachethq/Cachet/issues/625
<issue_comment>username_1: Looks great to me!
<issue_comment>username_0: @username_1 All done :) merge at will.
<issue_comment>username_1: Yes! :D
<issue_comment>username_1: :dancer: |
<issue_start><issue_comment>Title: Getting column sizing from other table
username_0: I'm trying to implement floatThead and make it IE8 friendly, using the getSizingRow. The problem is, my table initially has all of its rows colspan'd, so it has nothing to work from.
Is it possible to reference another table's row (possibly hidden) which has the same column sizing? Or is there another way? |
<issue_start><issue_comment>Title: Fix the namespacing of the gem
username_0: There was inconsistent namespacing throughout the gem, this fixes that issue.
Pretty much everything should now be namespaced under `Spree::Retail`.
<issue_comment>username_0: @bryanmtl @username_2 @username_1
this is reaaaddyyy to rummbllleee!
<issue_comment>username_1: Love it!
<issue_comment>username_2: Will check it out, thanks for the refactor @username_0
<issue_comment>username_2: @username_0 this really should be golden besides the merge conflict. |
<issue_start><issue_comment>Title: Missing license and copyright information
username_0: You do not have any copyright or license information available for the tool. The tool is of little use without this information, since we are not allowed to distribute it.<issue_closed>
<issue_comment>username_1: https://github.com/UniconLabs/java-keystore-ssl-test/commit/3e9f3815bbdc1b5a35ed3c2310ea448131d8f789 |
<issue_start><issue_comment>Title: please advice
username_0: Hi Anseki,
this library is fantastic and it inspired me to create a similar thing on my own but customized to fit some restrictions I have on work,
just looking quickly through your source code, I see you are using the term "sockets" for "top middle" point, "left middle" point etc... of the rectangle
in my code I have the same "sockets" (4 sockets for each rectangle) and I loop through each one in rectangle1 and compare the distance to each one in rectangle2 to find the shortest
so in total I have 4x4=16 iterations to decide which pair is the shortest
Is that what you are doing with leader-line, or are you using some (better) method?
Thank you
<issue_comment>username_1: Hi @username_0, thank you for the comment.
If you want to reduce iterations, for example, you can omit the left-socket when the target is positioned on the right side, and you can omit the top-socket when the target is positioned on the bottom side. |
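A language-agnostic sketch of that pruning idea (written in Python purely for illustration; leader-line itself is JavaScript, and all names plus the `(x, y, width, height)` rect format are assumptions, not the library's actual code). Dropping the sockets that face away from the other rectangle cuts the 4x4=16 comparisons down to 2x2=4:

```python
# All names and the (x, y, width, height) rect format are assumptions
# for illustration; this is not leader-line's implementation.
def sockets(rect):
    x, y, w, h = rect
    return {"top": (x + w / 2, y), "bottom": (x + w / 2, y + h),
            "left": (x, y + h / 2), "right": (x + w, y + h / 2)}

def candidate_sockets(rect, other):
    """Keep only the two sockets facing toward the other rectangle."""
    x, y, w, h = rect
    cx, cy = x + w / 2, y + h / 2
    ox, oy = other[0] + other[2] / 2, other[1] + other[3] / 2
    keep = {"right" if ox >= cx else "left",
            "bottom" if oy >= cy else "top"}
    pts = sockets(rect)
    return [pts[name] for name in keep]

def shortest_pair(a, b):
    """Closest pair among the 2 x 2 candidate sockets."""
    pairs = ((pa, pb)
             for pa in candidate_sockets(a, b)
             for pb in candidate_sockets(b, a))
    return min(pairs, key=lambda p: (p[0][0] - p[1][0]) ** 2 +
                                    (p[0][1] - p[1][1]) ** 2)
```

For two side-by-side rectangles this picks the facing right/left sockets, as you'd expect.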
<issue_start><issue_comment>Title: Fix diff output of test runs for Debian slave interfaces
username_0: ### What does this PR do?
The diff shown after running `salt '*' state.apply test=True` was reporting changes in `network.managed` for slave interfaces on Debian, even though nothing changed in the states/pillars and nothing was going to be changed by `salt '*' state.apply`.
### Previous Behavior
Running `salt '*' state.apply test=True` with a state like this:
```
ens2f1:
network.managed:
- type: slave
- master: bond0
```
showed changes every time:
```
ID: ens2f1
Function: network.managed
Result: None
Comment: Interface ens2f1 is set to be updated:
---
+++
@@ -1,4 +1,3 @@
auto ens2f1
-iface ens2f1 inet manual
- bond-master bond0
+iface ens2f1 inet static
Started: 09:07:41.719716
Duration: 1.91 ms
Changes:
```
even though nothing was going to be changed by `salt '*' state.apply`.
### New Behavior
The test run now correctly shows no changes when `salt '*' state.apply` is not going to change anything.
### Tests written?
No
<issue_comment>username_0: Go Go Jenkins!
<issue_comment>username_1: Go Go Jenkins!
<issue_comment>username_1: Thanks for submitting this fix @username_0. This looks good to me, but I want to make sure pylint runs completely. We'll get this in if Pylint comes back clean here. :) |
<issue_start><issue_comment>Title: Appliance repo shouldn't be in www root.
username_0: manage-appliance repo shouldn't be in www root.
Related PRs:
* https://github.com/ManageIQ/manageiq-appliance-build/pull/33 [ clone repo in new location ]
* https://github.com/ManageIQ/manageiq-appliance/pull/12 [set environment variable for new location]
* https://github.com/ManageIQ/manageiq/pull/4105 [use environment variable above]
<issue_comment>username_0: @simaishi @carbonin @Fryguy please review.
<issue_comment>username_1: I like this change, but should we create a symbolic link?
my muscle memory and many docs say `/var/www/miq/...`
<issue_comment>username_2: I like it too, but it feels like we're not all there. I vote for moving both rails app and appliance files there.
/opt/manageiq/manageiq/
/opt/manageiq/manageiq-appliance/
And the appropriate links from /var/www/miq/ to minimize field shock, and updates to restorecon's.
Thoughts ?
<issue_comment>username_0: We chickened out. @Fryguy, @simaishi, and I thought that moving manageiq out of vmdb was too risky.
There are a few things that use realpath code in ruby, like bundler binstubs, so a symlink was not really tried. I guess we could try it, I'm not sure how much work it was. manageiq-appliance doesn't have the same risk since we just extracted it a few weeks ago.
<issue_comment>username_0: Also, keep in mind that ruby's require will require the same file twice (once via symlink, once via real path) so you could get already initialized constant warnings and worse if we traverse both real and symlink to require a file.
<issue_comment>username_0: See: https://github.com/ManageIQ/manageiq/pull/3451
<issue_comment>username_0: I hope to remove the symlinks for old lib and old system soon too.
<issue_comment>username_0: @username_1 @username_2 what do you think? I'm concerned with using symlinks due to its limitations with require, and code really needs to learn to work without symlinks. With that said, we don't intend to move MIQ code out of /var/www/miq/vmdb yet. This is only for manageiq-appliance code, the former system directory.
<issue_comment>username_1: we'll need to update selinux directories if we move. symlink won't protect us
:+1: moving the appliance code. I don't think our code links into there (didn't check) |
<issue_start><issue_comment>Title: Packagist.org adding NON-JSON text to the Composer JSON output.
username_0: Running the following command inserts non-JSON text into the output from Packagist.org. This does not happen with private repos.
When I run this command:
```
$ php composer.phar show aws/aws-sdk-php --outdated --format=json
```
I get the following output:
```
Info from https://repo.packagist.org: #StandWithUkraine
{
"name": "aws/aws-sdk-php",
"description": "AWS SDK for PHP - Use Amazon Web Services in your PHP project",
"keywords": [
"amazon",
"aws",
"cloud",
"dynamodb",
"ec2",
"glacier",
"s3",
"sdk"
],
"type": "library",
"homepage": "http://aws.amazon.com/sdkforphp",
"names": [
"aws/aws-sdk-php"
],
"versions": [
"3.210.0"
],
"licenses": [
{
"name": "Apache License 2.0",
"osi": "Apache-2.0",
"url": "https://spdx.org/licenses/Apache-2.0.html#licenseText"
}
],
"latest": "3.215.0",
"source": {
"type": "git",
"url": "https://github.com/aws/aws-sdk-php.git",
"reference": "b5cd025eb804a023f4d9fe7a4f6de437b5910a13"
},
"dist": {
"type": "zip",
"url": "https://api.github.com/repos/aws/aws-sdk-php/zipball/b5cd025eb804a023f4d9fe7a4f6de437b5910a13",
"reference": "b5cd025eb804a023f4d9fe7a4f6de437b5910a13"
},
"path": "/var/www/narware-vendor/vendor/aws/aws-sdk-php",
"suggests": {
"aws/aws-php-sns-message-validator": "To validate incoming SNS notifications",
"doctrine/cache": "To use the DoctrineCacheAdapter",
"ext-curl": "To send requests using cURL",
"ext-openssl": "Allows working with CloudFront private distributions and verifying received SNS messages",
"ext-sockets": "To use client-side monitoring"
},
"support": {
"forum": "https://forums.aws.amazon.com/forum.jspa?forumID=80",
"issues": "https://github.com/aws/aws-sdk-php/issues",
"source": "https://github.com/aws/aws-sdk-php/tree/3.210.0"
},
"autoload": {
"psr-4": {
"Aws\\": "src/"
}
},
"requires": {
"aws/aws-crt-php": "^1.0.2",
"ext-json": "*",
"ext-pcre": "*",
"ext-simplexml": "*",
"guzzlehttp/guzzle": "^5.3.3|^6.2.1|^7.0",
"guzzlehttp/promises": "^1.4.0",
"guzzlehttp/psr7": "^1.7.0|^2.0",
"mtdowling/jmespath.php": "^2.6",
"php": ">=5.5"
},
[Truncated]
NOT HAVE non-JSON text added to the output!
```
Info from https://repo.packagist.org: #StandWithUkraine <--- THIS IS BAD JSON
{
"name": "aws/aws-sdk-php",
"description": "AWS SDK for PHP - Use Amazon Web Services in your PHP project",
"keywords": [
"amazon",
"aws",
"cloud",
"dynamodb",
"ec2",
"glacier",
"s3",
"sdk"
],
...
```
<issue_comment>username_1: maybe we could have the info message attach itself to the first package
```
Info from https://repo.packagist.org: #StandWithUkraine <--- THIS IS BAD JSON
{
"name": "aws/aws-sdk-php",
```
becomes
```
{
"meta-not-related-to-package-from-composer": "#StandWithUkraine"
"name": "aws/aws-sdk-php",
```
<issue_comment>username_2: The info message is sent to STDERR, not to STDOUT. So it does not mess with the json.
<issue_comment>username_0: Why is the message only error-ing out with Packagist.org and not GitHub?
<issue_comment>username_3: If you only use STDOUT output to parse json you won't get any problem. You can also run `composer.phar show aws/aws-sdk-php --outdated --format=json 2>/dev/null` to suppress STDERR in this case.<issue_closed>
<issue_comment>username_0: Thanks for the tip, username_3. That works, BUT I shouldn't have to do this.
<issue_comment>username_3: You shouldn't have to thanks to the separation of STDOUT/STDERR by default. If you don't have them separated it's most likely due to something else you do to merge all output..
<issue_comment>username_0: It's just the normal output from PHP's `shell_exec()` command. Normally if there is no error, it's empty.
<issue_comment>username_2: if you expect the output to be valid json when using `--format=json` but you merge STDOUT and STDERR together, then the only way this can break is if something is written to STDERR, in which case there is no need to merge them...
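The separation being described is easy to verify in any language. A small Python sketch, used here only because it makes a self-contained demo (PHP's `shell_exec()` captures only STDOUT, so `proc_open()` would be the PHP equivalent for capturing both streams separately):

```python
import subprocess
import sys

# Child process that writes a notice to stderr and JSON to stdout,
# mimicking Composer's "Info from https://repo.packagist.org: ..." line.
child = (
    "import sys\n"
    "sys.stderr.write('Info from repo: notice\\n')\n"
    "sys.stdout.write('{\"ok\": true}')\n"
)
result = subprocess.run(
    [sys.executable, "-c", child],
    capture_output=True, text=True,
)
# When the two streams are captured separately, stdout stays valid
# JSON and the notice only appears on stderr.
```

Merging the streams (the shell's `2>&1`) is what corrupts the JSON, not the tool's output itself.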
<issue_comment>username_3: See also https://unix.stackexchange.com/a/336989
<issue_comment>username_0: I've already made my point. STDERR is for errors, and within that sense, other diagnostics for sure.
POSIX is IN, OUT, ERR. Not IN, OUT, WHATEVER.
With all due respect, you're just making stuff up here. It's a BAD practice. Yes, people do all kinds of crap in programming, this is a prime example. You can post whatever links you like, but if there's no error, leave the STDERR blank. Put your fun little message in the payload as properties we can just ignore but DO NOT break the code.
If I send a flag for --verbose or --diag sure, shove all the junk you want into STDERR, but outside of that we should not be seeing anything in STDERR.
I'm done, gentlemen. Please stop shoving crap into STDERR when there is no error. Have a nice day. |
<issue_start><issue_comment>Title: Swift 3.0 branch SR1951 Fix 2
username_0: <!-- What's in this pull request? -->
* Explanation: IRGen failed to update one of its data structures that allows looking up archetype access types. This can lead to crashes when the data structure is consulted.
* Scope: This occurred in a GitHub project. It can occur with generic types with same-type constraints.
* Origination: I think this was introduced within the swift 3 time frame.
* Reviewed by: John McCall, Slava Pestov
* Testing: The project that crashes, compiler unit tests
* Directions for QE: This bug is tested as part of the compiler unit tests.
<!-- If this pull request resolves any bugs in the Swift bug tracker, provide a link: -->
Resolves one bug of [SR-1951](https://bugs.swift.org/browse/SR-1951).
<issue_comment>username_0: @swift-ci Please smoke test OS X Platform
<issue_comment>username_0: @swift-ci Please test |
<issue_start><issue_comment>Title: The appengine code is not functional with GAE_USE_SOCKETS_HTTPLIB with Google IP ranges
username_0: (This was moved from the old repo)
Currently, there's some App Engine code in place, but it will only work when GAE_USE_SOCKETS_HTTPLIB is not set, because otherwise the regular httplib will be returned.
This is a problem when trying to use googleapiclient from App Engine with that option set. The workaround, for us, is to monkey-patch httplib2.SCHEME_TO_CONNECTION with the versions of the AppEngineHttp[s]Connection objects that use the urlfetch httplib, and then patch httplib2.Response to change httplib.HTTPResponse to the correct instance.
This has the drawback of being global, and I also wanted some feedback on whether you think this is a problem and, if so, how it should be fixed. My best idea so far is to allow passing an httplib implementation to httplib2.Http so that you can select per instance what is going to be used.
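As a rough sketch of that per-instance idea (all names below are hypothetical stand-ins, not httplib2's real constructor signature): instead of mutating the module-level `SCHEME_TO_CONNECTION` table for every caller in the process, the connection classes could be injected when the client object is built.

```python
# Hypothetical sketch only: `Http` and `scheme_to_connection` here are
# illustrative, not httplib2's actual API.
DEFAULT_SCHEME_TO_CONNECTION = {
    "http": "HTTPConnectionWithTimeout",
    "https": "HTTPSConnectionWithTimeout",
}

class Http:
    def __init__(self, scheme_to_connection=None):
        # Per-instance table: an App Engine caller could inject the
        # urlfetch-backed connection classes here, leaving other
        # instances (and the shared module-level table) untouched.
        self.scheme_to_connection = dict(
            scheme_to_connection or DEFAULT_SCHEME_TO_CONNECTION
        )

    def connection_for(self, scheme):
        return self.scheme_to_connection[scheme]
```

The difference from the monkey-patch is scope: an injected table keeps the choice local to one client rather than global to the process.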
<issue_comment>username_1: It's a tough question to answer -- whether to obey the instance environment setting or continue to use App Engine's URL Fetch service.
- https://cloud.google.com/appengine/docs/python/issue-requests
- https://cloud.google.com/appengine/docs/python/sockets/
<issue_comment>username_2: @httplib2/maintainers Does anyone know if this is still relevant? |
<issue_start><issue_comment>Title: Internal Job IDs for Blender Jobs not unique
username_0: When building multiple blender jobs, each job is not receiving a unique ID according to log output.
<issue_comment>username_0: When the blender publisher was incorrectly handling multiple frames (i.e., creating multiple jobs instead of a single multi-frame job), it was reusing the same Heedra::Blender::Job instantiation, which assigned the same internal Job ID to every request.<issue_closed>
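The bug pattern described above can be sketched in a few lines (a Python stand-in for `Heedra::Blender::Job`; the class and field names are illustrative): an ID assigned at construction time is only unique if a fresh object is built per job.

```python
import uuid

class BlenderJob:
    """Illustrative stand-in: the ID is assigned once, at construction."""
    def __init__(self):
        self.job_id = uuid.uuid4().hex

# Reusing one instantiation (the old publisher behaviour) duplicates the ID:
shared = BlenderJob()
reused_ids = [shared.job_id for _ in range(3)]

# Building a fresh job per request (the fix) yields unique IDs:
fresh_ids = [BlenderJob().job_id for _ in range(3)]
```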
<issue_start><issue_comment>Title: New registry header
username_0: Changed o-header with o-header-services
<issue_comment>username_1: Looks good, you can probably remove the header styles that are in a separate partial too.
<issue_comment>username_1: π
<issue_comment>username_1: Just noticed that it's also missing the JavaScript include for the toggle. In `public/main.js` you should be able to see how to import it. You can probably delete the old header JS partials too.
<issue_comment>username_1: Has been merged with #28 |
<issue_start><issue_comment>Title: Cannot read property 'length' of undefined on arrow function which does not include React
username_0: code
```
export default () => new Promise(resolve => {
window.addEventListener('XXX', () => {
resolve(window.xxx);
});
});
```
stack trace
```
ERROR in ./src/initializeApp2.js
Module build failed: TypeError: /project/src/initializeApp2.js: Cannot read property 'length' of undefined
at doesReturnJSX (/project/node_modules/babel-plugin-transform-react-stateless-component-name/lib/index.js:33:12)
at PluginPass.ExportDefaultDeclaration (/project/node_modules/babel-plugin-transform-react-stateless-component-name/lib/index.js:53:15)
at newFn (/project/node_modules/babel-traverse/lib/visitors.js:276:21)
at NodePath._call (/project/node_modules/babel-traverse/lib/path/context.js:76:18)
at NodePath.call (/project/node_modules/babel-traverse/lib/path/context.js:48:17)
at NodePath.visit (/project/node_modules/babel-traverse/lib/path/context.js:105:12)
at TraversalContext.visitQueue (/project/node_modules/babel-traverse/lib/context.js:150:16)
at TraversalContext.visitMultiple (/project/node_modules/babel-traverse/lib/context.js:103:17)
at TraversalContext.visit (/project/node_modules/babel-traverse/lib/context.js:190:19)
at Function.traverse.node (/project/node_modules/babel-traverse/lib/index.js:114:17)
at NodePath.visit (/project/node_modules/babel-traverse/lib/path/context.js:115:19)
at TraversalContext.visitQueue (/project/node_modules/babel-traverse/lib/context.js:150:16)
at TraversalContext.visitSingle (/project/node_modules/babel-traverse/lib/context.js:108:19)
at TraversalContext.visit (/project/node_modules/babel-traverse/lib/context.js:192:19)
at Function.traverse.node (/project/node_modules/babel-traverse/lib/index.js:114:17)
at traverse (/project/node_modules/babel-traverse/lib/index.js:79:12)
@ ./src/entry.js 11:26-57
```
environment
```
$ npm version
{ 'project': '1.0.0',
npm: '4.0.2',
ares: '1.10.1-DEV',
cldr: '30.0.2',
http_parser: '2.7.0',
icu: '58.1',
modules: '51',
node: '7.1.0',
openssl: '1.0.2j',
tz: '2016g',
unicode: '9.0',
uv: '1.10.0',
v8: '5.4.500.36',
zlib: '1.2.8' }
```
```
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
└── [email protected]
<issue_comment>username_1: Thanks for the report! I'll get a new version released with the fix included. |
<issue_start><issue_comment>Title: andCardinality and orCardinality could be faster/use less memory
username_0: Currently andCardinaly and orCardinality are implemented in such a way that intermediate containers are constructed and immediately discarded. This could be optimized significantly.
<issue_comment>username_0: First point above (Extend PR #57 to the "buffer" package.) has been resolved as of 0.6.11.
Issue remaining: address the orCardinality functions.<issue_closed>
<issue_comment>username_0: Resolved as per this commit:
https://github.com/RoaringBitmap/RoaringBitmap/commit/35864b42a64b726f78ed80c1f5bd024d2e97007e |
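The optimization amounts to counting during the merge instead of materializing a result bitmap, plus inclusion-exclusion for the union. A sketch over sorted integer lists (stand-ins for Roaring containers; this is not the library's actual implementation):

```python
def and_cardinality(a, b):
    """Count the intersection size of two sorted lists with a merge
    walk; no intermediate container is ever built."""
    i = j = count = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            count += 1
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return count

def or_cardinality(a, b):
    """Inclusion-exclusion: |A or B| = |A| + |B| - |A and B|."""
    return len(a) + len(b) - and_cardinality(a, b)
```

The same idea applies container-by-container inside a Roaring bitmap: each pairwise step returns only a count, so the temporary containers the issue complains about are never allocated.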