<issue_start><issue_comment>Title: NSNotificationQueue
username_0: I always thought that the sentence I added and the one I commented out mean the same thing, but the latter makes for better encapsulation. Do they?
<issue_comment>username_1: I think it's important that the notification is broadcast immediately. |
<issue_start><issue_comment>Title: Ability to use 'wrapper' property with 'fieldGroup's
username_0: The way my form is set up, this would be a huge help, as I plan on wrapping bootstrap accordions around the 'fieldGroup's I have.
<issue_comment>username_1: Can you provide a JSBin example (as explained in [CONTRIBUTING.md](https://github.com/formly-js/angular-formly/blob/master/CONTRIBUTING.md#reporting-bugs--requesting-features)) while using [template-wrappers](http://angular-formly.com/#/example/advanced/template-wrappers) along with [fieldGroups](http://docs.angular-formly.com/v6.4.0/docs/field-groups)?
<issue_comment>username_2: @username_0 You can achieve the same effect by having a type that has a template that contains a formly-form. This example shows how to achieve something similar http://angular-formly.com/#/example/advanced/repeating-section.
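A minimal, untested sketch of that approach (the type name, the `accordion` directives from UI Bootstrap, and the use of `options.data.fields` are illustrative choices, not an existing formly feature):
```js
app.config(function(formlyConfigProvider) {
  // Hypothetical custom type whose template nests another formly-form,
  // so a whole group of fields can be wrapped in an accordion.
  formlyConfigProvider.setType({
    name: 'accordionGroup',
    template: [
      '<accordion>',
      '  <accordion-group heading="{{options.templateOptions.label}}">',
      '    <formly-form model="model" fields="options.data.fields"></formly-form>',
      '  </accordion-group>',
      '</accordion>'
    ].join('')
  });
});
```
A field of this type would then carry its children in `data.fields` instead of a `fieldGroup`.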
<issue_comment>username_0: Here's an example of what I'm trying to accomplish:
http://jsbin.com/gumodedito/1/edit?html,js,output
I apologize for not initially creating one, as jsbin is blocked on my company's network.
@username_2 After seeing the jsbin above, does making an additional formly-form still make sense? I already need to create several of them for other purposes as it is... so the simpler the better. If it's the only way to do it at this point, could you shoot me a quick example of how I'd use accordions per new type (wrapping the fields within the type with accordion-group)? That would be helpful.
Thank you,
Jeff
<issue_comment>username_2: At this moment the feature doesn't exist. There have been several requests for it, but it has been put off because the custom-type method works in the interim. I will try to come up with something. Essentially, fieldGroup is an API that sits outside the lifecycle of a field and as such does not have many of the features you would find in the field API.
https://github.com/formly-js/angular-formly/blob/0e400b65b86f02840ba045efbe5c21162c6b4c33/src/directives/formly-field.js#L36-L39<issue_closed>
<issue_comment>username_0: So this is essentially what I was trying to accomplish:
http://jsbin.com/butotacabi/1/edit?html,js,output
Thanks @username_2 for the reference to the 'repeating section' example which allowed me to work backwards to reach the conclusion linked above. I'm not sure if there was an easier way to apply a unique header title per section, but it's all that came to mind at the time.
<issue_comment>username_2: @username_0 Looks great. |
<issue_start><issue_comment>Title: 34_callback.rnn: create docments
username_0: Add docments to [34_callback.rnn](https://github.com/muellerzr/fastai-docment-sprint/blob/master/nbs/34_callback.rnn.ipynb).
See the style guide and contribution guide for more details (links will be added when created).
This requires moderate familiarity, and will require an explanation of AR and TAR regularization from this [paper](https://arxiv.org/pdf/1708.01009.pdf). This notebook defines callbacks for resetting and regularizing RNNs.
Docments progress (only one section):
- [ ] Callback for RNN training |
<issue_start><issue_comment>Title: question: Is there any reason behind SNS DLQ message have different format than original AWS?
username_0: ### Is there an existing issue for this?
- [X] I have searched the existing issues and read the documentation
### Question
I'm trying to implement a DLQ for SNS locally. The problem is that on LocalStack, DLQ messages coming from SNS have a different format than what you would expect in AWS.
When I tested locally I received the message in the following format:
"{\"Action\": [\"Publish\"], \"Message\": [\"(The original SNS message body here)\"], \"TopicArn\": [\"the topic arn in localstack\"], \"Version\": [\"2010-03-31\"]}"
I'm using LocalStack 0.14.0 with docker-compose.
My docker-compose file looks like this:
```yaml
aws:
image: localstack/localstack:0.14.0
restart: always
volumes:
- ./docker-mounts/localstack:/docker-entrypoint-initaws.d
environment:
- SERVICES=sqs,s3,ses,sns
- DEFAULT_REGION=eu-west-1
- HOSTNAME_EXTERNAL=aws.localtest.local
- AWS_ACCESS_KEY_ID=foo
- AWS_SECRET_ACCESS_KEY=bar
- AWS_DEFAULT_REGION=eu-west-1
```
### Anything else?
_No response_ |
<issue_start><issue_comment>Title: standalone: true doesn't work
username_0: throws Fatal error: Object true has no method 'toLowerCase'
```
browserify:
  all:
    options:
      standalone: true
    files:
      'js/<%= pkg.name %>.browser.js': 'js/<%= pkg.name %>.js'
```
<issue_comment>username_1: I found this issue when searching to solve the same problem.
This is what worked for me:
```
browserify:
  src: ['<%= pkg.name %>.js']
  dest: './browser/dist/<%= pkg.name %>.standalone.js'
  options:
    browserifyOptions:
      standalone: '<%= pkg.name %>'
``` |
<issue_start><issue_comment>Title: Invoke-DbaDbDataMasking - Poor performance
username_0: ### Verified issue does not already exist?
Yes
### What error did you receive?
When HasUniqueIndex = true is set in the data masking config file, it triggers the creation of a temp table in which the values for the columns to be masked are generated. For a table of 200,000 rows it took 5 days to generate firstname and lastname.
### Steps to Reproduce
```powershell
PS D:\Applications\PowerShell\Scripts\DBATOOLS_commands> Invoke-DbaDbDataMasking -SqlInstance SASSOAPPTEST02 -Database testdb -FilePath D:\Applications\PowerShell\_DataMasking\_Data2\SASSOAPPTEST01.testdb.DataMaskingConfig_Person_all.json -WhatIf
What if: Performing the operation "Masking 171423 row(s) for column [lastname, firstname] in testdb on target "SASSOAPPTEST02".
```
### Are you running the latest release?
Yes
### Other details or mentions
I have chatted with @sqlstad about this issue
### What PowerShell host was used when producing this error
Windows PowerShell (powershell.exe)
### PowerShell Host Version
```
Name                           Value
----                           -----
PSVersion                      5.1.17763.2183
PSEdition                      Desktop
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
BuildVersion                   10.0.17763.2183
CLRVersion                     4.0.30319.42000
WSManStackVersion              3.0
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1
```
### SQL Server Edition and Build number
SQL 2019 Dev Edition:
```
Microsoft SQL Server 2019 (RTM-CU13) (KB5005679) - 15.0.4178.1 (X64)
Sep 23 2021 16:47:49
Copyright (C) 2019 Microsoft Corporation
Developer Edition (64-bit) on Windows Server 2019 Standard 10.0 <X64> (Build 17763: ) (Hypervisor)
```
### .NET Framework Version
PS D:\Applications\PowerShell\Script_Center> .\DetermineNetframeworkVersion.ps1
.NET Framework 4.8
<issue_comment>username_1: @username_2 - can you have a look at this? Thanks.
<issue_comment>username_2: It's on my radar, but I haven't had a chance yet. |
<issue_start><issue_comment>Title: Failure response time not displaying in UI.
username_0: `locust.events.request_failure` takes `response_time` but does nothing with it.
In fact, the UI shows 0's even if a response time is reported with the `request_failure` event:

<issue_comment>username_1: I'm also having this issue. It makes me wonder if the failures that appear on the same row as some successful requests are being treated as if they have 0 response time. This would drastically skew the median, average, and min response times. Is there a fix for this?
<issue_comment>username_2: Yeah, I think we should include the response time of failures as long as we get an HTTP response from the server, but not for other failures such as socket, dns, or connection errors.
To achieve this we should make it so that *if* response_time is provided by the failure event, it'll be included in the stats. And then in the HTTP client make sure that we only provide response_time to the failure event when we've gotten an actual HTTP response.
<issue_comment>username_3: @username_2 Yeah I think that makes a lot of sense
<issue_comment>username_4: Hi. Any chance that this will be resolved? I am using the following code to (manually) trigger request failures:
```python
from locust.events import request_failure  # EventHook provided by locust 0.x

request_failure.fire(
request_type='WebSocket Recv',
name='app/api/async',
response_time=666,
exception="Something failed",
)
```
but it still displays 0 instead of 666 as the response time :(
<issue_comment>username_5: 
Can't reproduce in current version.
<issue_comment>username_5: Maybe this was fixed a long time ago...<issue_closed> |
<issue_start><issue_comment>Title: Improve wallet documentation code sample
username_0: https://bitcoin-s.org/docs/next/wallet/wallet#example
We have this doozy in there
```scala
// once this future completes, we have an initialized
// wallet
val wallet = Wallet(keyManager, new NodeApi {
override def broadcastTransaction(tx: Transaction): Future[Unit] = Future.successful(())
override def downloadBlocks(blockHashes: Vector[DoubleSha256Digest]): Future[Unit] = Future.successful(())
}, new ChainQueryApi {
override def epochSecondToBlockHeight(time: Long): Future[Int] = Future.successful(0)
override def getBlockHeight(blockHash: DoubleSha256DigestBE): Future[Option[Int]] = Future.successful(None)
override def getBestBlockHash(): Future[DoubleSha256DigestBE] = Future.successful(DoubleSha256DigestBE.empty)
override def getNumberOfConfirmations(blockHashOpt: DoubleSha256DigestBE): Future[Option[Int]] = Future.successful(None)
override def getFilterCount: Future[Int] = Future.successful(0)
override def getHeightByBlockStamp(blockStamp: BlockStamp): Future[Int] = Future.successful(0)
override def getFiltersBetweenHeights(startHeight: Int, endHeight: Int): Future[Vector[FilterResponse]] = Future.successful(Vector.empty)
}, ConstantFeeRateProvider(SatoshisPerVirtualByte.one), creationTime = Instant.now)
val walletF: Future[WalletApi] = configF.flatMap { _ =>
Wallet.initialize(wallet,bip39PasswordOpt)
}
```<issue_closed> |
<issue_start><issue_comment>Title: 🛑 Monika 网盘 is down
username_0: In [`d2e2048`](https://github.com/username_0/status/commit/d2e20484fc93f7a01909d9f57f507b965b018908), Monika 网盘 (https://cloud.monika.love) was **down**:
- HTTP code: 0
- Response time: 0 ms
<issue_comment>username_0: **Resolved:** Monika 下载服务 is back up in [`54352b3`](https://github.com/username_0/status/commit/54352b3bc9de95723110523cfc512857323fa5a9).<issue_closed> |
<issue_start><issue_comment>Title: Implement totals API refactor changes to front end Flask app
username_0: When @username_2 reworks and breaks up the totals endpoint, implement changes in the front end Flask app.
Dependency:
Alison's API refactor for totals
<issue_comment>username_0: https://github.com/18F/openFEC/pull/449
<issue_comment>username_0: https://github.com/18F/openFEC-web-app/pull/65
<issue_comment>username_1: Pinging @username_2 or @LindsayYoung to take a look at this.
<issue_comment>username_2: On it.
<issue_comment>username_2: This was merged. Closing.<issue_closed> |
<issue_start><issue_comment>Title: Explicitly enable inline-templates
username_0: Was seeing "Application Error" a bunch on Heroku. Tried to run the app myself by using `Rack::Server.start(config: 'app.rb')` on my machine, and haml kept looking for actual file templates. This change explicitly tells Sinatra "yo, the templates are in this file, check 'em out"
<issue_comment>username_1: Thanks.
But I think this is a duplicate of #75. |
<issue_start><issue_comment>Title: Enabling pre-load of the Registry
username_0: Object uses an after BUILD modifier to trigger the population of the
registry and associated lookups. Use with a pre-fork server to
avoid nasty running server process startup costs caused by reaping
processes.
<issue_comment>username_0: Please check this
<issue_comment>username_1: Interesting. I may add a patch to load the Compara basic objects too
<issue_comment>username_0: Ok sounds good. I assume you're talking about GenomeDBAdaptor & MLSS?
<issue_comment>username_1: I am
<issue_comment>username_2: should those changes also go into the production ensembl_rest.conf.default? |
<issue_start><issue_comment>Title: Sqlite import sql script is very slow
username_0: ### Feature Description
When I use the command `sqlite3 gitea.db < gitea-db.sql` to restore the SQLite database from gitea-db.sql, it takes about 5-6 minutes, which is unbearably slow, and the data I recovered was only a few hundred records.
Here's my solution: add two sets of commands at the beginning and end of the gitea-db.sql file:
```sql
PRAGMA journal_mode = MEMORY;
PRAGMA synchronous = OFF;
PRAGMA foreign_keys = OFF;
PRAGMA ignore_check_constraints = OFF;
PRAGMA auto_vacuum = NONE;
PRAGMA secure_delete = OFF;
BEGIN TRANSACTION;

-- <gitea-db.sql CONTENT>

COMMIT;
PRAGMA ignore_check_constraints = ON;
PRAGMA foreign_keys = ON;
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
```
After the above modification, the same command runs in less than 1 second, which is very fast.
I think the very slow initialization when installing Gitea with SQLite has the same root cause.
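A hedged sketch of applying this without editing the dump by hand (file names are hypothetical):
```sh
# Concatenate the pragma prologue, the original dump, and the epilogue,
# then import the combined script in one pass.
cat pragmas-pre.sql gitea-db.sql pragmas-post.sql > gitea-db-fast.sql
sqlite3 gitea.db < gitea-db-fast.sql
```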
### Screenshots
_No response_ |
<issue_start><issue_comment>Title: Master server crashed
username_0: As the title suggests, one of the master servers crashed; I have three, and failover happened as expected, which is good :-)
Here is the log of the crash:
```
Mar 21 21:49:49 jeannie weed[998710]: I0321 21:49:49 98710 topology_event_handling.go:65] Removing Volume 33 from the dead volume server 192.168.1.125:9380
Mar 21 21:49:49 jeannie weed[998710]: I0321 21:49:49 98710 topology_event_handling.go:65] Removing Volume 16 from the dead volume server 192.168.1.125:9380
Mar 21 21:49:49 jeannie weed[998710]: I0321 21:49:49 98710 topology_event_handling.go:65] Removing Volume 139 from the dead volume server 192.168.1.125:9380
Mar 21 21:49:49 jeannie weed[998710]: I0321 21:49:49 98710 master_grpc_server.go:29] unregister disconnected volume server 192.168.1.125:9380
Mar 21 21:49:49 jeannie weed[998710]: panic: runtime error: invalid memory address or nil pointer dereference
Mar 21 21:49:49 jeannie weed[998710]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x70 pc=0x18d67a6]
Mar 21 21:49:49 jeannie weed[998710]: goroutine 499953 [running]:
Mar 21 21:49:49 jeannie weed[998710]: github.com/username_1/seaweedfs/weed/topology.(*DataNode).GetDataCenter(0xc0016a6640, 0xc0016d2df0)
Mar 21 21:49:49 jeannie weed[998710]: /home/travis/gopath/src/github.com/username_1/seaweedfs/weed/topology/data_node.go:180 +0x26
Mar 21 21:49:49 jeannie weed[998710]: github.com/username_1/seaweedfs/weed/server.(*MasterServer).SendHeartbeat(0xc00051f200, 0x2629cd8, 0xc0010400e0, 0x0, 0x0)
Mar 21 21:49:49 jeannie weed[998710]: /home/travis/gopath/src/github.com/username_1/seaweedfs/weed/server/master_grpc_server.go:86 +0x12a5
Mar 21 21:49:49 jeannie weed[998710]: github.com/username_1/seaweedfs/weed/pb/master_pb._Seaweed_SendHeartbeat_Handler(0x21aa420, 0xc00051f200, 0x2622868, 0xc00013e300, 0x35168e0, 0xc0011fa000)
Mar 21 21:49:49 jeannie weed[998710]: /home/travis/gopath/src/github.com/username_1/seaweedfs/weed/pb/master_pb/master.pb.go:4341 +0xad
Mar 21 21:49:49 jeannie weed[998710]: google.golang.org/grpc.(*Server).processStreamingRPC(0xc000496000, 0x262c7b8, 0xc000b3c180, 0xc0011fa000, 0xc00034da10, 0x34c1980, 0x0, 0x0, 0x0)
Mar 21 21:49:49 jeannie weed[998710]: /home/travis/gopath/pkg/mod/google.golang.org/[email protected]/server.go:1329 +0xcd8
Mar 21 21:49:49 jeannie weed[998710]: google.golang.org/grpc.(*Server).handleStream(0xc000496000, 0x262c7b8, 0xc000b3c180, 0xc0011fa000, 0x0)
Mar 21 21:49:49 jeannie weed[998710]: /home/travis/gopath/pkg/mod/google.golang.org/[email protected]/server.go:1409 +0xc68
Mar 21 21:49:49 jeannie weed[998710]: google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc00070c1e0, 0xc000496000, 0x262c7b8, 0xc000b3c180, 0xc0011fa000)
Mar 21 21:49:49 jeannie weed[998710]: /home/travis/gopath/pkg/mod/google.golang.org/[email protected]/server.go:746 +0xab
Mar 21 21:49:49 jeannie weed[998710]: created by google.golang.org/grpc.(*Server).serveStreams.func1
Mar 21 21:49:49 jeannie weed[998710]: /home/travis/gopath/pkg/mod/google.golang.org/[email protected]/server.go:744 +0xa5
Mar 21 21:49:49 jeannie systemd[1]: seaweed-master.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Mar 21 21:49:49 jeannie systemd[1]: seaweed-master.service: Failed with result 'exit-code'.
```
there are many more of the removing volume events, i.e. all of the volumes on that server.
The setup is 6 nodes, with security fully set up, version 2.34.
Three masters, started with:
```
weed -logdir=/var/log/seaweed/master master -mdir=/var/lib/seaweedfs -ip=192.168.1.119 -defaultReplication=001 -peers=192.168.1.119:9333,192.168.1.125:9333,192.168.1.117:9333
```
Six volume servers started with
```
weed -logdir=/var/log/seaweed/volume volume -port=9380 -dir=/mnt/seaweed001/data,/mnt/seaweed002/data,/mnt/seaweed003/data,/mnt/seaweed004/data -max=0,0,0,0 -ip=192.168.1.125 -mserver=192.168.1.119:9333,192.168.1.125:9333,192.168.1.117:9333 -dataCenter=1W251 -rack=Rack1
```
Then each node also has a filer and mount on it as well
```
weed -logdir=/var/log/seaweed/filer filer -ip=192.168.1.119 -port=8888 -master=192.168.1.119:9333,192.168.1.125:9333,192.168.1.117:9333 -disableHttp -peers=192.168.1.115:8888,192.168.1.117:8888,192.168.1.118:8888,192.168.1.119:8888,192.168.1.121:8888,192.168.1.123:8888,192.168.1.125:8888
weed -logdir=/var/log/seaweed/mount mount -memprofile=/var/log/seaweed/mount/memprofile -filer=192.168.1.119:8888 -dir=/mnt/seaweed
```
No other errors at the time of the crash, except that the volumes all jumped to the replacement master. I am seeing a lot of JSON Web Token errors on the servers (well, maybe one per hour), but not at the same time, so they are probably unrelated (if worrying; I'm currently backing up another server to this system and get the impression these might be timeouts caused by load). On a mount this looks like:
```
Mar 21 22:33:11 bigchin weed[1475578]: W0321 22:33:11 75578 upload_content.go:103] uploading to http://192.168.1.119:9380/79,01c0514f2ebc5ec1: unmarshalled error http://192.168.1.119:9380/79,01c0514f2ebc5ec1: wrong jwt
Mar 21 22:33:11 bigchin weed[1475578]: W0321 22:33:11 75578 upload_content.go:103] uploading to http://192.168.1.119:9380/79,01c0514f2ebc5ec1: unmarshalled error http://192.168.1.119:9380/79,01c0514f2ebc5ec1: wrong jwt
Mar 21 22:33:12 bigchin weed[1475578]: W0321 22:33:12 75578 upload_content.go:103] uploading to http://192.168.1.119:9380/79,01c0514f2ebc5ec1: unmarshalled error http://192.168.1.119:9380/79,01c0514f2ebc5ec1: wrong jwt
Mar 21 22:33:12 bigchin weed[1475578]: I0321 22:33:12 75578 wfs_write.go:62] upload data .immutable_collections.cpython-38.pyc.CCv6vA to http://192.168.1.119:9380/79,01c0514f2ebc5ec1: unmarshalled error http://192.168.1.119:9380/79,01c0514f2ebc5ec1: wrong jwt
Mar 21 22:33:12 bigchin weed[1475578]: I0321 22:33:12 75578 dirty_page.go:99] /backup_garlick_home_2021-3/<username>/anaconda3/envs/monodepth/lib/python3.8/site-packages/torch/fx/__pycache__/.immutable_collections.cpython-38.pyc.CCv6vA saveToStorage [0,1372): upload data: unmarshalled error http://192.168.1.119:9380/79,01c0514f2ebc5ec1: wrong jwt
Mar 21 22:33:12 bigchin weed[1475578]: E0321 22:33:12 75578 filehandle.go:247] /backup_garlick_home_2021-3/<username>/anaconda3/envs/monodepth/lib/python3.8/site-packages/torch/fx/__pycache__/.immutable_collections.cpython-38.pyc.CCv6vA doFlush last err: upload data: unmarshalled error http://192.168.1.119:9380/79,01c0514f2ebc5ec1: wrong jwt
Mar 21 22:33:12 bigchin weed[1475578]: E0321 22:33:12 75578 filehandle.go:229] Flush doFlush .immutable_collections.cpython-38.pyc.CCv6vA: input/output error
```
Happy to grab more information, but I'm assuming you're probably not going to wade through the logs of 21 processes!<issue_closed>
<issue_comment>username_1: Thanks for all the details! This has been reported before but lacked enough details. |
<issue_start><issue_comment>Title: Translate file "octane.md"
username_0: Notes for contributors: the file should be translated with full awareness and understanding of what you are translating. You must understand the techniques and explanations provided by the documentation file. The translation must be in Standard Arabic, and technical terms should be translated with the English term kept next to them, as follows:
For example, المتحكمات with the term next to it in English (Controllers)
For example, التهجير with the term next to it in English (Migrations)
And so on
Thank you for your contribution |
<issue_start><issue_comment>Title: Grouping troubles
username_0: Kairos! All right so I'm running your 0.9.3 on localhost:28080 and I send this as the body of a POST request to http://localhost:28080/api/v1/datapoints:
```
[
  {
    "name": "kairos-test-metric",
    "timestamp": 1359786400000,
    "value": 17,
    "tags": {
      "tagname1": "x"
    }
  },
  {
    "name": "kairos-test-metric",
    "timestamp": 1359786500000,
    "value": 71,
    "tags": {
      "tagname2": "x"
    }
  }
]
```
And the response comes back 204 all good. And then I send this as the body of a POST request to http://localhost:28080/api/v1/datapoints/query:
```
{
  "start_absolute": 0,
  "metrics": [
    {
      "name": "kairos-test-metric",
      "group_by": [{"name": "tag", "tags": ["tagname1", "tagname2"]}]
    }
  ]
}
```
And then what I get back is this:
```
{"queries":
[{"sample_size":2,
"results":[
{
"name":"kairos-test-metric",
"group_by":[{"name":"tag",
"tags":["tagname1","tagname2"],
"group":{"tagname1":"x","tagname2":""}}],
"tags":{"tagname1":["x"],"tagname2":["x"]},
"values":[[1359786400000,17],[1359786500000,71]]
}
]
}]
}
```
which I don't understand. Based on http://code.google.com/p/kairosdb/wiki/TagGrouping, I'm expecting something more like this:
```
{"queries":
[{"sample_size":2,
"results":[
{
"name":"kairos-test-metric",
"group_by":[{"name":"tag",
"tags":["tagname1","tagname2"],
"group":{"tagname1":"","tagname2":"x"}}],
"tags":{"tagname2":["x"]},
"values":[[1359786500000,71]]
},
{
"name":"kairos-test-metric",
"group_by":[{"name":"tag",
"tags":["tagname1","tagname2"],
"group":{"tagname1":"x","tagname2":""}}],
"tags":{"tagname1":["x"]},
"values":[[1359786400000,17]]
}
]
}]
}
```
Are you broken Kairos or do I have Moronic User Error? You can tell me. Thanks!
<issue_comment>username_1: Hi. I am facing this issue while calling the REST API from a Python script. Was this resolved?
I am using kairosdb-0.9.4-6. |
<issue_start><issue_comment>Title: VIH-6762 Seed data for notify templates
username_0: ### JIRA link (if applicable) ###
https://tools.hmcts.net/jira/browse/VIH-6762
### Change description ###
Seed data for notify templates
**Does this PR introduce a breaking change?** (check one with "x")
```
[ ] Yes
[ ] No
``` |
<issue_start><issue_comment>Title: Implicit declaration of function ‘umask’
username_0: https://github.com/username_0/port-mirroring/blob/07f74e1f24cb722d337de244f9e60002bee35bab/port-mirroring.c#L974
```
port-mirroring.c: In function ‘fork_daemon’:
port-mirroring.c:974:5: warning: implicit declaration of function ‘umask’ [-Wimplicit-function-declaration]
umask(0);
^
```<issue_closed> |
<issue_start><issue_comment>Title: Problems regarding to evaluate on MOT test set
username_0: Hi,
Thank you for sharing this nice work.
I tried to run the evaluation part; however, I was not able to do so, as the gt.txt files are missing for the test set.
FileNotFoundError: [Errno 2] No such file or directory: '/home/username_0/mot_neural_solver/data/MOT_eval_gt/TUD-Crossing/gt/gt.txt'
I tried to search the repository and the MOT website as well, but I couldn't find the gt files for the test set.
Can you help in this case?
Thanks
<issue_comment>username_1: Hi,
Thanks for your comment :)
The sequence 'TUD-Crossing' belongs to the MOT15 Challenge test set. You cannot download ground truth annotations for any test sequence for any of the MOTChallenge datasets. Evaluation of test sequences is performed in the MOTChallenge server: you have to submit your results to the website, and you will get back your performance metrics.<issue_closed> |
<issue_start><issue_comment>Title: Make baucis-json-strict the default for v2.0.0
username_0: Publish it as baucis-json v2.0.0<issue_closed>
<issue_comment>username_1: +1
<issue_comment>username_0: @username_1 Fyi you can use baucis-json-strict now, when using v1.x of baucis. That's probably obvious, but thought I might as well mention it.
<issue_comment>username_1: Superb! :-) Thanks for it.
We will benefit from it asap! |
<issue_start><issue_comment>username_0: @username_1 r+
<issue_comment>username_1: :pushpin: Commit 91aac0ccaedb8a4f9d644cf03d5ec89d14c9487b has been approved by `username_0`
<!-- @username_1 r=username_0 91aac0ccaedb8a4f9d644cf03d5ec89d14c9487b -->
<!-- homu: {"type":"Approved","sha":"91aac0ccaedb8a4f9d644cf03d5ec89d14c9487b","approver":"username_0"} -->
<issue_comment>username_1: :hourglass: Testing commit 91aac0ccaedb8a4f9d644cf03d5ec89d14c9487b with merge 52f4f3e898b1f62669c1193030f5d7468e6fbf48...
<!-- homu: {"type":"BuildStarted","head_sha":"91aac0ccaedb8a4f9d644cf03d5ec89d14c9487b","merge_sha":"52f4f3e898b1f62669c1193030f5d7468e6fbf48"} -->
<issue_comment>username_1: :sunny: Test successful - [checks-actions](https://github.com/rust-lang/crates.io/runs/5635312612?check_suite_focus=true)
Approved by: username_0
Pushing 52f4f3e898b1f62669c1193030f5d7468e6fbf48 to master...
<!-- homu: {"type":"BuildCompleted","approved_by":"username_0","base_ref":"master","builders":{"checks-actions":"https://github.com/rust-lang/crates.io/runs/5635312612?check_suite_focus=true"},"merge_sha":"52f4f3e898b1f62669c1193030f5d7468e6fbf48"} --> |
<issue_start><issue_comment>Title: Androidx dependency notations big update
username_0: This PR adds KDoc for AndroidX dependency notations, and new dependency notations.
<issue_comment>username_1: My god!
<issue_comment>username_0: I'm barely halfway. I'll be very grateful if you (or anyone) can help me in a pairing session where we can type faster and review together.
<issue_comment>username_0: 109 left to document out of 320 (not including some that I removed along the way) 🥲
<issue_comment>username_2: @username_0 I'd love to help out but I'm not sure how useful it will be since I'm fairly inexperienced. I had to update some of the AndroidX dependencies we use at work recently so I am kind of familiar with where the release notes are. Do you have conventions/templates in place? Perhaps you could give me a brief overview of what you are trying to accomplish?
<issue_comment>username_0: @username_2 If you look at the changes in the `AndroidX.kt` from this PR, you'll see what I'm doing.
No need for special skills or experience apart from being very thorough and noticing special cases.
If you're up to help on that through a pairing session, reach out via Kotlin Slack DM, Twitter DM, or `@louiscad` through Telegram so we find a time slot that suits both of us.
<issue_comment>username_0: 72 left, 35 of which are in the `AndroidX.compose` family. This is out of a total of 323 so far.
That's less than 23% left \o/
<issue_comment>username_0: 37 left out of 321!
<issue_comment>username_0: Done! 🎉🥳🙏🎉🎉🎉🎉🎉🎉🎉
Thank you so much @username_2!
I'm so happy it's now complete.
Just in time for Christmas release! 🎁 |
<issue_start><issue_comment>Title: Use an updated version of the s3cli in stemcell_builder.
username_0: We will now download a binary (version 0.0.11) that is promoted from the
s3cli pipeline. This commit introduces an S3 client that is able to
get/pull from AWS China S3.
[#105488550](https://www.pivotaltracker.com/story/show/105488550) |
<issue_start><issue_comment>Title: Automatically create .gitignore
username_0: **Is your feature request related to a problem? Please describe.**
.gitignore not created in new repos using bootstrapping
**Describe the solution you'd like**
Create a .gitignore. Most helix-services have the same structure, with a node_modules directory etc., so we don't want to have to reinvent the wheel listing things we don't want in our git commits.
**Describe alternatives you've considered**
copy and paste .gitignore from another repo<issue_closed> |
<issue_start><issue_comment>Title: Make duration field optional on application forms
username_0:
<issue_comment>username_1: I tested, but the concept note won't save https://test-apply.opentech.fund/admin/funds/applicationform/edit/23/
<img width="1046" alt="Screen Shot 2021-04-12 at 7 25 01 PM" src="https://user-images.githubusercontent.com/20019656/114435730-c8d17e80-9bc4-11eb-8111-fac0ec130b73.png">
<issue_comment>username_0: Thanks for the report, must have missed a part of the code. Will fix that and then we can retest.
<issue_comment>username_0: @username_1 Please try on the new Hypha test site instead. It has the test, content and the same user accounts etc.
https://test-apply.hypha.app/admin/funds/applicationform/edit/23/
I will specify the test address clearly moving forward.
<issue_comment>username_1: @username_0 Thank you, please release it. I no longer have the administrative access to change the labels.<issue_closed>
<issue_comment>username_0: Right now the Wagtail system enforces that the "project duration" field be included (along with application title, applicant name, and applicant email address).
If a user tries to create a form and does not add this field, their work cannot be saved.
It is possible that the way the project duration question is asked will need to change in the future, so we're making this field no longer a Wagtail-enforced required field.
<issue_comment>username_0: This is now on test again. |
<issue_start><issue_comment>Title: GKE: Cannot connect from container to subnet connected via VPN
username_0: I have a GCE network connected to an AWS network over a GCP VPN. I can connect from a Kubernetes node to a box in AWS, but I can't connect from inside a container. Traffic seems to stop at the container bridge. The cluster was created using Google Container Engine (GKE).
<issue_comment>username_1: @username_3
Hello, I'm facing the same problem here. Pod cannot reach IP address behind VPN.
But this rule works: `iptables -t nat -A POSTROUTING -s 10.192.1.0/24 ! -o cbr0 ! -d 10.192.0.0/14 -j MASQUERADE`. Is it safe to use? Or is there a better solution for this problem?
<issue_comment>username_2: @username_3 was this ever triaged internally?
<issue_comment>username_3: This should be fixed with the current GCE VPN product, but I have not manually verified it.
<issue_comment>username_4: I encountered this problem today using the current VPN product (with BGP no less). I found that a specific NAT rule in iptables was a bit overzealous:
```
-A POSTROUTING ! -d 10.0.0.0/8 -m comment --comment "kubenet: SNAT for outbound traffic from cluster" -m addrtype ! --dst-type LOCAL -j MASQUERADE
```
That basically says NAT anything that's not going to 10.0.0.0/8. However, my cluster IP space is only 10.48.0.0/14.
I changed the rule to this:
```
-A POSTROUTING ! -d 10.48.0.0/14 -m comment --comment "kubenet: SNAT for outbound traffic from cluster" -m addrtype ! --dst-type LOCAL -j MASQUERADE
```
And at least so far this is happier. I believe that I have seen:
- pods talk to each other across hosts
- pods talk to resources on my corporate network (across the VPN)
- pods talk to resources on the Internet
<issue_comment>username_5: If you do that, then pod IPs are lost on traffic that goes to your own network but isn't in your kube cluster. Basically, that rule is designed to appease the one-to-one edge NAT, which requires the source IP to be the VM's.
Are you saying that GCE's VPN is dropping pod traffic?
<issue_comment>username_4: I want the pods to get NATed before going back to my corporate network. Not
sure about everyone else but I'm unwilling to take an entire /14 and
dedicate it to a single k8s cluster (there are only 64 /14s in 10/8) and so
I do not route it anywhere. It's an island.
If we had ipv6 (we do, a /48 in total) my opinion might be different.
<issue_comment>username_2: Just to confirm what worked for me:
Setting up a /16 subnet in GCP and deploying a kube cluster to it with it's own /16.
10.10.0.0/16 - subnet
10.11.0.0/16 - kube cluster ip range
Then adding 10.10.0.0/15 to the routing table on the AWS side of the VPN.
<issue_comment>username_3: In that case, you need to manage the additional masquerade rule
yourself. You can write a pretty trivial DaemonSet that simply
ensures that the iptables rule you want is installed. It will run on
every machine. We've used this pattern for other "tweak the system"
sorts of things.
I can help you craft that DaemonSet if you need, but it sounds like
you have a good sense of what to do. Just run privileged and
hostNetwork :)
<issue_comment>username_6: I ran into this too, as corporate subnets are within 10.0.0.0/8 here. Masquerading of pod IPs worked here (and still works on some nodes) because of two additional NAT rules, though I don't know where they come from:
```
-A POSTROUTING -d 10.192.52.0/22 -o eth0 -j MASQUERADE
-A POSTROUTING -d 10.0.0.0/24 -o eth0 -j MASQUERADE
```
Routes to those networks are defined as GCE routes with CloudVPN tunnels as next hop.
@username_3 How would you do a "one-shot" DaemonSet to add/change some iptables rules? Just run privileged and with hostNetwork, run `iptables ...` and then `sleep infinity`?
<issue_comment>username_7: @mikedanese has documentation about how to run a daemonset to do node configuration in https://github.com/kubernetes/contrib/pull/892
<issue_comment>username_3: I wouldn't sleep inf, I would actually write it to check that the rules you
need are present, and if not add them, then sleep for 1 minute and repeat.
Self-healing FTW
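For reference, a rough, untested sketch of that pattern (the apiVersion, image, CIDR, and interface are placeholders to adapt, not a supported manifest):
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: masq-fixup
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: masq-fixup
  template:
    metadata:
      labels:
        app: masq-fixup
    spec:
      hostNetwork: true
      containers:
      - name: fixup
        image: alpine:3.12  # any small image that can install iptables
        securityContext:
          privileged: true
        command: ["/bin/sh", "-c"]
        args:
        - |
          apk add --no-cache iptables
          while true; do
            # -C checks whether the rule already exists; append only if missing
            iptables -t nat -C POSTROUTING ! -d 10.68.0.0/14 -o eth0 -j MASQUERADE 2>/dev/null \
              || iptables -t nat -A POSTROUTING ! -d 10.68.0.0/14 -o eth0 -j MASQUERADE
            sleep 60
          done
```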
<issue_comment>username_8: Is using the daemonset the recommended/only way to move forward with this?
<issue_comment>username_3: There's a flag that controls what single range gets not-masqueraded, but in
GKE flags don't persist well. A DS is your best bet
<issue_comment>username_8: Ok, thank you. How do you recommend getting the cluster IP range programmatically to pass to the `iptables` command in the DS?
To be clear, for the container address range 10.68.0.0/14, I am running `sudo iptables -t nat -A POSTROUTING ! -d 10.68.0.0/14 -o eth0 -j MASQUERADE`, which then gives me connectivity.
<issue_comment>username_7: The GKE API will provide you the cluster CIDR. You can use gcloud or a raw API call to get it. I'd recommend making it a parameter on your DS and fetching/configuring it once rather than letting each pod do it dynamically for a couple of reasons:
1. The value never changes over the life of the cluster so it's unnecessary to keep asking for it (and then implement proper retry / failure handling)
1. It would require the cloud platform scope on your VMs which you may not otherwise need.
<issue_comment>username_9: Hi guys, I have the same issue too.
The following command was required in order to get connectivity through the VPN on GKE.
```
iptables -t nat -A POSTROUTING ! -d 10.184.1.0/24 -o eth0 -j MASQUERADE
```
Ideally it would be configured at node startup, for instance whenever I scale up the k8s cluster.
How would you do that, please?
<issue_comment>username_8: @username_7 what is the raw API call to get it?
I attempted the following methods by `gcloud compute ssh _node_` first to test, but I am stuck:
- Running `gcloud container clusters describe test-proxy --format 'value(clusterIpv4Cidr)'` gives `ERROR: (gcloud.container.clusters.describe) ResponseError: code=403, message=Request had insufficient authentication scopes.`. And on the new GCI image VMs, `gcloud` is not installed.
- `kubectl cluster-info dump` or `kubectl proxy` to try to parse the output yields `The connection to the server localhost:8080 was refused - did you specify the right host or port?`. How can I determine what port to set to connect to Kubernetes?
<issue_comment>username_7: Since you are running this from a node you'll need to pass kubectl a proper kubeconfig file. The easiest way is to impersonate the kubelet -- copy `/var/lib/kubelet/kubeconfig` and add in the address of the apiserver (can be found by running `ps -elf | grep kubelet` and looking for the `--api-servers` flag). Then pass that file to kubectl as a flag (`--kubeconfig`).
<issue_comment>username_8: @username_7 no worries, I appreciate your response!
I was able to get the docker image method to run `gcloud` commands!
```
ZONE=$(curl -s -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/zone | cut -d/ -f4)
docker run -it google/cloud-sdk:latest gcloud container clusters describe test-proxy --format 'value(clusterIpv4Cidr)' --zone $ZONE
10.72.0.0/14
```
I avoided the [apt-get method](https://cloud.google.com/sdk/downloads#apt-get), in case apt-get becomes no longer available.
Thank you for providing details on the config steps to get `kubectl` running, I was able to use `kubectl` via the described method in case someone else wants to access `kubectl` on GCI. This method doesn't seem very "automatable". I am able to parse `kubectl cluster-info dump` after manually editing the `kubeconfig` file though:
```
kubectl --kubeconfig kubeconfig cluster-info dump | grep -oE "cluster-cidr=([0-9]{1,3}\.){3}[0-9]{1,3}\/([1-2][0-9]|3[0-2]|[0-9])\b" | cut -d'=' -f2 | uniq
```
Some questions:
1. are there plans to include gcloud on GCI images? It seems like a sane default/include?
1. would it be sensible to ask for a GCE instance's `--zone` to be available as an environment variable by default?
1. what are your thoughts on including the cluster-cidr as an environment variable in a pod's containers?
<issue_comment>username_7: You could do this yourself if you wanted (although it might require some pre-processing of yaml/json files to customize per cluster).
We already inject a bunch of environment variables automatically so I suppose it would be possible, but we'd want to be careful about how we do it. We are also talking about making a cluster config as part of cluster bootstrapping in the cluster lifecycle SIG, so that could be another well known place to grab the value from (e.g. a well known config map or API object).
As an aside, I expect that the cluster-cidr will end up turning into cidrs (plural) as we will want to add functionality for a cluster to grow into non-contiguous IP space. So we'd need to be careful to make the mechanism of producing the cidr allow for expansion in the future.
<issue_comment>username_3: a) We don't want to inject more automatic variables. We should prefer to
publish the configuration via a ConfigMap and let users ingest it if they
want it.
b) Please don't assume a single CIDR for the whole cluster. It's a bad
assumption, and it will break in the future.
<issue_comment>username_8: Thanks @username_3. I believe we are going to change it so that it only NATs the traffic heading for the VPN so we won't need to find the cluster CIDR(s).
<issue_comment>username_8: For those who have implemented this, has anyone encountered MTU issues with the VPN + GCP L4 internal load balancers + GKE setup?
We need to lower the MTU on the GKE nodes (running GCI) to 1400, but there doesn't seem to be a good way to do this. We can't do it from the running containers (because app teams use their own containers and shouldn't be concerned with MTU settings), and we don't want to override kubelet options (because overriding platform components seems like a bad idea™).
<issue_comment>username_11: @username_8 Hi Tony, just curious if you ended up implementing the DS way of adding iptables rules for SNATing the pod IP to the node IP on GKE. Thanks.
<issue_comment>username_11: I use the same DS as specified in https://blog.mrtrustor.net/post/iptables-kubernetes/. It works.
<issue_comment>username_12: I am facing the same issue trying to connect to some VMs on azure from GKE via VPN.
Can someone clarify why the MASQUERADE is required in the first place? Shouldn't declaring a route on the Azure side for the podCidr be enough?
<issue_comment>username_3: As long as you have appropriate routes set up on both ends, and that
includes the pod IP space (not just VM space) VPN should work.
<issue_comment>username_11: @username_12 If you don't add MASQUERADE (i.e. SNAT), the VPN won't allow the egress traffic from GKE to Azure. Alternatively, https://cloud.google.com/container-engine/docs/ip-aliases#using_an_existing_subnetwork_with_secondary_ranges can come to the rescue.
<issue_comment>username_13: *Update for those still coming here for solutions.*
Nowadays, the proper way to achieve this is to deploy Kubernetes' IP Masquerade Agent and configure it so that only truly cluster-local destinations are excluded from masquerading, rather than the entire RFC1918 CIDRs.
That way traffic to other destinations in private networks is masqueraded properly and accepted by VPN gateways.
Example `ip-masq-agent` ConfigMap:
```yaml
nonMasqueradeCIDRs:
- 10.184.0.0/14 # The IPv4 CIDR the cluster is using for Pods (required)
- 172.16.32.0/20 # The IPv4 CIDR of the subnetwork the cluster is using for Nodes (optional, works without but I guess its better with it)
masqLinkLocal: false
resyncInterval: 60s
```
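Assuming the config above is saved in a file named `config`, it can be installed with something like:
```sh
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
```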
Note: the `ip-masq-agent` is installed by default since version 1.7.0 with Network Policy enabled.
It is also supposed to be installed by default when using a cluster CIDR not in the 10.0.0.0/8 range, but this was not true for some 1.8.5 clusters I checked where Network Policy is disabled.
For reference, see https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent
This eliminates the need to use a startup-script to setup the iptables rules.
<issue_comment>username_14: Thanks, this was exactly what I needed to make things work. It was very hard to understand this from the documentation. It also took some time for the config to apply. Any hints on how to reach the nodes' network (services) from the on-prem side? A static route did not seem to be enough. |
<issue_start><issue_comment>Title: F-Droid can't build
username_0: `ERROR: Found unknown maven repo 'https://dl.bintray.com/ooni/android/' at build.gradle`
These libs are not available [anywhere else](https://gitlab.com/fdroid/fdroidserver/blob/master/fdroidserver/scanner.py#L100)? Then maybe we'll need to build them...
<issue_comment>username_1: I tried to make them available to Bintray JCenter and the process failed saying the POM file format was not valid. I did not bother with trying again because that repo was working for us.
```XML
<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<modelVersion>4.0.0</modelVersion>
<groupId>org.ooni</groupId>
<artifactId>oonimkall</artifactId>
<version>2020.05.14-133636</version>
<packaging>aar</packaging>
</project>
```
[This Stack Overflow answer](https://stackoverflow.com/questions/52459232/invalid-pom-project-file-when-sending-request-to-include-bintray-package-in-jcen) highlights some of the reasons why JCenter could reject a POM. So maybe I can try adding them and retry making the package available through JCenter. (In the past, a POM like that was okay, so I believe they may have made their rules a bit more strict.)
Thanks for letting us know.
<issue_comment>username_1: Made some progress by making the JCenter like our POM: https://github.com/ooni/probe-engine/pull/667.
<issue_comment>username_1: We should be able to fix this within the next two weeks. Now the package we require has also been added to JCenter. We will fix the `build.gradle` in https://github.com/ooni/probe/issues/1191.
<issue_comment>username_1: @username_0 AFAICT:
1. the gitlab pipeline [is now building probe-android](https://gitlab.com/fdroid/fdroiddata/-/commit/8c4c05d22b98f98c208e03fc6a741d249c839b8b)
2. my understanding is that version 2.5.0 is now supported by f-droid
3. yet, [the latest available version is 2.3.2](https://f-droid.org/en/packages/org.openobservatory.ooniprobe/)
Is there anything else we should do? Thanks!
<issue_comment>username_0: Yeah, how to fix this now https://f-droid.org/wiki/index.php?title=org.openobservatory.ooniprobe/lastbuild_61&oldid=271360 ? :)
<issue_comment>username_1: So, maybe a stupid question: how do I get to that link from https://f-droid.org/wiki/page/org.openobservatory.ooniprobe? Regarding the build failure, I'll route the question to @username_2, who knows these details better than me. Thanks for replying so quickly!
<issue_comment>username_0: The Wiki is obsolete since Dec 2018, so do not use that page.
Bookmark, current cycle: https://f-droid.org/wiki/index.php?title=Special:RecentChanges&days=7&from=&hidebots=0&hideanons=1&hideliu=1&limit=500
Your app: https://f-droid.org/wiki/index.php?title=org.openobservatory.ooniprobe/lastbuild or lastbuild_VERSIONCODE
<issue_comment>username_1: @username_0 I opened a merge request that we think should fix the build: https://gitlab.com/fdroid/fdroiddata/-/merge_requests/7041
I am not sure however, whether this is enough to fix the problem in general, or whether changes need also to be implemented in other places.
<issue_comment>username_2: This was caused by the fact that we added more build flavours, and the `fdroid` build is now called `stableFdroid`.
<img width="206" alt="Screenshot 2020-07-02 at 13 20 26" src="https://user-images.githubusercontent.com/1616097/86356746-0fc13980-bc6d-11ea-9d69-1bf20cd9ed80.png"><issue_closed>
<issue_comment>username_1: The [pipeline build](https://gitlab.com/fdroid/fdroiddata/-/jobs/622060673#L189) also shows a file not found error:
```
Traceback (most recent call last):
File "./issuebot.py", line 690, in <module>
main()
File "./issuebot.py", line 653, in main
appids, builds = get_builds_in_merge_request(gl, merge_request)
File "./issuebot.py", line 597, in get_builds_in_merge_request
with open(metadata_file) as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'metadata/io.simplelogin.android.fdroid.yml'
```
But it's unclear to me whether that's a fatal error.
<issue_comment>username_0: Ignore the bot and that specific error, it's a random infra issue.
Maybe next cycle will build it.
<issue_comment>username_1: okay, thanks!<issue_closed> |
<issue_start><issue_comment>Title: Investigate snap sync
username_0:
### Description
As an [Actor], I want [feature] so that [why].
### Acceptance Criteria
* [Criteria 1]
### Steps to Reproduce (Bug)
1. [Step 1]
2. [Step 2]
3. [Step ...]
**Expected behavior:** [What you expect to happen]
**Actual behavior:** [What actually happens]
**Frequency:** [What percentage of the time does it occur?]
### Versions (Add all that apply)
* Software version: [`besu --version`]
* Java version: [`java -version`]
* OS Name & Version: [`cat /etc/*release`]
* Kernel Version: [`uname -a`]
* Virtual Machine software & version: [`vmware -v`]
* Docker Version: [`docker version`]
* Cloud VM, type, size: [Amazon Web Services I3-large]
### Additional Information<issue_closed> |
<issue_start><issue_comment>Title: feat: dynamically adjust width of AA pixels to avoid overly long mutation groups
<issue_comment>username_0: This PR changes how long deletions (or other groups of mutations) are displayed. Their size is currently fixed to at least a constant, so large groups take up more space than they should. This results in situations like the one below, where the red mutation is actually at the same position (253) in the two sequences:

In the lower case, it is part of a group, and the group stretches further to the right than it should. This can also potentially obscure mutations. The PR changes the scaling so that the size of a group doesn't increase linearly with its size:

<issue_comment>username_1: @username_0 In the new version I don't see the mutation at all. I don't know which trade-offs are the most useful here, so I rely on your opinion and experience using Nextclade.
@username_2 What do you think?
<issue_comment>username_1: An additional or an alternative feature we may consider:
[feat: make results and tree pages full width #640](https://github.com/nextstrain/nextclade/pull/640)
<issue_comment>username_2: Looks good to me. It would be interesting to merge the wide screen PR into this one so we can tune the width accordingly. |
<issue_start><issue_comment>Title: Add datatable by h2oai
username_0: This package has > 1000 GitHub stars.
https://github.com/h2oai/datatable
<issue_comment>username_1: Thanks @username_0 for adding this package along with a test. The build succeeds.
A new Docker image including this package will be released later this week. |
<issue_start><issue_comment>Title: Common CSS/JS for Multilingual Blogs
username_0: Hey, I found the best way for me to get a truly multilingual blog is to use some .bat files to run separate blogs for each language (I can share my files if anybody wants to know how to do this). The CSS and JS files for all of the sites are the same, though. How can I adjust hexo to emit the JS and CSS to a folder that I specify, and then use that folder, and NOT the blog's root folder, for the /css and /js folders?
I hope I am making myself clear, as it can be a bit confusing! For example, I have these blogs:
http://monstercoding.com/blog_en/
http://monstercoding.com/blog_es/
http://monstercoding.com/blog_it/
And I want the CSS and JS files for these blogs to reside in the root CSS and JS folders (or another root folder, like /common_blog/), and NOT in the actual blogs' subfolders, as this is really redundant.
Thanks in advance!
<issue_comment>username_0: I hacked up a solution by hard-coding the CSS and JS paths directly into the EJS template. I have to use my .bat file to move those files to the appropriate location for the first blog and delete the folders for all of the other blogs. It feels like an awful waste of temp file space as well as generation time, but I can't see another way to do this right now. The bottom line is that multilingual blog generation in hexo right now is a completely hacked-up affair; there is certainly a better way to do all of this!
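For reference, the hard-coded part in the EJS template ends up roughly like this (using the /common_blog/ folder mentioned above; file names hypothetical):
```html
<link rel="stylesheet" href="/common_blog/css/style.css">
<script src="/common_blog/js/script.js"></script>
```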
<issue_comment>username_1: Hello, we are studying how to modify the generator plug-in to generate multi-language blogs without running separate blogs for each language. Please join the discussion here: https://github.com/hexojs/hexo/issues/1125
Thanks! |
<issue_start><issue_comment>Title: Make "replicate ls" not get larger and hard to read over time
username_0: This is covered in detail in #284. In short, as you make more and more experiments with more and more params, each experiment in `replicate ls` will grow taller and taller, making the output unusable.
Some ideas:
- Lean on experiment grouping as a way to fix this #297
- By default only show most recent experiments, then only params changed recently will be displayed
- Intelligently display only relevant params, somehow |
<issue_start><issue_comment>Title: Move most data bindings to garfield db
username_0: Document-related stuff is still in a separate `documents` schema, and carts (which are as of yet the only really odie-specific kind of data) are still in their own `odie` schema.
<issue_comment>username_1: Readme should probably mention (how) to create the two DBs and register the services.
<issue_comment>username_0: Is this stuck on something? Otherwise I'd like to merge this and follow it up with a small PR that factors out application config into a separate file.
<issue_comment>username_1: Yes, on me not being notified of new pushes :) |
<issue_start><issue_comment>Title: jless exits when trying to search if input was stdin
username_0: If I do `curl https://api.raydium.io/pools | jless` and then type `/` to start a search, jless exits immediately.
If instead I use `curl … > /tmp/pools.json` and then `jless /tmp/pools.json`, I can search just fine.
At least I have a workaround :), thanks for this nice tool.
(woohooo, issue #1 ;))<issue_closed> |
<issue_start><issue_comment>Title: How to get back extra bits when having fewer illegal characters
username_0: In my JS use-case I found that I can allow ampersand and also allow null (taking into account that HTML converts it to 65533). Thus I am down from 6 to 4 illegal characters, requiring only 2 bits to encode instead of 3. I am trying to figure out how to change the scheme to get back that extra bit. |
<issue_start><issue_comment>Title: Correct off by one in ++gulf
username_0: ~[1 2 3 4 5 6 7 8 9 10]
```
<issue_comment>username_1: Ha! Some might consider this more of an opinion. Please tell me you changed all dependencies...
<issue_comment>username_0: The only thing extant that uses it is the various octo versions, which are being cleaned up anyway.
<issue_comment>username_1: Awesome.
<issue_comment>username_2: Inconsistent with the typical C for loop, but consistent with general Hoon practice |
<issue_start><issue_comment>Title: Unset saved errors on a POST
username_0: If a user submits an invalid input and then tries again with an input that triggers an exit page, on going back the previous error still exists in the session and so is displayed. Reset any saved errors on any POST to ensure that a previous POST's errors are removed.
<issue_comment>username_1: :+1: |
<issue_start><issue_comment>Title: Suggestion: Contract Programming
username_0: Contract programming is a very nice way to prevent undetected failures. Taking inspiration from [D](http://dlang.org/contracts.html), here is the start of a proposal:
A new compiler flag `-contract` (or `-debug`); without this flag, the compiler would just ignore contract-related code.
### Function :
```typescript
function myFunc(param: number): number
in {
  assert(param > 0); // the assertion library is not part of TypeScript; any third-party one works
}
out (result) {
assert(result < 0);
}
body {
return -param;
}
```
would emit :
```javascript
function myFunc(param) {
(function () {
assert(param > 0);
})();
var _$result = (function (param) {
return -param;
}).apply(this, arguments);
(function (result) {
assert(result < 0);
}).call(this, _$result);
return _$result;
}
```
### Class
Respecting a D-like contract for classes is challenging, especially the rules about inheritance, and the outputted JS might become unreadable. Here is a 'simplified' proposition without any inheritance logic.
```typescript
class MyClass {
message: string = 'hello ';
invariant {
this.replaceMessage('test'); /// error invariant cannot call public member
assert(typeof this.message === 'string');
}
constructor(name: string)
in {
assert(typeof name === 'string');
}
//no out authorized for constructor
body {
this.message = this.message + name;
}
replaceMessage(message: string): string
in {
assert(message);
}
out (result) {
assert(result);
}
body {
this.message = message;
return this.message;
}
sayMessage() {
alert(this.message);
}
}
class ExtendedClass extends MyClass {
invariant {
assert(this.message.length > 0);
}
}
[Truncated]
MyClass.__$invariant = function () {
this.replaceMessage('test');
assert(typeof this.message === 'string');
}
return MyClass;
})();
var ExtendedClass = (function (_super) {
__extends(ExtendedClass, _super);
function ExtendedClass() {
_super.apply(this, arguments);
}
ExtendedClass.__$invariant = function () {
_super.__$invariant.call(this);
assert(this.message.length > 0);
}
return ExtendedClass;
})(MyClass);
<issue_comment>username_1: :+1:
Came here out of curiosity after viewing the discussion for a similar feature in C# over @ Roslyn.
A similar syntax as is being proposed over there might work:
```typescript
insert<T>(item: T, index: number)
requires index >= 0 && index <= this.count
ensures return >= 0 && return < this.count
{
return this.insertCore(item, index);
}
```
which still comes down to the same:
```javascript
Array.prototype.insert = function(item, index) {
if(!(index >= 0 && index <= this.count)) {
throw "Failed requires.";
}
var __returnVal = this.insertCore(item, index);
if(!(__returnVal >= 0 && __returnVal < this.count)) {
throw "Failed ensures.";
}
}
```
or
```javascript
Array.prototype.insert = function(item, index) {
Assert.isTrue(index >= 0 && index <= this.count));
var __returnVal = this.insertCore(item, index);
Assert.isTrue(__returnVal >= 0 && __returnVal < this.count);
}
```
<issue_comment>username_2: Could decorators be used to support this? As suggested in the linked issue, something like:
```typescript
@precondition("param > 0")
@postcondition("result < 0")
function myFunc(param: number): number {
    return -param;
}
```
Now I don't have the experience to say, but it looks as if something like this would work without any language changes?
Perhaps you could even use proper expressions instead of string constants? Btw I'm just guessing here as I haven't taken the time to fully understand decorators, but I'm still interested in code contracts.
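For what it's worth, a rough runtime sketch of a precondition as a *method* decorator (decorators only apply to class members, not free functions, so the function is wrapped in a class here; the predicate-instead-of-string style and all names are mine, not an established API):
```typescript
function precondition(check: (...args: any[]) => boolean) {
    return function (target: any, key: string, descriptor: PropertyDescriptor) {
        const original = descriptor.value;
        descriptor.value = function (...args: any[]) {
            // Check the incoming arguments before running the body
            if (!check(...args)) {
                throw new Error(`Precondition failed for ${key}`);
            }
            return original.apply(this, args);
        };
        return descriptor;
    };
}

class Negator {
    @precondition((param: number) => param > 0)
    negate(param: number): number {
        return -param;
    }
}
```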
<issue_comment>username_3: I love the idea. I've used CodeContracts in my C# projects for years but recently dropped it because the development is not as active as it should be. I really dislike the proposed syntax, though.
I really love the syntax proposed by @username_1, and iirc it's similar to SpecSharp.
<issue_comment>username_4: Does the TypeScript team even feel favorable to the very concept of contracts?
I feel this might become more and more popular thanks to people now being exposed to things like clojure.spec. To me, it seems the value of contracts/specs/etc. really (only) shines when used together with automatic/generative testing. Having to go to the proper screen and play with it until we reach the right state to trigger the proper function, to finally get our contract assert code to trigger, is not good and barely a step up from dynamic programming (React-likes do a lot of these runtime assertions).
Also, there might be some overlap with other tickets about dependent typing. Both have the same end goal: express value constraints, with types or not.
While I assume runtime assertions would be way easier to implement, I feel that dependent types would be a much superior alternative in the absence of generative testing.
<issue_comment>username_3: If they introduce contracts into the language then they _might_ provide static analysis, and in case they don't, anyone else can pick it up and do it; first, though, it needs to be formalized in the language.
<issue_comment>username_5: @username_4 just to add more information, contract-based programming has existed in Eiffel for a very long time and that's a language that compiles to C. They [provide different compilation options to check assertions and contracts at run-time](https://www.eiffel.org/doc/eiffel/ET%3A%20Design%20by%20Contract%20(tm),%20Assertions%20and%20Exceptions#Run-time_assertion_monitoring). You can use contracts to generate tests and documentation, but run-time assertions are something I would highly recommend. You want to fail fast and visibly (even in user-facing applications).
<issue_comment>username_6: In a way, TypeScript is doing this. StrictNullChecks for example will let you know upstream if you are going to cause a null reference. :)
<issue_comment>username_7: @username_6 That's closer to the distinction between `Type` and `Maybe<Type>` (where the latter is defined as `type Maybe<T> = T | void`). Constraints are different. There's a much greater difference between a sugared `Type? = Maybe<Type>` and `param: Type where param > 10`. Additionally, TypeScript retains an extraordinarily high bar on what features merit an actual emit. (Type-safe enums and namespaces hit that bar when they were first introduced, but integer casts haven't hit even close yet.)
As for where I stand, I'd prefer something that allows constraints to be verified at compile-time *without* an emit, but implementing it would be far from trivial best case scenario, and the compiler, from what I've gathered, is a massive hack in its current state.
<issue_comment>username_6: @username_7 I'm with you. I'm just happy to see it doing something similar at the moment.
Would love to see full contract capability.
<issue_comment>username_8: Looking forward to it!
<issue_comment>username_9: Just for reference: Contractual uses a compiler to generate the contracts. If the compiler is not used, the extra code remains as unused objects (maybe those are eventually stripped by the normal JS runtime or TS). See http://codemix.github.io/contractual/index.html#usage
<issue_comment>username_3: @username_9 Most DBC implementations exist as a compile-time feature, whether or not the ability to express the contracts is baked into the language and whether or not an analyzer is built into the compiler, but it's definitely not a run-time feature, so I'm not sure what you're on about.
<issue_comment>username_10: @username_0 It's been a long time since this feature request... do you know the current state of "design by contract" in TypeScript? Maybe there's a third-party library you could suggest?
<issue_comment>username_11: I'd love to see a library that mimics the `Preconditions` utility class from the Java library called Guava. Would need to be tailored a little bit to the language specifics, but for the most part it would be great just as is IMHO.
Been considering writing something like that myself for a long time, but time is always an issue of course.
Big thumbs up for this thread in general. 👍
<issue_comment>username_12: @username_11 https://github.com/codemix/contractual<issue_closed>
<issue_comment>username_13: This is very much outside our modern design goals. |
<issue_start><issue_comment>Title: Update for multi-user apps
username_0: * Remove old event callback URL in favor of simplified flow
* Add Store model
* Keep track of auth token against store, not user
* Add remove-user callback
* Add multi-user load support
* Add uninstall callback
* Remove token exchange code
* Misc cleanup and refactor
<issue_comment>username_0: ping @philipmuir @npadilla @username_1
<issue_comment>username_1: :+1:
<issue_comment>username_0: :hammer: |
<issue_start><issue_comment>Title: [WIP] Change postgresql92 SCL references to rh-postgresql94.
username_0: https://trello.com/c/aLn9eAj1
Related PRs:
https://github.com/ManageIQ/manageiq/pull/3676
https://github.com/ManageIQ/manageiq-appliance/pull/9 [this branch]
https://github.com/ManageIQ/manageiq-appliance-build/pull/19
Depends on:
- [ ] Depends on https://github.com/ManageIQ/manageiq-appliance-build/pull/19
<issue_comment>username_0: This is ready to be un-WIP'd and merged after ManageIQ/manageiq-appliance-build#19.
<issue_comment>username_1: I think selinux will probably be using equivalence
can you ensure that the right selinux tag is on the data directory?
ls -Z /opt/rh/rh-postgresql94/root/var/lib/pgsql/data/pg_log/
<issue_comment>username_0: UGH, they changed the default data directory, < I think >
`ls -Z /opt/rh/rh-postgresql94/root/var/lib/pgsql/data/pg_log/` doesn't exist. :cry:
`/var/opt/rh/rh-postgresql94/lib/pgsql/data` looks to be the new location.
So, the choices are: set the PGDATA environment variable and pass `-D #{PostgresAdmin.data_directory}` to `initdb`, or use the new default location everywhere.
Thoughts @username_1?
<issue_comment>username_0: In other words, it worked: we created the database disk in my test VM, but initdb set up the database in `/var/opt/rh/rh-postgresql94/lib/pgsql/data` and that's what's used when the database starts. It basically ignored our customization.
<issue_comment>username_0: cc @gtanzillo @username_2 Thoughts?
<issue_comment>username_2: My take is leverage APPLIANCE_PG_DATA everywhere for the appliance, the only place you may not be able to use are the couple of places in the kickstart itself, which is ok for now.
<issue_comment>username_0: I went with the route of specifying the PGDATA env variable in the hope that initdb and all postgres functions will use it to find the data directory. The alternative is to be at the mercy of the default data directory changing within the packaged postgresql.
Running a build with this to see how it goes.
<issue_comment>username_1: We are going to use the same version of postgres for a while. Let's not start adding all sorts of environment variables.
(Same as with Apache. We're not changing that directory name for the next year; adding an env variable is just adding complexity.)
<issue_comment>username_0: Yes @username_1, it didn't work anyway; it didn't honor the PGDATA environment variable. Instead, I'll move all the hardcoded paths to the new `/var/opt/rh/rh-postgresql94/lib/pgsql/data` path.
<issue_comment>username_0: This is ready to go now. I did a full end-to-end build with the other PRs and it works. :sunglasses: |
<issue_start><issue_comment>Title: Step by Step tutorial for teststack.White
username_0: I am a new user of TestStack.White and I really don't understand how to get started with it.
I wish to use TestStack.White to automate testing of a Silverlight application.
Please give me some step-by-step information on how to get started with White, and how to automate testing for a Silverlight application.
Thanks for taking the time to read this and answer my question.
<issue_comment>username_1: Hi,
I would recommend to check http://docs.teststack.net/White/GettingStarted.html. There you have at least basic steps on how to start with TestStack.
<issue_comment>username_0: hi username_1,
Thanks for the advice. I really don't understand what to do after reading that.
Does all the code there need to be run with Visual Studio? And do I just create a new project in Visual Studio for C#?
<issue_comment>username_1: Hi :)
yes, this is all C# code, which you write and run as part of a testing project in Visual Studio. There are plenty of tutorials on the web; Google will help.
Examples:
https://www.google.sk/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=visual%20studio%20create%20test%20project
https://www.google.sk/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=nunit%20tutorial%20c%23
<issue_comment>username_2: Silverlight is not really supported anymore. It had no tests/documentation when I picked up the library, and I am not willing to invest time in it when there are so many other things to focus on in White.
That said, there should be some blog posts around if you have a look.
Here is an article on the subject: http://www.codeproject.com/Articles/35042/Automated-Testing-in-Silverlight
<issue_comment>username_2: Here is another - https://social.msdn.microsoft.com/Forums/silverlight/en-US/cc41e065-7012-4096-9dc1-4d008881e523/silverlight-testing-with-white-trying-to-login-via-silverlight-business-application-templates?forum=silverlightarchieve
<issue_comment>username_0: Thanks username_1 and Jake,
Can I ask what kind of project I should create in VS? Is it an ASP.NET Web Application, or something else?
And is xUnit correct to use for Silverlight C#?
<issue_comment>username_0: Hi Pgurbal and Jake,
Is that only Visual Studio Ultimate or Visual Studio Premium can do the UI Test?
that's mean I can't do anything for automated UI Test using Visual Studio Professional right?
Do you know about others open source Automated testing tools for silverlight?
<issue_comment>username_2: @username_0 Ultimate and Premium have something called Coded UI which is a commercial UI testing tool.
You can use White with Professional/Community edition. Just create a normal class library project, install xUnit/nUnit or any other test framework library from nuget then you use white within a unit test.
I don't know of any other open source projects doing a similar thing to White.
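To make that concrete, here's a rough, untested sketch of such a test (the exe path, window title and AutomationId are placeholders; double-check the White API against the version you install from NuGet):
```csharp
using TestStack.White;
using TestStack.White.UIItems;
using TestStack.White.UIItems.Finders;
using Xunit;

public class MyAppTests
{
    [Fact]
    public void Clicking_the_ok_button_works()
    {
        // Launch the application under test (placeholder path)
        var application = Application.Launch(@"C:\path\to\YourApp.exe");
        try
        {
            var window = application.GetWindow("Your Window Title");
            var button = window.Get<Button>(SearchCriteria.ByAutomationId("okButton"));
            button.Click();
        }
        finally
        {
            application.Close();
        }
    }
}
```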
<issue_comment>username_0: @username_2
Thanks for the reply. You helped me understand White much better. So I need to install xUnit from NuGet for Silverlight, right?
<issue_comment>username_2: Correct, you create a *normal* .NET class library, not a Silverlight library. In your normal xUnit test you will get White to open a browser and navigate to your Silverlight application's URL, then you can start automating it.
<issue_comment>username_0: @username_2
when I install xunit there come out an error
"
Install failed. Rolling back...
Install-Package : Could not install package 'xunit.extensibility.core 2.0.0'. You are trying to install this package into a project that targets '.NETFramework,Version=v4.0
', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.
At line:1 char:16
+ Install-Package <<<< xunit -Version 2.0.0
+ CategoryInfo : NotSpecified: (:) [Install-Package], InvalidOperationException
+ FullyQualifiedErrorId : NuGetCmdletUnhandledException,NuGet.PowerShell.Commands.InstallPackageCommand
"
do you know what is this problem and how can I solve this?
<issue_comment>username_2: Xunit 2.0 only supports .net 4.5.
Sent from my Windows Phone
<issue_comment>username_0: @username_2 Thanks! You are helping me very much to get familiar with White!
<issue_comment>username_3: @username_2 I have created a YouTube demo video on SpecFlow & White for testing the UI of a .NET-based Windows desktop application. Please feel free to share it with those who need it.
@username_0 The 10 min. video should give you a better idea of working with White under the Visual Studio IDE.
https://youtu.be/LG_0SEeCwh0
<issue_comment>username_2: // @mwhelan @username_4
<issue_comment>username_4: Cool!
<issue_comment>username_0: @username_3 Thanks for the video. After viewing it, I have some questions to ask.
1. Are the steps the same when I want to carry out tests for Silverlight?
2. What application did you use to identify the object, and what name should I look for or type in? Is it "Automation ID" or "Process ID"? Is it the same if I use UI Spy?
Thanks for your help<issue_closed>
<issue_comment>username_2: @username_3 I have just moved the docs into the repo so it's easier to contribute. Did you want to put a link to your video into the Getting Started page of the docs?
Gonna close out this issue, but anyone in this thread, a PR to improve the getting started docs would be great
<issue_comment>username_3: @username_2 yes, that's fine.
<issue_comment>username_5: Hi,
Does TestStack.White support ASP.NET or web applications? Is it able to detect controls in an ASP.NET application?
<issue_comment>username_6: Hi @username_5 : please have a look at https://github.com/TestStack/White/issues/383
<issue_comment>username_5: Hi @username_6,
Thanks for your reply. I saw #383 but it still doesn't answer my question. What I exactly want to know is:
I am working on an ASP.NET application. The frameworks I am using for detecting UI controls are TestStack.White and Windows Automation. The object has enough properties, like Name, ControlType -> Hyperlink and FrameworkId -> "Internet Explorer", but I am still not able to detect the control.
Is White able to detect controls in an ASP.NET or web application?
<issue_comment>username_2: Nope, white is for desktop applications. I suggest Selenium instead.
<issue_comment>username_6: Hi @username_5 ... maybe you missed it in the linked issue.
"check it with UIAVerify or Inspect. If either of them shows something - then White also should"
But as @username_2 says. Selenium might be your better bet.
<issue_comment>username_5: Yes. You can
<issue_comment>username_7: Thanks for your reply! But how do I debug this solution if there is no executable project as the startup project? You know, there is only one project, which contains the class for the test methods, but it cannot be required in Main.
Sent from Fiona's iPhone
<issue_comment>username_5: Which editor are you using? Also, can you please elaborate on your problem?
<issue_comment>username_7: Hi Prachi, I am using VS 2017 and here is the screenshot of my simple source code

but I get this error when starting debugging

How can I generate the exe file?
<issue_comment>username_5: Right click under test method block and select debug.... |
<issue_start><issue_comment>Title: Errand `register_admin_ui' doesn't exist
username_0: I'm getting an error stating that this errand doesn't exist. I manually ran the uaac steps from https://github.com/cloudfoundry-incubator/admin-ui, and it seems to have given access to DEA, apps, and a couple of other things. But I'm still getting "This page requires data from services that are currently unavailable" on the orgs page, and a couple of others. Any idea why the errand doesn't exist? Maybe I used an outdated bosh release? I used the one from the README at https://admin-ui-boshrelease.s3.amazonaws.com/boshrelease-admin-ui-3.tgz.
<issue_comment>username_1: I actually have the exact same thing. I'm not able to explain why the errand is not found in the release, as I'm not that familiar with bosh. Running the ```uaac``` commands manually indeed gives basic access to the UI, but a lot of pages give the ```This page requires data from services that are currently unavailable``` error.
In other words, +1 for this issue and any help would be appreciated.
<issue_comment>username_1: Okay, so this was actually quite easy to fix. As suggested in https://github.com/cloudfoundry-incubator/admin-ui/issues/89 the issue was that those specific tabs try to fetch data over HTTPS, which won't work out of the box with self-signed certificates provided by bosh-lite.
Fixing it is rather easy though: you need to add ```cloud_controller_ssl_verify_none: true``` to your release manifest, which will make it look somewhat like this
```
properties:
admin_ui:
cloud_controller_uri: https://api.x.x.x.x.xip.io
cloud_controller_ssl_verify_none: true
uaa_admin_credentials:
password: admin
username: admin
ui_admin_credentials:
password: admin
username: admin
ui_credentials:
password: user
username: user
uri: admin.x.x.x.x.xip.io
```
By running
```
bosh deployment releases/<your deployment>.yml
bosh deploy
# verify and accept changes
```
Your changes will be applied and deployed. **Note** that the first time you access the admin UI the tabs that used to work like DEA and Apps might now give you an error. Just give it a couple of minutes (~5) to pull in the first data. After this all tabs should work.
@username_0 I highly doubt you're still looking into this, but anyway: ping :)
<issue_comment>username_0: :+1: Thanks for the useful info @username_1. I'm not working with it anymore, but I'm sure this is helpful for other who run into the same issue. I'll leave the issue open, as it would be great if the errands worked to register this stuff automatically... But I don't have time to look into fixing it.
<issue_comment>username_2: @username_0 the errands were added in v4 (you were using v3). Since then v5 has been released. I [have updated](https://github.com/cloudfoundry-community/admin-ui-boshrelease/commit/54870e6638c88f55c4c080495973ee8880554764) the README to point to v5.<issue_closed>
<issue_comment>username_0: Thanks @username_2! |
<issue_start><issue_comment>Title: Tidy Append has broken Listview in widgets_overview_app.py
username_0: When the dialog is confirmed, the key of the selected item in dlistView is used to select the same item in listView on the main page. This relies on the same set of keys being generated for both ListViews.
The code of listView needs to be the same as dlistView, but better still, new_from_list() could use enumerate to generate a set of integer keys starting at zero (see the sketch below). Integer keys starting at 0 are useful because they allow apps to access internal arrays using the same index value as the remi ListViews.<issue_closed>
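A rough sketch of what that could look like (illustrative only, not remi's actual implementation):
```python
# Hypothetical new_from_list using enumerate for integer keys
@classmethod
def new_from_list(cls, items, **kwargs):
    obj = cls(**kwargs)
    for key, item in enumerate(items):
        # integer keys starting at 0, usable as plain list indices by the app
        obj.append(ListItem(item), key=key)
    return obj
```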
<issue_comment>username_1: Hi Ken, now should be fixed. I used enumeration as you suggested. |
<issue_start><issue_comment>Title: No maintainer for semantic-release/changelog?
username_0: ## Current behavior
Non-code issue. I opened a PR against `semantic-release/changelog`, but haven't gotten any feedback from a maintainer for almost a year now. I did get feedback from some contributors.
## Expected behavior
Get feedback on open PR's.<issue_closed> |
<issue_start><issue_comment>Title: TabbedPage
username_0: Hi, first, i have to say that this is a great project :).
I am wondering that tabbedPage is acting kind of weird.
TabbedPage is looking normal at portrait mode

But in landscape mode, tabs are should be under the NavBar(Toolbar, or ActionBar) right?

<issue_comment>username_1: That's definitely a bug. I hadn't tested with changing orientation. I'll take a look this weekend.
<issue_comment>username_1: Can you describe how you have your pages laid out? I'm assuming it's the following?
MasterDetailPage
+ Detail = TabbedPage
+ Master = Menu
<issue_comment>username_1: So I looked into this further and it appears to be correct in how it's working now. If you disable the AppCompat package and return to normal Xamarin.Forms implementations, you get the same behavior.
The issue is that AppCompat activities (and this goes for any activity that uses the SupportActionBar) behave the way they do pre-21. Since tab management from ActionBar has been deprecated, what you're seeing is the old behavior.
If you want the new tab behavior in 21+, you'll have to use the TabLayout control which means creating your own renderer and not using the default TabbedPage.
Since this deviates from the normal Xamarin.Forms abstraction, I'm sorry to say this won't be fixed. The project goal is to maintain the same behaviors for Xamarin.Forms while providing as much Material Design as possible. Please see the Project Goals on the [wiki](https://github.com/nativecode-dev/oss-xamarin/wiki).<issue_closed> |
<issue_start><issue_comment>Title: pronly-true-warning: changed includes file reported in parent file
username_0:
<issue_comment>username_1: Docs Build status updates of commit _[13d596e](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/commits/13d596e643fc680fa2791ccd34278a8acb83ab7a)_:
### :white_check_mark: Validation status: passed
File | Status | Preview URL | Details
---- | ------ | ----------- | -------
[E2E_DocsBranch_Dynamic/pr-only/includes/skip-level.md](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/blob/pronly-on-skipLevel-warning/E2E_DocsBranch_Dynamic/pr-only/includes/skip-level.md) | :white_check_mark:Succeeded | [View](https://ppe.docs.microsoft.com/en-us/E2E_DocFxV3/pr-only/skiplevel?branch=pr-en-us-27884) |
For more details, please refer to the [build report](https://opbuilduserstoragepubdev.blob.core.windows.net/report/2021%5C12%5C23%5C8b165afe-a866-8c48-e5c3-ead94440f27e%5CPullRequest%5C202112232107589967-27884%5Cworkflow_report.html?sv=2020-08-04&se=2022-01-23T21%3A08%3A29Z&sr=b&sp=r&sig=KfYveq%2B2ch6WglBzqJleTCjeVej02n62Zz5ho8HFnM4%3D).
**Note:** Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the [broken link report](https://docs-portal-pubdev-wus.azurewebsites.net/#/repos/8b165afe-a866-8c48-e5c3-ead94440f27e?tabName=brokenlinks).
For any questions, please:<ul><li>Try searching the docs.microsoft.com <a href="https://review.docs.microsoft.com/en-us/help/?branch=main">contributor guides</a></li><li>Post your question in the <a href="https://teams.microsoft.com/l/channel/19%3a7ecffca1166a4a3986fed528cf0870ee%40thread.skype/General?groupId=de9ddba4-2574-4830-87ed-41668c07a1ca&tenantId=72f98bf-86f1-41af-91ab-2d7cd011db47">Docs support channel</a></li></ul> |
<issue_start><issue_comment>Title: .numeric() issue
username_0: I have a price field and I'm having an issue where I want the max price to be 999999.99 with a max of 6 digits pre decimal and 2 digits after. If I type a 6 digit number then type the decimal, I can't add numbers after the decimal. The only work around I found is to set the maxPreDecimalPlaces to 7 but then this allows the user to enter a 7th digit pre decimal which I can't allow.
I've tried every combination of options I could think of and none of them allowed me to type after the decimal once the pre-decimal limit, max digits, or max number was reached.
Here is the code I'm using:
$("#price").numeric({ allowMinus: false, allowThouSep: false, allowDecSep: true, maxDecimalPlaces: 2, maxPreDecimalPlaces: 6, , maxDigits: 8 });
If I enter 123456. then I can no longer enter the decimal digits unless I change the maxDigits to 9 or maxPreDecimalPlaces to 7.<issue_closed> |
<issue_start><issue_comment>Title: WriteableBitmap: doesn't display anything on Linux (and I don't know about other platforms)
username_0: **Describe the bug**
WriteableBitmap doesn't display anything on Linux.
The same code does display an image on Windows.
No platform-specific code is used.
**To Reproduce**
Steps to reproduce the behavior:
1. Clone this repo: https://github.com/OpenRakis/Spice86
2. Grab a supported DOS game, like Dune or Prince of Persia
3. Run it with the following command line
```bash
Spice86 -e /path/to/executable
```
4. See a black image.
**Expected behavior**
The same image should display on every supported platform.
**Screenshots**
✓ Windows:

❌ WSL2 (Ubuntu 20.04) and Xubuntu 20.04 (or any other desktop Linux distro):

? - Other platforms
I don't have a Mac for example.
**Desktop (please complete the following information):**
- OS: [Ubuntu 20.04]
- Version [0.10.13]
**Additional context**
Here is the code that updates the image, and at the end invalidates the visual on the UI thread (VideoBufferViewModel, view is VideoBufferView.xaml with the associated code-behind file) :
```csharp
public unsafe void Draw(byte[] memory, Rgb[] palette) {
if (_disposedValue || UIUpdateMethod is null || Bitmap is null) {
return;
}
int size = Width * Height;
long endAddress = Address + size;
if (_appClosing == false) {
using ILockedFramebuffer buf = Bitmap.Lock();
uint* dst = (uint*)buf.Address;
switch (buf.Format) {
case PixelFormat.Rgba8888:
for (long i = Address; i < endAddress; i++) {
byte colorIndex = memory[i];
Rgb pixel = palette[colorIndex];
uint rgba = pixel.ToRgba();
dst[i - Address] = rgba;
}
break;
case PixelFormat.Bgra8888:
for (long i = Address; i < endAddress; i++) {
byte colorIndex = memory[i];
Rgb pixel = palette[colorIndex];
uint argb = pixel.ToArgb();
dst[i - Address] = argb;
}
break;
default:
throw new NotImplementedException($"{buf.Format}");
}
Dispatcher.UIThread.Post(() => UIUpdateMethod?.Invoke(), DispatcherPriority.MaxValue);
}
}
```
Both on Linux and Windows, the case being run is always PixelFormat.Bgra8888.
Using another pixel format (pixel.ToRgba, or pixel.ToBgra) only breaks the Windows use case.
Using random values does display something, so bindings and invalidation are OK.
<issue_comment>username_1: WriteableBitmap works for me on all platforms, so I know the basics work. You should use RowBytes to get the stride of each line; it is not guaranteed to be the same as Width, but I doubt this causes your problem. I use AlphaFormat.Premul; maybe there is an issue with AlphaFormat.Unpremul.
<issue_comment>username_0: I tried all combinations I could think of, same results.
Alpha values are set to FF in the uint value.
I don't have a clue, other than that *on some Linux machines* it does work.
So this suggests, along with the fact that it always works on Windows, that the error is *not* in user code.
<issue_comment>username_1: RowBytes should be respected as it is defined by the API; there is no guarantee that the scan lines of the WriteableBitmap are contiguous. It is quite possible that the backend will have RowBytes larger than Width, maybe for alignment or other perf reasons, so your pixels could get out of sync.
You could try disabling gpu in app builder to see if that makes any difference.
<issue_comment>username_0: Neither Image nor WriteableBitmap exposes a RowBytes API.
The online documentation doesn't mention it either.
Could you give me an example?
<issue_comment>username_1: ```
unsafe
{
fixed(uint* pPallete = pallete)
fixed (byte* pBuffer = buffer)
{
byte* src = pBuffer;
uint* pal = pPallete;
using (var pixels = bm.Lock())
{
uint* p0 = (uint*)pixels.Address;
int stride = pixels.RowBytes / 4;
for (int y = 0; y < h; y++)
{
uint* p = p0 + y * stride;
byte* endp = src + w;
while (src < endp)
{
var val = *src++;
*p++ = *(pal+val);
}
}
}
}
}
```<issue_closed>
<issue_comment>username_0: Corrected on my end.
Previously I was able to reproduce this issue with WSL2 and the proprietary nvidia driver.
The issue doesn't manifest itself anymore with this new code:
```csharp
public unsafe void Draw(byte[] memory, Rgb[] palette) {
if (_appClosing || _disposedValue || UIUpdateMethod is null || Bitmap is null) {
return;
}
int size = Width * Height;
long endAddress = Address + size;
using ILockedFramebuffer pixels = Bitmap.Lock();
uint* firstPixelAddress = (uint*)pixels.Address;
int rowBytes = pixels.RowBytes / 4; // stride in pixels; not guaranteed to equal Width
long memoryAddress = Address;
uint* currentRow = firstPixelAddress;
for(int row = 0; row < Height; row++) {
uint* startOfLine = currentRow;
uint* endOfLine = currentRow + Width;
for(uint* column = startOfLine; column < endOfLine; column++) {
byte colorIndex = memory[memoryAddress];
Rgb pixel = palette[colorIndex];
uint argb = pixel.ToArgb();
if(pixels.Format == PixelFormat.Rgba8888) {
argb = pixel.ToRgba();
}
*column = argb;
memoryAddress++;
}
currentRow += rowBytes;
}
if (!IsDrawing) {
IsDrawing = true;
}
Dispatcher.UIThread.Post(() => UIUpdateMethod?.Invoke(), DispatcherPriority.MaxValue);
}
```
it also still works on Windows and on Linux with the nouveau driver for nvidia hardware.
https://github.com/OpenRakis/Spice86/pull/85
Issue closed. Thank you ! |
<issue_start><issue_comment>Title: No terrain visible in the 3 runnable scenes
username_0: Godot Engine v3.2.3.stable.custom_build - https://godotengine.org
OpenGL ES 3.0 Renderer: GeForce RTX 2070 SUPER/PCIe/SSE2
Windows 10
Fresh git clone of Voxel Tools.
Compiles OK.
Can add VoxelTerrain under Editor node search.
Fresh zip download of voxelgame
8 warnings on running blocky_game/blocky_game.tscn - all minor
Camera moves OK.
Can open inventory with "E" key
Debug GUI reads:
Pointed voxel: ---
streaming tasks: 0
streaming active threads: 0
streaming thread count: 1
meshing tasks: 0
meshing active threads: 0
meshing thread count: 2
Main thread block updates: 0
** No terrain visible **
Apologies for what is likely a silly question, but why can't I see any terrain? Is it because you didn't include your saved terrain file in the repo? Shouldn't the fallback generator kick in?
The only terrain scene I can view OK is res://blocky_game/generator/test_generator.tscn
<issue_comment>username_0: I added a default VoxelViewer as a child node of VoxelTerrain and the terrain appears.<issue_closed> |
<issue_start><issue_comment>Title: SubjectLocation (FocalPoint) sometimes only appears on second pageload
username_0:
<issue_comment>username_1: Hi @username_2 I can't reproduce this issue anymore in current filer version 1.2.0. Do you still have this problem? Thank you for your feedback.
<issue_comment>username_2: I will not be able to confirm this in a predictable time frame. Sorry.
Let's just assume it's working if you've checked with images that were loaded from the browser's cache.<issue_closed>
<issue_comment>username_1: @username_2 I have checked and it works fine. I will close this issue. If you will find this problem again please let us know. Thank you. |
<issue_start><issue_comment>Title: Document how the user can pass Synapse credential to the Docker image
username_0:
<issue_comment>username_1: There are multiple options.
* `cli` takes synapse username and password/apikey as parameters
* mount `~/.synapseConfig` to `/root/.synapseConfig`, but this would force people to have a `~/.synapseConfig` file locally.
<issue_comment>username_0: I guess that's the easiest solution. But since it's not recommended to enter the credentials in clear text on the command line, we could recommend creating the environment variables SYNAPSE_USERNAME and SYNAPSE_API_TOKEN and importing them as environment variables, e.g. `docker run -e SYNAPSE_USERNAME ...`.
```
host$ export PLOP="hello"
host$ echo $PLOP
hello
host$ docker run -it -e PLOP ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
d72e567cc804: Pull complete
0f3630e5ff08: Pull complete
b6a83d81d1f4: Pull complete
Digest: sha256:bc2f7250f69267c9c6b66d7b6a81a54d3878bb85f1ebb5f951c896d13e6ba537
Status: Downloaded newer image for ubuntu:latest
root@5029fc4d3c9a:/# echo $PLOP
hello
```
Then there are two solutions to use these credentials in the container:
1. The client reads the environment variables and does `syn.login(username=..., api_token=...)` (I forgot the actual parameter names; see the sketch after this list) OR
2. Use an initialization script that creates *.synapseConfig* and places it in the user folder (ultimately a non-root user), then have the client use `syn.login()`. I'm using this approach for the 2014 i2b2 data node and could put that in place here too.
https://github.com/Sage-Bionetworks/nlp-sandbox-data-node-i2b2-2014/blob/develop/server/root/etc/cont-init.d/30-get-data#L9-L13
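A minimal sketch of option 1 (the parameter names on `login()` differ between synapseclient versions, so treat them as placeholders):
```python
import os

import synapseclient

# Read the credentials exported into the container via `docker run -e ...`
syn = synapseclient.Synapse()
syn.login(
    email=os.environ["SYNAPSE_USERNAME"],
    authToken=os.environ["SYNAPSE_API_TOKEN"],  # name may vary by version
)
```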
<issue_comment>username_1: I don't really have a strong preference for any of these solutions. I have implemented all of them on particular projects I'm on, to give users the choice of how to provide their Synapse login. I do, however, feel that all three of these solutions are a little bit clunky.
<issue_comment>username_1: The documentation is done, but the implementation still needs to be complete.<issue_closed> |
<issue_start><issue_comment>Title: Be able to pass params to 'all()' and 'show()'
username_0: I was playing with the library the other day, and I may be mistaken but it would be nice to be able to optionally pass params to ```all()``` and ```show()``` to do things like pagination and search for example. |
<issue_start><issue_comment>Title: allow spaces in formulas
username_0: The parser is really picky...
<issue_comment>username_1: Quite right. The usual approach:
Define a function
```
lexeme :: Parser a -> Parser a
lexeme p = p <* spaces
```
and wrap every individual parser component with it (maybe name it `l` instead, then :-)), and additionally allow `spaces` at the beginning of the parse. A short usage sketch follows.<issue_closed>
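For example (a sketch, assuming Parsec-style combinators and an existing `expr` parser):
```
symbol :: Char -> Parser Char
symbol = lexeme . char

formula :: Parser Expr
formula = spaces *> expr  -- also permit leading whitespace
```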
<issue_start><issue_comment>Title: Is it possible to place batch order for futures in binance?
username_0: I do know that there is a post regarding placing multiple orders, but that was last year. I wish to check whether this was updated so that multiple orders can now be placed on Binance. I do know that there is a futures_place_batch_order function, but it kept saying that it can only take one positional argument when called with the following format:
```
client.futures_place_batch_order(((client.futures_create_order(symbol=ticker, side=SIDE_BUY,type=ORDER_TYPE_LIMIT, quantity=noOfShares, price =buyPrice,timeInForce=TIME_IN_FORCE_GTC)),
(client.futures_create_order(symbol=ticker, side=SIDE_SELL, type=FUTURE_ORDER_TYPE_STOP_MARKET, stopPrice=stopLoss,workingType="MARK_PRICE",closePosition=True)),
(client.futures_create_order(symbol=ticker, side=SIDE_SELL, type=FUTURE_ORDER_TYPE_TAKE_PROFIT_MARKET, stopPrice=targetTwo,workingType= "MARK_PRICE",closePosition=True)))
)
```<issue_closed>
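For reference, a hedged sketch of the call shape the batch endpoint expects: `futures_place_batch_order` is keyword-only, and (assuming it forwards its params to `POST /fapi/v1/batchOrders`) it takes a `batchOrders` list of plain order dicts rather than the results of `futures_create_order`, which places each order immediately on its own. The field names below follow the Binance futures REST docs and are not verified against every library version:
```python
# Each dict describes one order; nothing is sent until the batch call
orders = [
    {"symbol": ticker, "side": "BUY", "type": "LIMIT",
     "quantity": str(noOfShares), "price": str(buyPrice),
     "timeInForce": "GTC"},
    {"symbol": ticker, "side": "SELL", "type": "STOP_MARKET",
     "stopPrice": str(stopLoss), "workingType": "MARK_PRICE",
     "closePosition": "true"},
]
client.futures_place_batch_order(batchOrders=orders)
```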
<issue_comment>username_1: Hey @username_0 ,
did you by any chance figure it out?
thanks for answer
brg
<issue_comment>username_0: Hi,
I did not manage to figure out as I realise I do not require this method in the end.
Best Regards,
Deming |
<issue_start><issue_comment>Title: Add Location.search() method
username_0:
<issue_comment>username_1: Sorry for the very late response. Could you verify that the first parameter of the method is indeed optional?
I checked the documentation, but it looks like only the second parameter (`paramValue`) to be optional, which would make it unnecessary to add another method:
* https://docs.angularjs.org/api/ng/service/$location
<issue_comment>username_0: Angular docs leave something to be desired =[
From https://github.com/angular/angular.js/blob/v1.4.x/src/ng/location.js#L392 (and also the docs, just not the method definition line)
` * @return {Object} If called with no arguments returns the parsed `search` object. If called with
* one or more arguments returns `$location` object itself.`
<issue_comment>username_0: Really, the angular docs probably get across what's needed for a javascript developer, but they aren't sufficiently accurate for direct translation into a type system =/
<issue_comment>username_1: I just checked the source code to confirm what you said and indeed you are right about it :)
I so much hate Javascript... grrrr....
<issue_comment>username_0: yeeep! Thanks for the merge. |
<issue_start><issue_comment>Title: thread 'rustc' panicked at 'found unstable fingerprints for optimized_mir
username_0: <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
First of all, thanks for all your hard work on the compiler!
I ran into the following issue while doing a `cargo build` of the following repo: [`parity-bridges-common`](https://github.com/paritytech/parity-bridges-common). I'd been `check`-ing and `build`-ing my code fine but at one point just got stuck in this state where it would panic with this message on every build.
I wasn't able to figure out how to produce a minimal example. I also wasn't able to reproduce it by just building the offending packages (e.g `millau-bridge-node` or `rialto-bridge-node` based off the error output) individually.
Unfortunately (or maybe fortunately?) I wasn't able to reproduce by building with the nightly compiler.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
$ rustc --version --verbose
rustc 1.52.0 (88f19c6da 2021-05-03)
binary: rustc
commit-hash: 88f19c6dab716c6281af7602e30f413e809c5974
commit-date: 2021-05-03
host: x86_64-unknown-linux-gnu
release: 1.52.0
LLVM version: 12.0.0
```
```
$ rustc +nightly --version --verbose
rustc 1.54.0-nightly (bacf770f2 2021-05-05)
binary: rustc
commit-hash: bacf770f2983a52f31e3537db5f0fe1ef2eaa874
commit-date: 2021-05-05
host: x86_64-unknown-linux-gnu
release: 1.54.0-nightly
LLVM version: 12.0.0
```
### Error output
I captured the [following error output](https://gist.github.com/username_0/945935054c167647795fe7e4d4f9e3c5) after running `RUST_BACKTRACE=1 cargo build`. Since the output is so large I thought it would be best to put it in a Gist.<issue_closed>
<issue_comment>username_1: Thanks for filing the bug report! We just shipped a [patch release to work around this issue](https://blog.rust-lang.org/2021/05/10/Rust-1.52.1.html) and are currently triaging related issues related to ensure each underlying problem gets resolved. Since this appears to be a duplicate of #84960, I'm going to close in favor of that issue. |
<issue_start><issue_comment>Title: JPEG images failling, being identified as MPO
username_0: Hi,
Some of our images are failing with the error "Unknown format: MPO". The images are JPEG, but Pillow for some reason thinks they are MPO. I've reported this issue to Pillow here: https://github.com/python-pillow/Pillow/issues/1138
I realize this is not your issue, it's Pillow's, but I thought I'd let you know about the problem. Maybe there's a workaround you can do, where if it detects an MPO it tries to process it as a JPEG anyway? If not, I understand. Thanks for reading.
<issue_comment>username_1: @username_0 You haven't demonstrated this issue to be Pillow's problem yet. It very well may be, but let's be sure about it first.
<issue_comment>username_0: @username_1 Right you are, I didn't intend blame, that's just my best guess based on the fact that Pillow is returning "MPO" as the format, and Pilbox only throws this particular error if the format returned is not in Pilbox's list of known formats. If not Pillow's error, the blame would probably fall on my images being invalid.
<issue_comment>username_1: @username_0 No problem. And as I said, "probably" is not particularly helpful in this case :smile:. We need to know *exactly* what is going on and why. Additionally it sounds like pilbox may want to consider adding support for MPO in the event that "real" MPO images are discovered. That's not a workaround, it's a feature add. (Disclaimer: other than its similarity to JPEG I know nothing about MPO. Furthermore, I have no low-level knowledge of any image format; I'm the Pillow fork-author and project leader speaking on behalf of Pillow. @wiredfool is your best hope of getting programming help from the Pillow team.)
<issue_comment>username_2: Admittedly, I don't know much (anything) about MPO files, but here's what Pillow supports for it:
```
Pillow identifies and reads Multi Picture Object (MPO) files, loading the primary image
when first opened. The seek() and tell() methods may be used to read other pictures
from the file. The pictures are zero-indexed and random access is supported.
```
So it looks like Pilbox could access the individual JPEG files that comprise the MPO file and Pilbox could resize those files, but I'm not sure that Pillow supports saving the correspondingly resized MPO file. If that's the case, the output for these file types would have to default to JPEG for _only_ the first image in the MPO and MPO would _not_ be an allowable output format. I suspect that would satisfy most needs, but seems less than the ideal behavior. Pilbox is really intended to return the images in format that it receives them and this would complicate the output format logic. Anyway, I'll do more research on it this weekend.
<issue_comment>username_0: I can't speak for everyone's needs, but that behavior would suit our needs just fine. As far as our clients are concerned they're giving us a JPEG and expect a JPEG back. If the problem images are truly MPO and not JPEG, they don't know it.
<issue_comment>username_2: From my perspective though, this is a client problem in that you should be validating the images and only serving the ones that are supported. I'm not sure what kind of validation you have in place for that, but it would need to be more than a simple extension check. For example, this could have just as easily been a JPS file, which I don't think Pillow supports. With that in mind, if you did want to support MPO, you could convert that to a JPEG and serve that as the source image to be resized.
Alternatively, I'd want to expand Pilbox to support more of these read-only formats, but I'm still not entirely convinced of the need yet.
<issue_comment>username_0: We don't have any validation other than MIME type/file extension checks before upload. The reason is because these images go directly from the browser to our Amazon S3 storage. The only processing that happens to these images happens via Pilbox behind a CDN. Before we did this, we had a background job triggered after each upload that loaded each image into memory and resized it with rmagick, and as much as we tried to optimize this process it was extremely slow and memory hungry. Pilbox solved this issue and we were super happy to get rid of that background worker all-together and just process images on the fly. The images are NEVER processed on our app server, only on a single Pilbox server for every app. Makes it easy to scale.
So, in order to convert from MPO to JPEG before it hits pilbox, or to load it in memory and check the EXIF/metadata, we would have to re-implement this background job (slowing down the upload workflow again, going a step backwards and costing us more to run another server process other than Pilbox). If there is a way to just make Pilbox do it's best with what it's given, we would prefer it. If that doesn't make sense for the Pilbox philosphy, let me know.
Thanks for bearing with me... :)
<issue_comment>username_2: Not sure what language your application is in, but if it's python, you could use [python-magic](https://github.com/ahupp/python-magic) to check the actual mime type without having to open the entire file in memory (it can just look at the first 1KB of file headers). Though I haven't confirmed that it can correctly identify these MPO files for you.
That said, if you can't accurately identify the file types, there's always going to be a risk that Pilbox can't support the supplied format (whether it supports MPO or not). In that case, you'll need to always expect and handle resizing errors. I do this in my applications by having a placeholder image drawn using a `:before` pseudo element, or loading the image with javascript, or various other ways using HTML/CSS/JavaScript. As I mentioned in https://github.com/username_2/pilbox/issues/35, you can also have an error/not found image that is returned via nginx/apache.
<issue_comment>username_0: The app dealing with uploads is written in Ruby. We can run background workers in other languages, but another thing for us is that we want images to be loadable as soon as possible after they're uploaded. If we have a background step, then when the user loads the page for the first time, they see that the images failed to load. With Pilbox only in the pipeline, all images load the first time, even if there is a small delay in processing. Still, you're right that this is going to be an issue with other image formats that purport to be JPEG.
So, in order to convert from MPO to JPEG before it hits Pilbox, or to load it in memory and check the EXIF/metadata, we would have to re-implement this background job (slowing down the upload workflow again, going a step backwards, and costing us more to run another server process besides Pilbox). If there is a way to just make Pilbox do its best with what it's given, we would prefer it. If that doesn't make sense for the Pilbox philosophy, let me know.
@wiredfool provided a workaround solution. Adding the following to the top of my Pilbox instance's `config.py` makes my problem images process without error:
```
from PIL import JpegImagePlugin
JpegImagePlugin._getmp = lambda x: None
```
This confirms my initial suspicion, which was that if Pillow returns it as a JPEG it would work as a JPEG.
<issue_comment>username_2: One more alternative, you can create your own custom application which catches specific errors raised from `self.render_image` and calls `self.render_image` again, but ensures that `get_argument` returns `noop` when an error has been encountered.
Here's a generic custom application:
https://github.com/username_2/pilbox#extension
If you need more guidance on how to accomplish what you're trying to achieve with the noop operation, let me know.
<issue_comment>username_2: Actually, since it wasn't much work, here's what it would look like (warning: wrote this without any testing whatsoever)
```python
#!/usr/bin/env python
import tornado.gen
from pilbox.app import PilboxApplication, ImageHandler, \
start_server, parse_command_line
from pilbox.errors import ImageFormatError
class CustomApplication(PilboxApplication):
def get_handlers(self):
return [(r"/", CustomImageHandler)]
class CustomImageHandler(ImageHandler):
def prepare(self):
self.has_error = False
@tornado.gen.coroutine
def get(self, w, h, url):
self.validate_request()
resp = yield self.fetch_image()
try:
self.render_image(resp)
except ImageFormatError:
self.has_error = True
self.render_image(resp)
def get_argument(self, name, default=None):
if name == "op" and self.has_error:
return "noop"
return super(CustomImageHandler, self).get_argument(name, default)
if __name__ == "__main__":
parse_command_line()
start_server(CustomApplication())
```
<issue_comment>username_0: Sweet, I'll try it. I was starting to write my own but my Python is rusty.
<issue_comment>username_0: I got it to work (had to change `def get` to take only one argument). When it catches the exception, it doesn't log the original error; what's the best way to fix that?
<issue_comment>username_2: Looking at the tornado code, I believe it's just calling this:
`self.log_exception(*sys.exc_info())`
You'd want to put that as the first line in the `except` block. You'll also need to `import sys` at the top of the program.
<issue_comment>username_0: @username_2 Works great. I have it working only if `fallback=noop` is present in the original args. Thanks for holding my hand through this. Two last things:
1. What's a good way to return a temporary redirect to a specific URL from my custom handler?
2. When returning a fallback image, I don't want the CDN to cache it. Can I modify the cache headers sent in that case?
<issue_comment>username_2: No problem.
1. The CustomHandler is just a Tornado RequestHandler so you can call anything it supports: [redirect](http://tornado.readthedocs.org/en/latest/web.html#tornado.web.RequestHandler.redirect)
2. That's gonna be tough, you can put a `self.set_header` call in the `except` block before you call `self.render_image`, but if `render_image` sets any headers to the contrary, you'd be out of luck. Basically, you'd have to run the internals of `self.render_image` yourself and write your own headers before drawing the content. But that's dangerous because you'd be depending on "private" methods.
<issue_comment>username_0: I'll have to rethink the caching thing then. Maybe instead of rendering a fallback, I should just redirect to the original image URL. Thanks again.<issue_closed>
<issue_comment>username_0: Hi @username_2 - I don't remember why I left this hanging but the workaround for this issue has been serving us well.
BTW I would like to thank you for all the support and for writing this library! |
<issue_start><issue_comment>Title: Add a WvW overview map
username_0: Either using the GW2 maps API or single hi-res PNG maps like I had in my WvW browser. I'll put up a proof of concept soon™.
<issue_closed> |
<issue_start><issue_comment>Title: Improperly handling 'allOf' block
username_0: I've got an issue that stems from a schema something like the one below. I can also reproduce on your web-based generator [here](http://json-schema-faker.js.org/).
```
{
"allOf": [{
"required": ["order"],
"properties": {
"order": {
"type": "string"
}
}
}, {
"required": ["id"],
"properties": {
"id": {
"type": "number"
}
}
}]
}
```
If I plug in the preceding schema, all but the last object listed in the *allOf* array is ignored. The output of the above is as follows:
```
{
"id": 75693268
}
```
It should also include the "order" property outlined in the first object. Any thoughts?
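For clarity, the expected shape would be something like (string value illustrative):
```
{
  "order": "lorem",
  "id": 75693268
}
```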
<issue_comment>username_1: Yes, I know that issue and actually your json-schema MUST work as you expect.
I'll fix that asap, thanks for your feedback!
<issue_comment>username_0: Great, thanks!
<issue_comment>username_1: It works! :beers:
http://json-schema-faker.js.org/#{%22allOf%22%3A[{%22required%22%3A[%22order%22]%2C%22properties%22%3A{%22order%22%3A{%22type%22%3A%22string%22}}}%2C{%22required%22%3A[%22id%22]%2C%22properties%22%3A{%22id%22%3A{%22type%22%3A%22number%22}}}]}
<issue_comment>username_0: That was fast! And it works great; thanks!<issue_closed> |
<issue_start><issue_comment>Title: 🛑 lowfee.xcash.rocks is down
username_0: In [`448ad79`](https://github.com/username_0/uptime/commit/448ad798ce4c393b964a12baccf368d4123e7715), lowfee.xcash.rocks (http://lowfee.xcash.rocks) was **down**:
- HTTP code: 0
- Response time: 0 ms<issue_closed>
<issue_comment>username_0: **Resolved:** lowfee.xcash.rocks is back up in [`ca5f2ea`](https://github.com/username_0/uptime/commit/ca5f2ea20beda003ecb14a6a54b6d59e01039f50).
<issue_start><issue_comment>Title: [core] Fix host tags sending
username_0: If create_dd_check_tags is enabled and custom tags are passed in datadog.conf, we have to send the dd_check and host_tags at the same time.
Otherwise they'll overwrite each other.
<issue_comment>username_1: I think we can safely remove these lines now: https://github.com/DataDog/dd-agent/blob/3042a1db079dc0771ac7d8db67ea8a70bd220014/checks/collector.py#L176-L179
Other than that it's looking good!
<issue_comment>username_0: good point
<issue_comment>username_0: Done
<issue_comment>username_1: :+1: |
<issue_start><issue_comment>Title: is possible you have 3rd party cookies disabled..(was working just fine before)
username_0: OK, it's been working again just fine for weeks, probably months now. Today I went to watch something and I got the "is possible you have 3rd party cookies disabled..." message. I researched how to enable this in Firefox and it was enabled already, so I'm not sure what happened. Any ideas? The only thing I did was install extensions in Chrome, but not in Firefox.
<issue_comment>username_1: Sorry, I don't get it.
What exactly are you trying to do? What happens? What should happen?
Websites are dynamic. They are often updated at server side, so something could stop working even if you didn't change anything.
<issue_comment>username_0: well let me take a screenshot.. I just try to play one of my movies from google play..
give me 10m and i will post screenshot
<issue_comment>username_0: 

<issue_comment>username_0: This is what I get from the Debug option in flash.
cl=92492083&ts=1430429106&c=WEB&tpmt=0&el=embedded&framer=https%253A%252F%252Fplay%2Egoogle%2Ecom%252Fstore&debug%5FplaybackQuality=small&droppedFrames=0&debug%5FsourceData=&cfps=0&videoFps=0&vq=large&screenw=1920&h=480&debug%5FflashVersion=LNX%2015%2C0%2C0%2C189&screenh=1080&cplayer=AS3&ps=play&pixel%5Fratio=1&cver=as3&w=640&eurl=https%253A%252F%252Fplay%2Egoogle%2Ecom%252Fstore&fexp=900720%2C907263%2C924638%2C933112%2C934954%2C938028%2C9406985%2C9408142%2C948124%2C952612%2C952637%2C952642%2C957201&playerw=853&iframe=1&ei=OShIVY3aCNbqqQXdk4CYDA&cr=US&playerh=480&autoplay=1&idpj=0&scoville=1&lact=%2D1&cpn=PBlcNy78aUFjKY%5FU&stageFps=24&pd=0&ldpj=%2D22&erc=1&mos=0&hl=es%5FES&debug%5Fdate=Mon%20May%204%2022%3A17%3A38%20GMT%2D0400%202015&vis=0&next%5Fid=zseq5bLhaa8&volume=11%2E560000000000002&debug%5FvideoId=zseq5bLhaa8&debug%5Ferror=%5BErrorEvent%20type%3D%22error%22%20bubbles%3Dfalse%20cancelable%3Dfalse%20eventPhase%3D2%20text%3D%22GetVideoInfoError%3AEs%20posible%20que%20tengas%20que%20habilitar%20las%20cookies%20de%20terceros%20si%20las%20tienes%20bloqueadas%2E%22%5D
<issue_comment>username_1: Now I get it, but still don't understand what's caused that. Did you changed version of freshwrapper? If yes, does downgrading solves it? Can you bisect to a commit that caused malfunctioning?
<issue_comment>username_0: No, I did not change the version, but after this happened, I did. When I took this screenshot it was already on the new version, but it was happening on the older one too.
The only thing I remember doing is adding Chromium plugins, but not to Firefox...
<issue_comment>username_1: You can try to remove `~/.config/freshwrapper-data` directory contents. Flash player stores data there, including keys for protected content. Probably it's worth to backup it before you delete that directory. If Flash cookies were used, directory removal will reset them.
<issue_comment>username_0: good idea, I'll try that when I get home from work and let you know.. *cruses fingers!*
<issue_comment>username_0: unfurtunatelly that did not work
<issue_comment>username_1: Indeed. I've finally got a movie in Play store. It plays for couple of seconds, then stops with an error.
<issue_comment>username_0: hello, did you figure out the problem? just wondering. :)
<issue_comment>username_1: Nope.
I've tried to decompile the player swf. There is one place where that particular error is displayed, but that function could be called from a couple of other places, which in turn could be called from a number of places. Probably that way leads nowhere, but I think the key is somewhere inside the player swf. From the network timeline in Firefox I can tell the player decides to stop requesting the stream by itself; there are no failed requests near that time.
<issue_comment>username_0: hmm.. this also only started to happen like one week a go..a couple days before i posted this.
<issue_comment>username_1: I had no testcase before recently, so can't tell whenever it worked at all in my case. Now I see it doesn't. As for recent changes in freshwrapper itself, I tested old versions, they still don't work, so it's more likely external reason.
There can be bugs in freshwrapper which were hidden before and are exposed now that something changed in Play's player. I don't know. The only way to figure that out is to try to guess what is wrong, and then fix it. A trial-and-error approach. The hard part is that I don't have experience in ActionScript, especially in decompiled sources.
<issue_comment>username_1: So there is something related to `crossdomain.xml`. Flash uses that file to enforce request restrictions.
From comparing behavior of the Linux and Windows versions, I've found that the Windows version requests `https://www.youtube.com/crossdomain.xml`, `http://www.youtube.com/crossdomain.xml`, and `https://i.ytimg.com/crossdomain.xml`, while the Linux version requests `https://www.youtube.com/crossdomain.xml` and `https://s.youtube.com/crossdomain.xml`.
So the difference is request to `https://s.youtube.com/crossdomain.xml`. Immediately after that request, `drm.auth` error is signaled, and playback stops.
Can't say why is that so. Content of `crossdomain.xml` from both `www.youtube.com` and `s.youtube.com` is essentially the same.
<issue_comment>username_1: Noticed request to `http://www.youtube.com/crossdomain.xml`. That request is present in Chrome on Windows, but absent in Firefox on Linux. However, tracing shows that player does request, which is aborted by browser (most probably due to forbidden http requests on https pages). Blocking that exact URL in Chrome on Windows gives similar error.
Need to find a way to change http/https request policy in Firefox.
<issue_comment>username_1: www.youtube.com uses HSTS (HTTP Strict Transport Security) to redirect all http requests to https. And something bad happens inside Firefox when plugin requests http link — plugin just receives an error instead of redirection.
<issue_comment>username_1: Found temporary workaround:
Using the [Enforce Encryption](https://addons.mozilla.org/en-US/firefox/addon/enforce-encryption/) Firefox addon it's possible to control whether redirects happen. To do so, install the addon, then open https://www.youtube.com, click on the icon in the address bar, and click the "more information" button. In the dialog that appears, on "Security" under the "Privacy & History" label, there is a line named "Encrypted connections enforced?" with a "Stop enforcing" button. Click that button so the value becomes "No". Then the player on https://play.google.com/movies should work.
<issue_comment>username_1: So I conclude, freshplayerplugin does nothing wrong, but Firefox refuses to download requested URL. Although it's possible to bypass browser and do networking directly, I'd like to avoid that.
<issue_comment>username_1: (reported upstream: https://bugzilla.mozilla.org/show_bug.cgi?id=1223114)<issue_closed>
<issue_comment>username_0: Was this ever fixed? I am going to close this; feel free to re-open
<issue_comment>username_1: No, it doesn't look fixed. I personally have done nothing since reporting the bug to Firefox's bug tracker. And there has been no activity on the other side aside from checking the test plugin.
<issue_start><issue_comment>Title: Option to Hide Exit Icon
username_0: I would like another "Control Option" that would hide the "exit" icon displayed when reading.
I frequently use the "scroll down one page" and then "scroll up one page" icons to read and then review what I just read.
The cursor can wander because I get so involved with what I am reading. More often than I would like, I inadvertently click the “exit” icon, which is really not what I want to do since it is so distracting.
BTW, “Scroll To Top” is an exceptional extension!
<issue_comment>username_1: hi Stilbon, I'll keep this in mind for the new release.
Thanks
<issue_comment>username_1: Going to add an option on the Options page to select one of the configurations shown below.
<issue_closed> |
<issue_start><issue_comment>Title: Fix for epub rejected by Google Play Books, EPUBCheck
username_0: * Updated the build script to use latest pandoc image with latex support
* Fixed some minor issues in the markup and svg files
* Would fix the issue #420
<issue_comment>username_0: @username_1 Please review Thanks.
<issue_comment>username_1: I can't actually test this on my m1, but I'll YOLO it in, cut a release and see. Thanks @username_0 !
<issue_comment>username_0: @username_1 I see that the new release does not have the epub file attached as an artefact. Could you please check if there is some error during book generation?
Thanks
<issue_comment>username_1: Yes, this is because I moved the project away from Travis but haven't got around to setting it up so GH actions creates the book and puts it in a release. Still need to figure it out actually! Sorry, I will find time to look at this at some point, just struggling with other commitments right now
<issue_comment>username_0: @username_1 No worries..
Please let me know if there is any way I can help in setting up GH Actions. |
<issue_start><issue_comment>Title: i want to inject a new line between error messages.
username_0: how can i?
<issue_comment>username_1: The error messages are concatenated and I won't change that behavior. So basically, there is no way of separating them and I will not change my code either.
On the other hand, if you find the message to be too long, you could display only the last error. See the [Wiki - Global Options](https://github.com/username_1/angular-validation/wiki/Global-Options) with the option of `displayOnlyLastErrorMsg: true`
<issue_comment>username_1: If you want to make a pull request though, I might accept it... but I just don't want to spend time on this myself.
<issue_comment>username_1: Added a new [Global Option](https://github.com/username_1/angular-validation/wiki/Global-Options) named `errorMessageSeparator`, this separator is HTML based and so if you want a new line, you would use `<br/>`.
Your code could look like this:
```javascript
var vs = new validationService({ errorMessageSeparator: "<br/><li>" });
```
Please note that the Validation Summary will also contain the extra HTML code, and to display proper HTML with an `ngRepeat` you will need to use `ngSanitize` and `ng-bind-html`; a sketch follows.<issue_closed>
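A minimal sketch of that setup (the module name `ghiscoding.validation` is assumed from this library's readme; adjust to your app):
```js
// ngSanitize must be a module dependency for ng-bind-html to render
// the separator's HTML instead of escaping it.
var myApp = angular.module('myApp', ['ngSanitize', 'ghiscoding.validation']);

// Template side, shown here as a comment:
// <li ng-repeat="error in vm.validationSummary" ng-bind-html="error.message"></li>
```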
<issue_comment>username_0: thanks ^^ |
<issue_start><issue_comment>Title: Help request: status_code=500
username_0: I've been poking at this all day with no solution, so I'm opening an issue this morning to ask for help QwQ

Tried cloning the setu repo:

After commenting out the try-except, the error is as follows:

A solution would of course be greatly appreciated, but even just telling me the cause would teach me something, so thanks a lot either way!<issue_closed>
<issue_start><issue_comment>Title: Timed out while waiting for handshake
username_0: Hi, I would like to upload a file after grunt-less compiles my files.
But I get this error:
```
Warning: Connection :: error :: Error: Timed out while waiting for handshake Use --force to continue.
```
This is my gruntfile part:
```
watch: {
less: {
files: [
"css/**/*.less"
],
tasks: [
"less:development",
"csso:minify",
"sftp:deploy"
]
}
},
sftp: {
deploy: {
files: {
"./": "*.css"
},
options: {
"path": "<%= sftpCredentials.path_to_upload %>",
"host": "<%= sftpCredentials.host %>",
"username": "<%= sftpCredentials.user %>",
"port": "21",
"password": "<%= sftpCredentials.password %>",
"srcBasePath": "/mybasepath/",
"createDirectories": false,
"readyTimeout": "100000"
}
}
},
```
secret.json:
```
{
"host": "88.333.444.555",
"user": "defaultftp",
"password": "mypwd",
"path_to_upload": "/Users/alessandrominoccheri/Sites/myclient/css/backend/"
}
```
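One detail I'm unsure about while writing this up: the options set `"port": "21"`, but as far as I know SFTP runs over SSH, which normally listens on port 22; 21 is plain FTP, so an SSH handshake there would be expected to time out.
```js
// Hypothetical tweak to the sftp options above:
options: {
    // ...
    port: 22 // SSH/SFTP default; 21 is the FTP port and never answers an SSH handshake
}
```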
How can I fix this?
<issue_comment>username_1: Same thing with 0.12.2 on win7 |
<issue_start><issue_comment>Title: docker-credential-desktop issue
username_0: ### Version
0.0.4
### What steps are needed to reproduce the behavior?
* I uninstalled my previous version of Docker Desktop on Windows 10, but this did not completely remove the .docker configuration set on my system.
* I then installed stevedore and restarted my Windows 10 machine
* I went into an existing docker directory on my computer, which has an already tested and working docker-compose.yml file.
* I typed in `docker compose up -d` to run it, but my bash terminal responded with the following error
```bash
$ docker compose up -d
[+] Running 0/0
- phpmyadmin Pulling 0.1s
- db Pulling 0.1s
error getting credentials - err: exec: "docker-credential-desktop": executable file not found in %PATH%, out: ``
```
What should I do?
### What you expected?
I expected my docker compose command to fully work like it did prior with Docker Desktop for Windows
### What happened?
It stopped and threw an error
### Additional context
_No response_
<issue_comment>username_1: I think you need to remove `C:\Users\<your username>\.docker\config.json`. That file still has some leftover configuration from Docker Desktop that no longer works, because the executables were removed when you uninstalled Docker Desktop.<issue_closed>
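For reference, the leftover entry in that file usually looks like this (a sketch; your exact contents may differ):
```json
{
  "credsStore": "desktop"
}
```
Deleting the `credsStore` entry (or the whole file, if nothing else is in it) stops the Docker CLI from looking for the `docker-credential-desktop` helper.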
<issue_start><issue_comment>Title: Unable to build according to directions on arm64
username_0: Hey Google / esteemed j2objc authors,
I'm excited to try j2objc, but I am running into issues following the build guide. On the [_Building J2ObjC_](https://developers.google.com/j2objc/guides/building-j2objc#building_j2objc_2) step, it refers to `make` commands such as `make dist`, and so forth... and I also saw a suggestion to build a distribution via `scripts/build_distribution.sh`.
I've tried all of these routes, but no matter what I do, I arrive at:
```
building jre_emul.jar
building jre_emul-src.jar
make[1]: ../scripts/gen_java_source_jar.py: No such file or directory
make[1]: *** [/Users/sam/Workspace/j2objc/jre_emul/build_result/jre_emul-src.jar] Error 1
```
I've tried on `master` latest, I've tried at `2.8`, and so on. Surely there's something obvious I'm missing here? `PROTOBUF_ROOT_DIR` is defined, so is `JAVA_HOME`. I'm running on Zulu JDK 11. Protobuf version is `3.19.4`, at latest.
<issue_comment>username_0: note: I would use the distribution release, which works great, but I am not able to run the simulator with a toy app that depends on ObjC generated by j2objc. I get:
```
ld: warning: directory not found for option '-F/Users/sam/Libraries/j2objc/latest/frameworks'
ld: in /Users/sam/Libraries/j2objc/latest/lib/libjre_emul.a(IOSClass.o), building for iOS Simulator, but linking in object file built for iOS, for architecture arm64
```
As I understand it, the distributions no longer contain the full set of symbols, so this makes sense. Thus I set about building j2objc at trunk, but ran into this issue.
FWIW I am an advanced user of Bazel and if there's an easy way to accomplish this via Bazel, I'm happy to go that route.
<issue_comment>username_0: @username_1 any ideas?
<issue_comment>username_1: You just upgraded to MacOS 12.3, right? Apple decided to stop distributing /usr/bin/python (Python 2.7), so the 12.3 upgrade removed it from your system. Feel free to share any feelings you have about that decision with them. ;-)
Unfortunately, one can't always just run 2.7 Python scripts using /usr/bin/python3, as the two major versions aren't compatible. So doing so fixes `scripts/gen_java_source_jar.py`, but as the build continues, `scripts/gen_resource_source.py` fails with a Python TypeError. If you have any Python experience, consider fixing it (it looks straightforward) and [submitting a pull request to contribute your fix](https://developers.google.com/j2objc/guides/how-to-contribute).
A workaround that avoids changing these scripts is to [install Python 2.7 using HomeBrew](https://docs.python-guide.org/starting/install/osx/), then add a softlink so /usr/bin/python points to your HomeBrew python binary (the default is /usr/local/bin).
I should be able to update the scripts next week, if no one beats me to it with a pull request.
<issue_comment>username_0: @username_1 thanks for the quick reply! That makes sense, and yes, that is, well, Apple-esque. I'm familiar with Python and happy to submit a PR over the weekend; I suspect you will begin hearing about this a lot (this is a new machine, not an update 😉)
the brew guide doesn't seem to work anymore, i'm not sure why. maybe it's because i'm on M1? running
```
brew install python@2
```
gave me
```
➜ j2objc git:(master) brew install python@2
Warning: No available formula with the name "python@2". Did you mean ipython, bpython, jython or cython?
==> Searching for similarly named formulae...
These similarly named formulae were found:
ipython bpython jython cython
To install one of them, run (for example):
brew install ipython
==> Searching for a previously deleted formula (in the last month)...
Error: No previously deleted formula found.
==> Searching taps on GitHub...
Error: No formulae found in taps.
```
womp womp. So I installed [MacPython 2.7.18 via `python.org`](https://www.python.org/downloads/release/python-2718/). Back in `j2objc`, I ran `make clean` and then `make dist`, which gave me:
```
building jre_emul.jar
building jre_emul-src.jar
make[1]: ../scripts/gen_java_source_jar.py: No such file or directory
make[1]: *** [/Users/sam/Workspace/j2objc/jre_emul/build_result/jre_emul-src.jar] Error 1
make: *** [jre_emul_jars_dist] Error 2
```
Same as before. To check sanity, I ran:
```
➜ j2objc git:(master) which python
/usr/local/bin/python
➜ j2objc git:(master) python --version
Python 2.7.18
➜ j2objc git:(master) which python2
/usr/local/bin/python2
➜ j2objc git:(master) which python2.7
/usr/local/bin/python2.7
```
which seems to cover all the options the `Makefile` might be looking for. Is there an alias it needs beyond what's present above, or could something else be wrong?<issue_closed>
<issue_start><issue_comment>Title: MultiChart can corrupt display by re-using charts.
username_0:
<issue_comment>username_0: @username_1 Please Review
<issue_comment>username_1: @username_0 Looks good, however which issue # does this fix specifically? I have an idea of what it might be but would like to make sure.
<issue_comment>username_0: There isn't an issue for it - I just spotted it happening in master. |
<issue_start><issue_comment>Title: :book: Fix the docs for Automatic checks for dependency
username_0: * **Please check if the PR fulfills these requirements**
- [ ] Tests for the changes have been added (for bug fixes / features)
- [x] PR title follows the guidelines defined in https://github.com/ossf/scorecard/blob/main/CONTRIBUTING.md#pr-process
* **What kind of change does this PR introduce?** (Bug fix, feature, docs update, ...)
Doc
* **What is the current behavior?** (You can also link to an open issue here)
* **What is the new behavior (if this is a feature change)?**
Fixed the docs for automatic checks for dependency
* **Does this PR introduce a breaking change?** (What changes might users need to make in their application due to this PR?)
None
* **Other information**:
None |
<issue_start><issue_comment>Title: Feature/optgroup support
username_0: References #370
@username_1 this would be the first set of feature with which adding is not any danger at all I feel.
However there's still one thing that I wouldn't see as "that awesome".
When your data-structure "optgroup" is empty or null, it will create an optgroup that has no title. While this is somewhat OK, it would likely be more valid to put those empty optgroup entries within no optgroup block at all.
A further alternative is to provide yet another option key, `optgroup_default`, where you could define what happens to empty optgroup elements. I.e. `'optgroup_default' => 'Others'` would name the empty group "Others".
The above functionality is already implemented in my branch. You can check it out here:
https://github.com/username_0/DoctrineModule/blob/feature/optgroup-default-container/src/DoctrineModule/Form/Element/Proxy.php#L617-L630
The downside to this is that it is quite a lot of custom code. While fully functional, you may consider this too custom and not for a wide enough audience? Just give me your thoughts. If it's considered OK, I'll write tests for those as well and follow up with another PR after this.
<issue_comment>username_1: In my opinion is OK! :)
<issue_comment>username_2: @username_0 awesome! Thanks for this feature :+1:
<issue_comment>username_0: Added the default group handling. Still missing the documentation for that, though. Also need more feedback on the proper code styling of `(string)$foo` vs. `(string) $foo`, as well as docblocks of `@var string` vs. `@var string|null`.
<issue_comment>username_1: Ok! :)
ping @username_2 for the last check :P
Thanks!
<issue_comment>username_2: :100: :ok_hand:
<issue_comment>username_1: Thanks @username_0 for this feature and @username_2 for the help! :)
<issue_comment>username_0: Thanks for accepting it ;) more to come... |
<issue_start><issue_comment>Title: Internal: disallow flow warnings
username_0: We currently have 1 flow warning on master. We didn't catch this because we do allow warnings.

## Test Plan
CI checks pass |
<issue_start><issue_comment>Title: Modify module_directive to be a supertype
username_0: This is built with tree-sitter v0.20.4 and addresses ambiguities that
arise when analyzing a module_directive AST without the full context of
a CST.
This addresses https://github.com/tree-sitter/tree-sitter-java/issues/103.
Checklist:
- [ ] All tests pass in CI.
- [ ] There are sufficient tests for the new fix/feature.
- [ ] Grammar rules have not been renamed unless absolutely necessary.
- [ ] The conflicts section hasn't grown too much.
- [ ] The parser size hasn't grown too much (check the value of STATE_COUNT in src/parser.c).
<issue_comment>username_1: Nice work @username_0 ⚡ |
<issue_start><issue_comment>Title: [automation] update elastic stack version for testing 7.17.2-15b5bc7a
username_0: ### What
Bump stack version with the latest one.
### Further details
[start_time:Thu, 17 Mar 2022 05:13:20 GMT, release_branch:7.17, prefix:, end_time:Thu, 17 Mar 2022 19:23:04 GMT, manifest_version:2.0.0, version:7.17.2-SNAPSHOT, branch:7.17, build_id:7.17.2-15b5bc7a, build_duration_seconds:50984]
<issue_comment>username_0: ## :robot: GitHub comments
To re-run your PR in the CI, just comment with:
- `/test` : Re-trigger the build.
- `/hey-apm` : Run the hey-apm benchmark.
- `/package` : Generate and publish the docker images.
- `/test windows` : Build & tests on Windows.
- `run` `elasticsearch-ci/docs` : Re-trigger the docs validation. (use unformatted text in the comment!)
<!--COMMENT_GENERATED_WITH_ID_comment.id-->
<issue_comment>username_1: /test |
<issue_start><issue_comment>Title: render tex errors in red instead of throwing
username_0: https://github.com/Khan/KaTeX/issues/237
<issue_comment>username_0: looks like we need this: https://github.com/Khan/KaTeX/issues/425
<issue_comment>username_0: until then, we should just copy and fix this function on our own.
<issue_comment>username_0: ended up reverting: https://github.com/twosigma/beaker-notebook/pull/4696
new plan:
1) forget vendorizing and altering auto-render.
2) for TeX cells, do it like the PR above did: show it as a red error, not with a popup. `throwOnError: false` seems to work here, so keep using it.
3) for text cells, if there is a math error then show the bad math source in red and put the error in a tooltip (appears on hover); a sketch of both follows.<issue_closed>
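A rough sketch of both paths (`katex.render`, `throwOnError`, and `errorColor` are real KaTeX API; the DOM handling around them is assumed):
```js
// TeX cells: let KaTeX render the parse error itself, in red.
katex.render(texSource, texCellEl, { throwOnError: false, errorColor: '#cc0000' });

// Text cells: keep throwing, catch the error, show the raw source in red
// with the message as a hover tooltip.
try {
  katex.render(mathSource, span);
} catch (e) {
  span.textContent = mathSource; // show the bad source verbatim
  span.style.color = 'red';
  span.title = e.message;        // tooltip appears on hover
}
```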
<issue_start><issue_comment>Title: Is it possible to disable text scaling in BottomNavigationBar?
username_0: So I'm using BottomNavigationBar in my app but I would like to not have text scaled on selected tabs. Is there a way to do that?

<issue_comment>username_1: There is no built-in way to do that. You can only edit `bottom_navigation_bar.dart` yourself and set it to a static font size.<issue_closed>
<issue_start><issue_comment>Title: I cant set myself as Admin
username_0: Hello,
I have the problem that I can't set myself as an admin on the Avorion server.
I already tried to set up the admin.xml, but it just gets wiped clean after I start the server.
And I can't use the Docker terminal to pass any commands through to the server.
Is there any other way to set me as admin??
I use Docker on a Synology DS218+.
Thanks for the help.
<issue_comment>username_1: Is admin.xml getting wiped clean despite you mounting the data volume?
<issue_comment>username_0: yes
<issue_comment>username_0: all the other files are fine though
<issue_comment>username_1: I'm not sure what the cause is for that. It sounds like a server bug since all the other files are fine. You could try denying write permissions on the file. I don't know if that would cause a different problem though.
I'm not familiar with how docker is run on a synology, have you tried passing in the `--admin steamId` argument to the `ENTRYPOINT` of the docker container?
<issue_comment>username_0: I will try denying the permission after changing the file in the morning.
And how do i pass the argument to the entrypoint?
I tried with exec and start but neither worked
<issue_comment>username_1: It's passed in when you create the container. `docker run rfvgyhn/avorion --admin steamId` Or if you're using docker-compose, you can use the `command` option (`command: --admin steamId`).
<issue_comment>username_0: I will try that tomorrow and let you know if it worked
<issue_comment>username_0: Changing the permission after editing the admin.xml did nothing; the server just made a new one.
And I tried creating the container with the admin flag, but it just tells me "unknown flag: --admin".
Here is what I tried:
`docker run -d --name avorion \
-p 27000:27000 \
-p 27000:27000/udp \
-p 27003:27003/udp \
-p 27020:27020/udp \
-p 27021:27021/udp \
-p 27001:27001 \
-v /host/path/saves:/home/steam/.avorion/galaxies/avorion_galaxy \
--admin 76561198066028464 \
rfvgyhn/avorion`
i also tried
`docker run -d --name avorion --admin 76561198066028464 \
-p 27000:27000 \
-p 27000:27000/udp \
-p 27003:27003/udp \
-p 27020:27020/udp \
-p 27021:27021/udp \
-p 27001:27001 \
-v /host/path/saves:/home/steam/.avorion/galaxies/avorion_galaxy \
rfvgyhn/avorion`
am i doing it wrong??
<issue_comment>username_0: I know what was wrong with the admin.xml now. I don't understand why, but apparently I can't declare my in-game name as admin.
This works
`<admin name="admin" id="76561198066028464"/>`
This does not work
`<admin name="username_0" id="76561198066028464"/>`<issue_closed>
<issue_comment>username_1: Just for future reference, when you want to pass in additional arguments, they go after the image name.
```
docker run -d --name avorion \
-p 27000:27000 \
-p 27000:27000/udp \
-p 27003:27003/udp \
-p 27020:27020/udp \
-p 27021:27021/udp \
-p 27001:27001 \
-v /host/path/saves:/home/steam/.avorion/galaxies/avorion_galaxy \
rfvgyhn/avorion \
--admin 76561198066028464
``` |
<issue_start><issue_comment>Title: output PED
username_0: The PED file is a white-space (space or tab) delimited file: the first six columns are mandatory:
Family ID
Individual ID
Paternal ID
Maternal ID
Sex (1=male; 2=female; other=unknown)
Phenotype
from http://pngu.mgh.harvard.edu/~purcell/plink/data.shtml
If you want all 6 columns in the AffyPipe output ped file, modify the line below (assuming all samples are unrelated; no families):
```python
file_ped.write('FAM '+allids[j]+' 0 0 0 0 '+' '.join(G[j])+'\n')
```
becomes
```python
file_ped.write(allids[j]+' '+allids[j]+' 0 0 0 0 '+' '.join(G[j])+'\n')
```
<issue_closed>
<issue_comment>username_1: Dear username_0,
actually, this was intentional. Since the variable allids holds the "official" (e.g. filename) name of the samples, if researchers need to adjust family or individual IDs, they should use the "--update-ids" PLINK feature (http://pngu.mgh.harvard.edu/~purcell/plink/dataman.shtml#updatefam).
Either choice (all related / all unrelated) is highly dependent on the samples each user has genotyped. We prefer letting users tailor their final files using innate PLINK solutions.
In any case, thanks for your suggestion!
Ezequiel |
<issue_start><issue_comment>Title: JSDoc Instead of JSON
username_0: **Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
Implement a parser that allows you to use JSDoc-like syntax instead of having to type json in comments.
**Describe alternatives you've considered**
None
<issue_comment>username_1: This is a good idea, we can go ahead with this 😁
<issue_comment>username_2: Hey
I'd like to work on this issue. Could you please provide some more details if possible, so I can better understand what exactly is required. @username_1
<issue_comment>username_1: Sure @username_2 , you can work on this
**Current Working**
Right now to provide a description to the API like (label, description, inputs, outputs) we are using a multiline comment above the API where we pass JSON
**Example**
```js
/*
{
"description":"Checks for token and gets the logged in user",
"inputs":{
"x-auth-token":"The JWT Token in header"
},
"label":"Public",
"outputs":{
"user":"The user object stored in database",
}
}
*/
router.get("/user", auth, (req, res) => {
User.findById(req.user.id)
.select("-password")
.then((user) => {
res.json(user);
});
});
```
Here the problem is that the user has to follow the JSON syntax strictly, because all our code does is `JSON.parse(textInComment)`.
So here are a few problems with this
1. The action won't be able to parse if the JSON syntax is not followed by users
2. There is a need to improve this because JSON is somewhat difficult to write
**Proposed Alternative**
We can implement a parser like [JSDoc](https://jsdoc.app/about-getting-started.html) which is used to provide code definitions for a cleaner code
Their syntax is somewhat like this
```js
/**
* Represents a book.
* @constructor
* @param {string} title - The title of the book.
* @param {string} author - The author of the book.
*/
function Book(title, author) {
}
```
Just that instead of functions we have to apply this for our API definition (which does not change anything in the code except how we parse the data)
<issue_comment>username_2: Okay thanks I'll look into it. Which file currently implements the json parser to add description? @username_1
<issue_comment>username_1: https://github.com/username_1/express-autodocs/blob/master/helpers/demestifyAPI.js
Here the following method gets the job done
```js
function getCommentedParams(index, file) {
let startIndex = index - 3; // will be the character before '*' character
let currentIndex = startIndex;
let params = '';
while (file.charAt(currentIndex) != '*') {
params = params + file.charAt(currentIndex);
currentIndex -= 1;
}
try {
params = JSON.parse([...params].reverse().join('').trim());
return params;
} catch (err) {
console.log(
'\x1b[1m',
'\x1b[31m',
`❌ Make sure the params passed in comments are in proper JSON Format.`,
'\x1b[0m'
);
return null;
}
}
```
This method starts just before the comment's closing `*` and back-traces until it hits the opening `*`, collecting the comment's content along the way.
<issue_comment>username_2: `params = JSON.parse([...params].reverse().join('').trim());`
Hey, so I didn't understand why exactly you are reversing the params? @username_1
<issue_comment>username_1: Ok, that is because the comment is above the API definition. Basically, we detect the comment relative to our API, and since it will always be above the API we have to back-trace it. (you will also notice we keep reducing the index)
Then once we have the entire content of the comment, we have to reverse it because we scanned it with a bottom-up approach.
<issue_comment>username_2: Okay, so I'm trying to test `function demestifyAPI(api, index, file, routeprefix = '')` to make sure it is correctly storing the data in params. Could you give me an example of the (api, index, file) parameters to pass so that I know it's being generated correctly? @username_1
<issue_comment>username_1: Ok wait I will get back to you in few minutes
<issue_comment>username_1: Ok @username_2 here are the params to create the test.
**api**
```js
app.post('/users', auth, function (req, res) {
res.send('User Created');
})
```
**index**
`584`
**file**
```js
const express = require('express');
const mongoose = require('mongoose');
const path = require('path');
const config = require('config');
var queries = require('./routes/api/queries');
const app = express();
// Body Parser Middleware
app.use(express.json());
/*
{
"description":"Creates a new user",
"label":"Private",
"inputs":{
"name":"The username by which he/she logs in",
"password": "A string password"
},
"outputs":{
"userid":"The id of the new user created",
"error":"Error message if any"
}
}
*/
app.post('/users', auth, function (req, res) {
res.send('User Created');
});
// Server Connection
const port = process.env.PORT || 5000;
app.listen(port, () => console.log(`Server Started on port ${port}`));
let handleDelete = (req, res) => res.send('Deleted');
```
<issue_comment>username_1: Also, I feel we should write this as a custom test
<issue_comment>username_2: Sure I can add this once I'm done with the JSDoc task.
<issue_comment>username_2: https://github.com/username_1/express-autodocs/pull/15
I have raised a PR for this issue. Do review it and let me know if any changes are required. @username_1
<issue_comment>username_1: @username_2 can you give an example of how you have defined the API in jsdoc?
<issue_comment>username_2: Yeah sure.
So basically
```js
/**
 * Represents a book.
 * @constructor
 * @param {string} title - The title of the book.
 * @param {string} author - The author of the book.
 */
```
would be converted to an object like
```js
{
  title: "The title of the book.",
  author: "The author of the book."
}
```
<issue_comment>username_1: wait let me try this
<issue_comment>username_2: Let me know if it does not work or if there are any issues @username_1
<issue_comment>username_1: @username_2 I tried to console log the data, but the output is showing {} and there is no data in the output

Let me know if I am doing something wrong
<issue_comment>username_1: ```js
/**
* Represents a book.
* @constructor
* @param {string} title - The title of the book.
* @param {string} author - The author of the book.
*/
app.post('/users', auth, function (req, res) {
res.send('User Created');
});
```
<issue_comment>username_2: The comments seem to be in the wrong format.
It should have a single *
something like this
```
/**
* Represents a book.
* @constructor
* @param {string} title - The title of the book.
* @param {string} author - The author of the book.
*/
```
<issue_comment>username_2: Instead your comments in the picture are
```
/**
* * Represents a book.
* * @constructor
* * @param {string} title - The title of the book.
* * @param {string} author - The author of the book.
**/
```
<issue_comment>username_1: I did try both
```
/**
* Represents a book.
* @constructor
* @param {string} title - The title of the book.
* @param {string} author - The author of the book.
*/
```
<issue_comment>username_1: @username_2 check this image

<issue_comment>username_1: Also, in the JSON version we pass nested objects for inputs and outputs, like
```js
"inputs":{
"name":"The username by which he/she logs in",
"password": "A string password"
},
"outputs":{
"userid":"The id of the new user created",
"error":"Error message if any"
}
```
Any idea how we can implement this too?
<issue_comment>username_2: What exactly are you logging and in which file to check? @username_1
<issue_comment>username_1: I am just logging the data variable in the getCommentedParams function you created, just before it is returned, to check its value
<issue_comment>username_2: Make sure you are passing the correct index.

it's working correctly in mine
<issue_comment>username_2: 
<issue_comment>username_1: The index is getting detected by the action right? so it should take the correct index anyways
<issue_comment>username_1: ```
✨ Initialized Express AutoDocs
🔎 Scanning for Base Level APIs...
{}
🆗 Scan Complete - Base Level APIs Scanned 1
🔎 Detecting Routes...
🆗 Scan Complete - Routes Detected 0
🔎 Scanning Routes For API Calls...
✨ Creating Docss ...
```
<issue_comment>username_2: Okay wait let me try checking what is the issue.
<issue_comment>username_1: Please do, is this the expected log ?
<issue_comment>username_2: Okay so there is an issue with the index. The index is not being passed properly. I logged the index and it's always 4 more than the desired index. If you see the first example, the index printed is 260, but it should actually be 256.

<issue_comment>username_2: Okay I got the issue. I'll raise a PR with the correction. @username_1
<issue_comment>username_1: Ok i will first merge it in the dev branch, then merge everything into master after we are done with this issue :)<issue_closed>
<issue_comment>username_2: Yeah sure. Let me know if there are any more problems. @username_1
<issue_comment>username_1: Hey @username_2 unfortunately it is still not working in my codebase, same issue
The output of item log and data log
```js
[
'**\n' +
' * Represents a book.\n' +
' * @constructor\n' +
' * @param {string} title - The title of the book.\n' +
' * @param {string} author - The author of the book.\n' +
' '
]
{}
```
<issue_comment>username_1: can you try this as your server.js
```js
const express = require('express');
const mongoose = require('mongoose');
const path = require('path');
const config = require('config');
var queries = require('./routes/api/queries');
const app = express();
// Body Parser Middleware
app.use(express.json());
/**
* Represents a book.
* @constructor
* @param {string} title - The title of the book.
* @param {string} author - The author of the book.
*/
app.get('/sayHello', (req, res) => {
res.send('Hello');
});
// Server Connection
const port = process.env.PORT || 5000;
app.listen(port, () => console.log(`Server Started on port ${port}`));
let handleDelete = (req, res) => res.send('Deleted');
```
and check if you are getting the expected output
<issue_comment>username_2: Can you confirm whether your function has this particular line: `file.charAt(currentIndex - 1) == '\r'`
```
function getCustomParams(index, file) {
// Backtraces for an immediate multi-line comment
let descriptionFound = 'unknown';
let currentIndex = index;
while (descriptionFound == 'unknown') {
if (currentIndex != 0) {
if (
file.charAt(currentIndex - 1) == ' ' ||
file.charAt(currentIndex - 1) == '\n' ||
file.charAt(currentIndex - 1) == '\r'
) {
currentIndex -= 1;
} else if (file.charAt(currentIndex - 1) == '/') {
descriptionFound = true;
} else {
descriptionFound = false;
}
}
}
if (descriptionFound) {
return getCommentedParams(currentIndex, file);
} else {
// If no description is present then return null;
return null;
}
}
```
<issue_comment>username_2: Yes it is working on mine @username_1

<issue_comment>username_1: 
@username_2 this is my entire function
<issue_comment>username_1: woah i am now really unsure why it doesn't work on my machine lol
<issue_comment>username_2: Can you send me your `getCommentedParams` function as well? @username_1
<issue_comment>username_1: ```js
function getCommentedParams(index, file) {
let startIndex = index - 3; // will be the character before '*' character
let currentIndex = startIndex;
let params = '';
let data = {};
while (file.charAt(currentIndex) != '/') {
params = params + file.charAt(currentIndex);
currentIndex -= 1;
}
try {
let item = [...params].reverse().join('').split('\r');
item.forEach(function (param) {
let temp = param.split('-');
if (temp.length == 2) {
temp[0] = temp[0].trim();
temp[0] = temp[0].substring(temp[0].lastIndexOf(' '));
data[temp[0].trim()] = String(temp[1].trim());
}
});
console.log(data);
return data;
} catch (err) {
console.log(
'\x1b[1m',
'\x1b[31m',
`❌ Make sure the params passed in comments are in proper JSDoc Format.`,
'\x1b[0m'
);
return null;
}
}
```
<issue_comment>username_2: So I logged item and data, it displays correctly on mine.

<issue_comment>username_1: did you pull from master?
<issue_comment>username_2: Yes
<issue_comment>username_2: So I copy pasted this function on mine. And it works the same. It's displaying correctly.
<issue_comment>username_1: wait, I'll just clone the entire project again :cry:
<issue_comment>username_2: Just try restarting vscode again, it should work I guess.
<issue_comment>username_2: So your item is getting logged correctly, only the data is not being logged. So put loggers after item, to check if the item is being properly passed. @username_1
<issue_comment>username_1: ```
✨ Initialized Express AutoDocs
🔎 Scanning for Base Level APIs...
[
'**\n' +
' * Represents a book.\n' +
' * @constructor\n' +
' * @param {string} title - The title of the book.\n' +
' * @param {string} author - The author of the book.\n' +
' '
]
{}
🆗 Scan Complete - Base Level APIs Scanned 1
🔎 Detecting Routes...
🆗 Scan Complete - Routes Detected 0
🔎 Scanning Routes For API Calls...
✨ Creating Docss ...
```
this is the log of items as well as data.
Can you try to clone the project again from the master branch and check? :(
<issue_comment>username_1: @username_2 my code is not running items.forEach, and I observed that the item log does not have commas, so I think there is some problem near the split
<issue_comment>username_1: The log of the temp variable
```js
[
'**\n * Represents a book.\n * @constructor\n * @param {string} title ',
' The title of the book.\n * @param {string} author ',
' The author of the book.\n '
]
```
<issue_comment>username_2: This is my entire demestifyAPI.js file
```
const vm = require('vm');
function demestifyAPI(api, index, file, routeprefix = '') {
let METHOD = api.split('.')[1].split('(')[0];
let trimmedCall = api.match(/\(.[\s\S]*/g)[0];
trimmedCall = trimmedCall.substring(1, trimmedCall.length - 1);
let CALL = trimmedCall.split(',')[0];
CALL = CALL.substring(1, CALL.length - 1);
let params = getCustomParams(index, file);
return {
method: METHOD,
callName: routeprefix + CALL,
params,
};
}
function getCustomParams(index, file) {
// Backtraces for an immediate multi-line comment
let descriptionFound = 'unknown';
let currentIndex = index;
while (descriptionFound == 'unknown') {
if (currentIndex != 0) {
if (
file.charAt(currentIndex - 1) == ' ' ||
file.charAt(currentIndex - 1) == '\n' ||
file.charAt(currentIndex - 1) == '\r'
) {
currentIndex -= 1;
} else if (file.charAt(currentIndex - 1) == '/') {
descriptionFound = true;
} else {
descriptionFound = false;
}
}
}
if (descriptionFound) {
return getCommentedParams(currentIndex, file);
} else {
// If no description is present then return null;
return null;
}
}
// function getCommentedParams(index, file) {
// let startIndex = index - 3; // will be the character before '*' character
// let currentIndex = startIndex;
// let params = '';
// while (file.charAt(currentIndex) != '*') {
// params = params + file.charAt(currentIndex);
// currentIndex -= 1;
// }
// try {
// params = JSON.parse([...params].reverse().join('').trim());
// console.log('Params:', params);
// return params;
// } catch (err) {
// console.log(
// '\x1b[1m',
// '\x1b[31m',
// `❌ Make sure the params passed in comments are in proper JSON Format.`,
// '\x1b[0m'
// );
// return null;
// }
// }
function getCommentedParams(index, file) {
let startIndex = index - 3; // will be the character before '*' character
let currentIndex = startIndex;
let params = '';
let data = {};
while (file.charAt(currentIndex) != '/') {
params = params + file.charAt(currentIndex);
currentIndex -= 1;
}
try {
let item = [...params].reverse().join('').split('\r');
console.log(item);
item.forEach(function (param) {
[Truncated]
temp[0] = temp[0].substring(temp[0].lastIndexOf(' '));
data[temp[0].trim()] = String(temp[1].trim());
}
});
console.log(data);
return data;
} catch (err) {
console.log(
'\x1b[1m',
'\x1b[31m',
`❌ Make sure the params passed in comments are in proper JSDoc Format.`,
'\x1b[0m'
);
return null;
}
}
module.exports = demestifyAPI;
```
<issue_comment>username_1: @username_2 , I replaced the entire file still the same output :cry:
<issue_comment>username_1: @username_2, I think we should take a break for some time and call it a day. I will check if there is some issue on my end and will let you know tomorrow :smile: Thank you
<issue_comment>username_2: So I cloned the repo and ran it again, it's working correctly. Displaying the data properly.

<issue_comment>username_2: Which OS are you using? It could be an issue with the `split('\r')` @username_1
<issue_comment>username_1: I am using ubuntu
<issue_comment>username_1: @username_2 Can it be an OS issue?
<issue_comment>username_2: I'm using Windows @username_1
Could you try experimenting with the split function?
Check this link out. Try changing the `'\r'` to `'\n'` and see if it works.
https://superuser.com/questions/374028/how-are-n-and-r-handled-differently-on-linux-and-windows
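For what it's worth, a line-ending-agnostic version (just a sketch, not what's in the repo) would sidestep the OS difference entirely:
```js
// Split on either CRLF (Windows checkouts) or bare LF (Linux/macOS).
let item = [...params].reverse().join('').split(/\r?\n/);
```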
<issue_comment>username_1: @username_2 It did work 😂
<issue_comment>username_1: Now what shall we do, keep it `'\r'` or `'\n'`?
<issue_comment>username_2: So I guess it was an OS issue 😂
I guess keep it `'\n'` only, because it's better if everything works on your PC.
We'll look into this issue in more detail in the next few days.
Sounds good? @username_1
<issue_comment>username_1: Sure @username_2
Thanks a lot
I guess we learnt a lesson here: we should always write tests before code,
and we should run tests on all OSes 😂
Don't worry, I'll push the changes to master :D
<issue_comment>username_2: I guess you can close this issue since it's resolved now. Let me know if you're thinking of some other features to add to your project. I'm happy to help in any way I can. @username_1
<issue_comment>username_1: @all-contributors please add @username_2 for code bug ideas maintenance test
<issue_comment>username_2: Sure I'll look into this tomorrow. Tired today. Could you make a new issue regarding this? This one has too many comments now 😂
<issue_comment>username_1: Yes why not, take rest no hurries :smile:<issue_closed>
<issue_comment>username_1: @username_2 checkout Issue #18 and comment there, so I can assign you to it :smile: |
<issue_start><issue_comment>Title: How to use this m-pesa package
username_0: First of all, thank you, Emmanuel, for this package and the other packages you have released to help other developers. But upon installing this package I found it hard to use, and I have not seen, even in your readme, how to use it. If you could help me with this, it would be much appreciated. Thanks in advance.
<issue_comment>username_1: You're welcome username_0,
yes, you're right, the package does not work well because I have not finished it; I got swamped with other projects and found myself delaying.
In the second week of January 2022 this package will be production ready. But if you're in a rush, I recommend a package by @alphaolomi https://github.com/openpesa/php-pesa
<issue_comment>username_0: Okay, thanks for the info and the recommendation. I am looking forward to seeing you complete this package. Have a good time.<issue_closed>
<issue_start><issue_comment>Title: Problems that forcing the site to HTTPS may cause
username_0: `Mixed Content: The page at 'https://gotyour.pw/' was loaded over HTTPS, but requested an insecure stylesheet 'http://fonts.googleapis.com/css?family=Source+Sans+Pro:300,400,300italic,400italic'. This request has been blocked; the content must be served over HTTPS.`
<issue_comment>username_1: Then just switch it to https
<issue_comment>username_0: Right, that's why I'm giving it sd+
<issue_comment>username_2: I guess my IQ is just too high; only I knew it could be changed to https
<issue_comment>username_0: Stop being so full of yourself and just fix it~~ Chrome even tells you exactly which line it's on XDD<issue_closed>
<issue_start><issue_comment>Title: Gosublime Margo build failed
username_0: Today I finally figured out that "Margo build failed" (Windows 8) is caused by the Chinese characters in my Windows user name, so GoSublime can't create temporary build files in "C:\Users\__\AppData\Temp". I solved my problem by changing it to an English word, but I hope this problem can be fixed.
<issue_comment>username_1: Yes, I have the same problem, and changing the Chinese user folder name to English is complicated. I hope this problem can be fixed.<issue_closed>
<issue_start><issue_comment>Title: Replace individual get, post, etc. defns with macro
username_0: @username_1 not sure if this is just stylistic preference =)
<issue_comment>username_1: Hi @username_0
I think I'd rather stick with regular functions, since it makes jumping to definition (`M-.` in emacs) work much better, we've discussed this before in #223 (for some history). |
<issue_start><issue_comment>Title: update git ignore file and replace report template
username_0: The refactor of master -> manager neglected to update the gitignore file, which prevented the report template from being found.
This PR updates the gitignore file and replaces the report.html file
<issue_comment>username_0: fixed in #901 |
<issue_start><issue_comment>Title: Add background to VaDropdown
username_0: For now, VaDropdown is a simple component which by default has no background.
As you can see, VaDropdown is used in VaSelect and styled with CSS: https://github.com/epicmaxco/vuestic-ui/pull/703
I would like to propose a `background` prop for VaDropdown. This would allow us to create dropdowns with already-defined background styles. By default, we can make the background transparent.
So the `background` prop should be a `String` (a color) or a `Boolean` (which uses our default background color).
For example, right now in Vuestic Admin I cannot simply use VaDropdown; I have to provide its background via CSS.
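Something like this, purely as a hypothetical shape (not existing Vuestic API):
```js
props: {
  background: {
    type: [String, Boolean], // a CSS color string, or `true` for the theme's default background
    default: false,          // false keeps the current transparent behavior
  },
}
```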
<issue_comment>username_0: There is `VaDropdownContent` component for this feature. Need docs. |
<issue_start><issue_comment>Title: Is pg-aws_rds_iam used by anyone in production?
username_0: We're looking at using this, but curious how many others use it in a production-like setting.
We see ~ 16k downloads, how much is known about where this is being used?
Do any of the maintainers specifically use it for your own work?
<issue_comment>username_1: Hi! Yes, I'm using this at [work](https://www.zencargo.com/). We've had it running in two Rails applications in our pre-production environment and one in production for about a month now, and are promoting it to production for the other app next week.
We haven't had any issues with the gem itself, although we did run into a bug on AWS's end - if your database instance is too small for your workload, then IAM authentication can fail (we saw this as 503 responses from the token verification service in the PostgreSQL error logs).
AWS said they're aware of the issue and a fix is in progress, but in the meantime it's worth double-checking your instance type is large enough (we were using a `t3.micro` which was way too small for our load, the freeable memory was next to nothing).
Since we scaled up the database instance to something more sensible, we've had no issues.
I'm not sure if anyone else is running it in production; one person reached out to me via email a while back because they were having difficulty getting the default token generator to pick up their credentials, but otherwise this is the first issue opened on the repo... so either it's working perfectly or no-one is using it 😅
If you do give it a try, let me know how you get on!<issue_closed>
<issue_comment>username_1: Just to confirm, we did promote this gem into production for our other Rails app in August, and haven't had any issues at all 🎉
<issue_start><issue_comment>Title: Clarify definition of the median
username_0: Clarify definition of the median and turn
```
julia> median(DiscreteUniform(1, 4))
2.5
```
into a bug. Came up in https://github.com/JuliaStats/Distributions.jl/pull/1470#discussion_r783502531
<issue_comment>username_1: Maybe fix the bug as well?
<issue_comment>username_0: Looks like also quantile is wrong there. Let's merge here and continue with #1474.
<issue_comment>username_1: Ah, I didn't realize there was more to fix than the median, and that there's a separate issue.
<issue_start><issue_comment>Title: 🛑 FoodB2B is down
username_0: In [`d4b4cbe`](https://github.com/username_0/status.username_0.kr/commit/d4b4cbe1b00e2e7e13c3db2181b2b7f2918a4f4d
), FoodB2B (https://www.foodb2b.kr) was **down**:
- HTTP code: 503
- Response time: 852 ms
<issue_comment>username_0: **Resolved:** FoodB2B is back up in [`78a2063`](https://github.com/username_0/status.username_0.kr/commit/78a20635020061fb84948b59621a0f77721a35dd
).<issue_closed> |
<issue_start><issue_comment>Title: Feature website
username_0: I used Salvattore on my website project, looks great :)
Here's a screenshot of my project.
<issue_start><issue_comment>Title: add missing data-garden- attributes to menu item icon elements
username_0: ## Description
Add missing `data-garden-[id/version]` attributes to `StyledItemIcon` components.
## Detail
Closes #1128
## Checklist
- [ ] :ok_hand: ~design updates are Garden Designer approved (add the
designer as a reviewer)~
- [x] :globe_with_meridians: demo is up-to-date (`yarn start`)
- [ ] :arrow_left: ~renders as expected with reversed (RTL) direction~
- [ ] :metal: ~renders as expected with Bedrock CSS (`?bedrock`)~
- [ ] :wheelchair: ~analyzed via [axe](https://www.deque.com/axe/) and evaluated using VoiceOver~
- [ ] :guardsman: ~includes new unit tests~
- [ ] :memo: ~tested in Chrome, Firefox, Safari, Edge, and IE11~ |
<issue_start><issue_comment>Title: Unknown column 'Tagged.fk_table' in 'on clause'
username_0: I'm trying to accomplish a search on several fields, including tags.
When trying to use matching or innerJoinWith with Tags, I'm getting that error. Following the query generated by CakePHP, I get this (fragment of the query):
```sql
...
INNER JOIN tags_tags Tags ON Tagged.fk_table = 'projects'
INNER JOIN tags_tagged Tagged ON (Projects.id = (Tagged.fk_id) AND Tags.id = (Tagged.tag_id))
...
```
Resulting in a crash with the error:
`Unknown column 'Tagged.fk_table' in 'on clause'`
Thanks!
<issue_comment>username_1: What does your behavior configuration looks like?
<issue_comment>username_0: Same as default one, haven't changed it. After following the instructions and include the behavior in the models, tag saving is working great, just found this bug when using Tags with matching and innerJoinWith like this:
$projects
->where(['LOWER(Projects.name) LIKE' => '%'.$search.'%'])
->orWhere(['LOWER(Projects.objectives) LIKE' => '%'.$search.'%'])
->orWhere(['LOWER(Users.full_name) LIKE' => '%'.$search.'%'])
->orWhere(['LOWER(Tags.label) LIKE' => '%'.$search.'%'])
->orWhere(['LOWER(Products.name) LIKE' => '%'.$search.'%'])
->matching(['Tags'])
->group(['Projects.id']);
If I include the LIKE condition in the matching/innerJoinWith query as a closure, the result is the same; I also tried extracting it as a custom finder. I've spent a good 4 hours going around this, and the only solution I found is to customize the behavior configuration just for that query. But I was wondering if there's some other workaround that would avoid doing that (because I personally think that's a pretty ugly solution).
Thanks! |
<issue_start><issue_comment>Title: Passing parameters via package.json
username_0: Hi, I'm trying to pass the "module" option as a parameter in package.json but I can't find the proper syntax.
How would you do that?
Thanks<issue_closed>
<issue_comment>username_0: Ok, I've finally managed to do it like this:
```
{
"name": "Blah",
"version": "1.0.0",
"devDependencies": {
"browserify-ng-html2js": "^1.1.0"
},
"browserify": {
"transform": [
[ "browserify-ng-html2js", { "module": "ngPartials" } ]
]
}
}
```
So hard to find the correct syntax in the Browserify documentation... awful.
Anyway, hope it helps someone
<issue_comment>username_1: Aha, I guess we should add this to the readme
<issue_comment>username_2: +1 |
<issue_start><issue_comment>username_0: 👷 Deploy Preview for *cranky-clarke-9d5b86* processing.
🔨 Explore the source changes: aba5d412467563b39aa6c5a414173260225a2331
🔍 Inspect the deploy log: [https://app.netlify.com/sites/cranky-clarke-9d5b86/deploys/620ecc18707e0900074958ea](https://app.netlify.com/sites/cranky-clarke-9d5b86/deploys/620ecc18707e0900074958ea) |
<issue_start><issue_comment>Title: create legacy init script in dependence of operatingsystem
username_0: Improve Debian-based distribution support.
fixes #35
<issue_comment>username_1: Thanks! One minor change if you would.
<issue_comment>username_2: doesn't fix #35 unless the template also uses bin/bash instead of bin/sh, or replaces '==' with '=' to avoid the 'unexpected operator' error on e.g. Ubuntu 14.04
http://stackoverflow.com/questions/1089813/bash-dash-and-string-comparison
<issue_comment>username_1: @username_2 does @username_0 correct the things you noticed?
<issue_comment>username_2: no sorry, but almost: there are still two lines containing '==' in kibana.legacy.service.maincontent.erb which break POSIX compliance; both lines are `if [ "$STATUS" == "$PID" ]`
<issue_comment>username_0: @username_2, @username_1 Fixed now! Why am I not using TDD :( A test now ensures that there are no `==` in the init script. I have also fixed a lint warning and rebased the whole request onto master.
But why does this script run on my Debian system, with both bash and dash?
<issue_comment>username_2: @username_0, @username_1 works like a charm on Ubuntu 14.04 now
<issue_comment>username_1: Thanks guys! |
<issue_start><issue_comment>Title: feat(v5): upgrade to styled-system v5, fix #186
username_0:
<issue_comment>username_1: Oh! You’ll need to update the functions used here too. See the migration guide https://styled-system.com/guides/migrating#style-categories
<issue_comment>username_0: updated to use the new grouped layout and updated the snapshots as well, but `@media screen and (min-width:40em)` is missing from the new snapshots
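For reference, the v5 grouping looks roughly like this (`layout` and `space` are real styled-system v5 exports; the component itself is just an illustration):
```js
import styled from 'styled-components'
import { layout, space } from 'styled-system'

// In v5, `width`, `height`, etc. move from standalone functions
// into the grouped `layout` category.
const Box = styled.div`
  ${layout}
  ${space}
`
```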
<issue_comment>username_0: the old props like `width` won't work anymore?
<issue_comment>username_1: Still not sure what's going on with the snapshots, but I can take this from here. Thanks for working on this! |
<issue_start><issue_comment>Title: CLOUDSTACK-8394: Skip test cases through setUp() instead of setUpClass()
username_0: When tests are skipped through setUpClass, the statement `raise unittest.SkipTest()` is used. This is seen as an exception in the logs. To avoid this, we can instead set a class-level variable in setUpClass, read its value in setUp(), and skip the test case there.
All test cases were run and tested.
<issue_start><issue_comment>Title: New package: ksnip.ksnip.Continuous version 1.10.0
username_0: - [X] Have you signed the [Contributor License Agreement](https://cla.opensource.microsoft.com/microsoft/winget-pkgs)?
- [ ] Have you checked that there aren't other open [pull requests](https://github.com/microsoft/winget-pkgs/pulls) for the same manifest update/change?
- [X] Have you [validated](https://github.com/denelon/winget-pkgs/blob/master/AUTHORING_MANIFESTS.md#validation) your manifest locally with `winget validate --manifest <path>`?
- [X] Have you tested your manifest locally with `winget install --manifest <path>`?
- [ ] Does your manifest conform to the [1.0 schema](https://github.com/microsoft/winget-cli/blob/master/doc/ManifestSpecv1.0.md)?
###### Microsoft Reviewers: [Open in CodeFlow](https://portal.fabricbot.ms/api/codeflow?pullrequest=https://github.com/microsoft/winget-pkgs/pull/38502) |
<issue_start><issue_comment>Title: Listen for an unload even from the viewer
username_0: The viewer will send an "unload" event to notify the doc that it is about to be unloaded.
This should cause a visibility state change so that analytics can flush itself.
Eventually this should also return saved state that can be used to restore the doc as described in:
https://github.com/ampproject/amphtml/issues/1677
The response can be `{'savedState': <savedStateBlob>}`.
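A rough sketch of the intended flow (the handler name and helpers here are hypothetical; only the message name and response shape come from this issue):
```js
// Viewer tells the doc it is about to be unloaded.
viewer.onMessage('unload', () => {
  setVisibilityState('inactive');      // triggers the analytics flush
  return { savedState: saveState() };  // blob format intentionally unspecified
});
```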
<issue_comment>username_0: The message name we are using is "willUnload"
<issue_comment>username_1: @username_0 Please advise on priority.
<issue_comment>username_0: Higher than https://github.com/ampproject/amphtml/issues/1677
Discussed and agreed with @malteubl
I think it would be very nice to have for overall stability.
<issue_comment>username_0: I mean @cramforce
<issue_comment>username_0: Name was changed to "willLikelyUnload" as there is a small chance the unload won't happen.
<issue_comment>username_1: @username_0 Is this still needed. I'm going to close assuming it's not, but please let us know if that's not true.<issue_closed> |
<issue_start><issue_comment>Title: Updated copy and links on the landing page
username_0: #### Additions
* New content from Megan
* Added link to the Monthly Payment Worksheet
#### Screenshots


#### Review
@username_1 or @username_3 for content
@username_2 or @virginiacc or @amymok to make sure I didn't break anything
<issue_comment>username_1: :+1: Looks good to me!!
<issue_comment>username_0: @username_2 or @virginiacc are there browser tests I need to update or add to reflect the new links?
<issue_comment>username_2: We don't have browser tests for the "Key tools" links yet. We should just create a new story for that - it's not a huge priority but we could do the 404 tests on those.
<issue_comment>username_3: :+1:
<issue_comment>username_4: Should we add an icon for PDF links? It would be a nice cue for users who click on those "Guide to Closing Forms" and "Monthly Payment Worksheet" links.

<issue_comment>username_2: Yes to PDF icons! We should also make sure the icons are accessible
<issue_comment>username_0: Agreed :+1: We even have a more PDF-like icon that has the letters "PDF" in the little document instead of the lines of text. I can add PDF icons in when I update the sidebar of the landing page today or tomorrow.
<issue_comment>username_4: :+1: Sounds good!
<issue_comment>username_5: I thought we were moving away from the PDF icon because we felt it unreadable at typical inline-with-link sizes?
<issue_comment>username_0: I thought we were going to replace it in the minicon font with a little document without letters. Do you have that design manual issue number handy?
<issue_comment>username_5: I couldn't locate it in Design Manual issues. Not sure that the discussion happened there.
<issue_comment>username_0: Cool. We're going with the download icon. |