is this a K8s plugin?
Yes
ya - so you want to use resource manager?
ya, we wanted to add that to `plugin_manager.go`
I want to write a k8s plugin and use resource manager or rely on builtin stuff
so that it can be optional for some CRDs
I’m more leaning towards using flyte machinery, but want to understand the trade-off better
let me see if I started a PR on this some time ago
yup - I think moving it to K8s plugin manager is a good idea, as I have found a few K8s CRDs that do not do any quotas but we want to enforce it
Gleb Kanterov I think we should add resource manager to PluginManager, just that it should be optional
Haytham Abuelfutuh ^ any suggestions here?
Even for ServiceCalls we are moving to a centralized way of handling resource allocations - <https://github.com/lyft/flyteplugins/pull/146>
Gleb Kanterov so you can use <https://github.com/lyft/flyteplugins/blob/master/go/tasks/pluginmachinery/k8s/plugin.go#L21|PluginEntry> to indicate that a plugin wants to use Resource Manager
I see, makes sense
Need to dig deep into the code
It answers my question of which approach to take
let me know, put a PR and we can collaborate on it
I will be looking into setting up Cloud Config operator, and then creating a plugin for bigquery/dataflow jobs
What I really want is to use resource manager :slightly_smiling_face: Because I can create Dataflow/BigQuery jobs without it now
I don’t know how airflow does it. It can be a big benefit of using Flyte
Airflow has resource pooling as well
it's not the same as Flyte's Resource Manager, but it has a pooling system
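The shared idea in both systems (a bounded pool of allocation tokens that tasks must hold while running) can be sketched in a few lines of Python. This is a toy stand-in, not Flyte's Resource Manager API or Airflow's pool implementation:

```python
import threading

class TokenPool:
    """Toy resource pool: at most `size` tasks may hold a token
    (i.e. run) at once; the rest are denied and must retry later."""

    def __init__(self, size: int):
        self._sem = threading.BoundedSemaphore(size)

    def try_acquire(self) -> bool:
        # Non-blocking: mimics "allocation denied, try again next round"
        return self._sem.acquire(blocking=False)

    def release(self) -> None:
        self._sem.release()

pool = TokenPool(size=2)
grants = [pool.try_acquire() for _ in range(3)]
# the first two acquisitions succeed, the third is denied
# until one of the holders calls release()
```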
Awesome, thanks :+1:
Hey guys, sorry was dropping off the kids to school... catching up now
A couple of notes:
1. You can specify object count quotas on custom resources using k8s native Resource Quotas: <https://kubernetes.io/docs/concepts/policy/resource-quotas/#object-count-quota> ... so this would probably be my first choice to give you, Gleb Kanterov, a naive limit/backoff semantics
2. I second Ketan that providing Resource Manager to K8s plugin is the way to go... There is a lot of smartness that can be built into that (priorities, queuing... etc.) that's just not there if you only use ResourceQuotas
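Per the k8s docs linked in point 1, an object-count quota on a custom resource uses the `count/<resource>.<group>` syntax. A sketch for the FlinkCluster CRD that comes up later in this thread (the namespace and limit here are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: flink-cluster-count
  namespace: flytesnacks-development
spec:
  hard:
    # at most 5 FlinkCluster objects may exist in this namespace
    count/flinkclusters.flinkoperator.k8s.io: "5"
```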
:+1: good to hear that it’s fine to use resource manager in a k8s plugin
Have you tried <https://github.com/kubernetes-sigs/kubefed>?
We have not, we looked at it, it looked a little young
But we would love to see how it works
We can work with you
My hope is it will mature this year
So we are going to federate only Flink CRD
So if you federate then it should just work right?
Yes, we discussed various options:
1. just have one cluster, create many node pools, and figure out how two teams operating this cluster can cooperate
2. do something in plugin code to use a different cluster
3. do routing (but then cooperation between clusters becomes harder)
4. do federation
My recommendation would be 1, because you can easily extend to 4
So not one cluster - have many - which all look the same
3 is hard because we need to replicate the whole setup twice and keep it in sync
Maybe let’s chat about this and I think Anmol Khurana will talk about the lyft setup
and a good thing about (3) is having a separate failure domain for projects using Flink, because we see a high risk in Flink breaking something, because workload is much higher than anything else
right, I think that’s the main point here … is that we might have tasks that have different requirements
Ya but a workflow may be composed of more than just flink, right? I also want to think about why we feel separation of clusters is necessary - again just questions
right multiple flink tasks may be a possibility
So it’s the fear of Flink killing apiserver :slightly_smiling_face:
Ohh not python or big query or ml
Haha - I don’t think flink will kill the api server
We should definitely enable crd quotas
I think flink will affect the network
yes!
And so nodepools is a good idea
I was about to say that the main thing to split is resource management and allocation
the workloads that we might face might affect the other task executions
Why don’t we have a chat next week? I can share my views and how we ran lots of spark jobs at lyft
We ran 100k jobs a month
Awesome! That sounds good to me!
Should we set up a calendar event?
It seems we have many ways out if shit hits the fan, that is good
Yup
Please schedule
Ya it’s going to be fun, also want to know the size of the expected jobs
I think I’m making some progress with Anand Swaminathan’s pointer <https://flyte-org.slack.com/archives/CQWCBFWAU/p1613761276031300?thread_ts=1613760807.030100&cid=CQWCBFWAU>
I’m trying to reuse the existing `PluginManager` and pass a different `KubeClient`
Filipe Regadas are you setting it up to run on a separate cluster? If so can you make that optional please?
However, I’m getting
```[3/3] currentAttempt done. Last Error: SYSTEM::resource not found, name [flytesnacks-development/to6zcwcsle-flink-task-3]. reason: <http://flinkclusters.flinkoperator.k8s.io|flinkclusters.flinkoperator.k8s.io> "to6zcwcsle-flink-task-3" not found```
My guess is that I’m not properly setting up the client / cache
yeah it’s optional
or that is the idea
I’m basically hitting this case here <https://github.com/flyteorg/flytepropeller/blob/cdd6fa250981b5ae1481f54794e028dc7b1cff23/pkg/controller/nodes/task/k8s/plugin_manager.go#L232-L238|https://github.com/flyteorg/flytepropeller/blob/cdd6fa250981b5ae1481f54794e028dc7b1cff23/pkg/controller/nodes/task/k8s/plugin_manager.go#L232[…]L238>
this is how I’m creating the client <https://gist.github.com/regadas/ed5f805daac733f445571b02521366dc>
Filipe Regadas you need to do a few things
1. The CRD should be available/registered on K8s
2. The plugin should be init’ed with the right object <https://github.com/flyteorg/flyteplugins/blob/master/go/tasks/plugins/k8s/spark/spark.go#L426>
right… if I don’t use the remote config it works fine
<https://gist.github.com/regadas/34e2aabfddc4fd996ff3f10c8c43a9db>
(I should work towards just opening up what we have lol)
> The CRD should be available/registered on K8s
yea, the separate cluster has them and the operator is running there as well
yes if you just create a PR, we could iterate on it much faster
hard to help this way
indeed :sweat_smile:
I actually don’t remember Anand Swaminathan’s work on the remote k8s client
I think it will be something dumb that I missed
ya i will have to really look at your code and see how you are setting up the informer
Did you register the CRD on the Flyte cluster?
I’m not sure you did because I haven’t seen a PR :)
hm we are still using our own flyte deployment for the time being :slightly_smiling_face:
We already do something similar in k8s_array plugin
I see, thanks for the pointer. Filipe Regadas <https://github.com/flyteorg/flyteplugins/blob/master/go/tasks/plugins/array/k8s/executor.go#L185>
FYI, there are a few bugs - not related to this topic - in k8s_array that are being fixed (like log links etc). You might also face the same
do I understand correctly that there is one config per cluster, so there is no authz?
^ In this implementation. Correct.
It’s similar to how Admin creates objects in Flytepropeller.
You would still need to get this setup <https://github.com/flyteorg/flyteplugins/blob/e84585e5ad4522854625b21d84f18c77b520920b/go/tasks/plugins/array/k8s/config.go#L55>
Filipe Regadas let’s talk here - cc Haytham Abuelfutuh
Thank you for opening up the code <https://github.com/spotify/flyte-flink-plugin/blob/regadas/remote_cluster/pkg/flink/executor.go>
can you please paste the error you see?
AI: Move plugin_manager to pluginmachinery
with this I’m getting
```[3/3] currentAttempt done. Last Error: SYSTEM::resource not found, name [flytesnacks-development/to6zcwcsle-flink-task-3]. reason: <http://flinkclusters.flinkoperator.k8s.io|flinkclusters.flinkoperator.k8s.io> "to6zcwcsle-flink-task-3" not found```
I can see the resources being spawned in the remote cluster
ya it’s the wrong kubeclient - somehow
yeah that’s my assumption that I’m missing something there
I'm suspecting the resource monitor part... checking
during `CheckResourcePhase` it fails <https://github.com/flyteorg/flytepropeller/blob/cdd6fa250981b5ae1481f54794e028dc7b1cff23/pkg/controller/nodes/task/k8s/plugin_manager.go#L232-L238|https://github.com/flyteorg/flytepropeller/blob/cdd6fa250981b5ae1481f54794e028dc7b1cff23/pkg/controller/nodes/task/k8s/plugin_manager.go#L232[…]L238>
<https://github.com/flyteorg/flytepropeller/blob/ff2499a597a9df4b67181543d5251a7578469f73/pkg/controller/nodes/task/k8s/plugin_manager.go#L398-L399|https://github.com/flyteorg/flytepropeller/blob/ff2499a597a9df4b67181543d5251a7578469f73/pkg/controller/nodes/task/k8s/plugin_manager.go#L398[…]L399>
Filipe Regadas so you are running with the remote cluster right? What happens when you run with the local cluster?
with the local cluster using those changes runs fine … so I’m suspecting that the client is not properly set up
Do you have the finalizer enabled?
i guess you said the object exists when the failure happens, or just that you saw the resources?
you will have to enable <https://github.com/flyteorg/flyteplugins/blob/master/go/tasks/pluginmachinery/flytek8s/config/config.go#L53> = True
no I don’t have that
Filipe Regadas i sent you a DM
Hey everyone!
welcome Dominik!
Thanks for setting this up David Espejo (he/him), I think creating an initial issue or discussion might be a good place to have the conversation in the open.
there you go: <https://github.com/flyteorg/flyte/discussions/3593>
Rahul Mehta training operator is already integrated
Ah nice - I thought that was only the TF/torch operator, but saw that this includes xgboost (which I guess in turn can support lightgbm, which is what we care about)
We're currently in the process of figuring out if it's easier to distribute lgbm w/ dask or the KF operator, jury's still out
Ohh interesting it uses training operator
But you are right others are not there yet
So, during the debugging session, once we reduce the backoff setting to `1s`, all containers started to go through.
We saw a lot of "resource quota exceeded" in the logs.
Our guess is that propeller was requesting all the pods at once and the sum of the pods resources was higher than the resource quota. More investigation is needed. An audit of the propeller config would be helpful as there might be a few default settings that are not in the config.
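The backoff behavior hypothesized above can be illustrated with a generic exponential-backoff schedule. This is a sketch of the concept only; the function and its parameters are illustrative, not propeller's actual config keys:

```python
def backoff_schedule(base_seconds: float, max_seconds: float, attempts: int):
    """Exponential backoff: the wait doubles on each retry,
    capped at max_seconds."""
    waits = []
    wait = base_seconds
    for _ in range(attempts):
        waits.append(min(wait, max_seconds))
        wait *= 2
    return waits

# With a high cap, repeated "resource quota exceeded" errors can park a
# pod-creation retry for a long time; a low cap (like the 1s used in the
# debugging session) makes retries near-immediate.
backoff_schedule(1.0, 60.0, 8)
```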
it started to go through, but it’s still very slow - ~4h for 3000 wfs
The resource quota is limiting too much?
8.3h would be a sequential execution, so with max parallelism = 30, I would expect it to go faster
if you check how much the quota is and how much we are requesting, 30 should fit
<https://ghe.spotify.net/datainfra/flyte-load/blob/c64ddbd3f4eed6bdf22afe4f82d4e9e5cd824372/flyte_load/dynamicLP.py#L8-L9>
Babis Kiosidis
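A back-of-the-envelope check on the numbers in this exchange (a sketch, not a measurement): 8.3h sequential for 3000 workflows implies ~10s each, so the ideal wall time at parallelism 30 is well under the observed ~4h.

```python
# Numbers from the thread: 3000 workflows, ~8.3h if run sequentially,
# max parallelism configured to 30, observed wall time ~4h.
total_wfs = 3000
sequential_hours = 8.3
max_parallelism = 30

per_wf_seconds = sequential_hours * 3600 / total_wfs  # ~10s per workflow
ideal_hours = sequential_hours / max_parallelism      # ideal wall time, ~17 min

observed_hours = 4.0
slowdown = observed_hours / ideal_hours               # ~14x slower than ideal
```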
Hi Ketan Umare!
First off, thanks for the great support that you're providing us with! As you might have guessed from our recent posts, we've successfully run our first small scale Flyte workflows and are currently trying to scale out to our bigger usecases. This is where we've been running into some performance issues, where we are not quite clear whether it's a structural/configuration problem on our side, or whether we've hit a limitation of Flyte.
Roughly speaking, our usecase would be to map a (sub-)workflow with around 15-20 interconnected tasks onto a list of inputs with variable length. So far, we've used a dynamic task to kick off the series of subworkflows, but that does not seem to scale well for a large number of inputs/subworkflows (say 10,000 - resulting in ~200,000 individual tasks).
From what I've understood so far, this is a limitation in Flyte, as even though the dynamic workflow definition is saved to s3, the current state is saved in `etcd`, which will eventually reach the 1.5Mb limit. Also, there is only one Flyte propeller worker designated to the workflow, which runs into performance issues when trying to assess the state of many concurrent tasks. Did I understand this correctly?
We're now trying to restructure our architecture to also be able to scale out to large workloads. The following solutions are what we've come up with, do you have a gut feeling what would be best trying, or maybe even a different approach?:
1. We could use `map_task`s to run the different steps (say a footprint detection task and a vectorization task) sequentially. However, some of our tasks have multiple inputs (e.g. a task might depend on the output of footprint detection and vectorization) and `map_task`s only support one input at the moment.
2. Write our own "scheduling" logic on top of Flyte, which then in turn will trigger workflow executions. This would take advantage of multiple propeller workers as well as smaller workflow definitions/workflow states. However we're afraid of running into limitations on the Flyte admin side as we would constantly need to query the state of our workflows?
cc Haytham Abuelfutuh
is this `10.000 - resulting in ~200.000 individual tasks` 10 -> 200 or 10k to 200k?
Bernhard Stadlbauer / Maarten de Jong / Klaus Azesberger would you be open to having a chat about the usecase and help us dive deep and help you guys? We can do like a long whiteboarding session
that's 10k number of inputs (map tiles) where each input (tile) needs to be processed running a workflow of roughly 20 tasks (i think that's still an estimate on the lower end)
preferably we'd like to aggregate these workflows as subworkflows of bigger workflows but from today's PoV we probably cannot do that unless we can somehow work around etcd-limit and probably the grpc message size limit (which i don't understand yet tbh)
Sorry, I mixed up a comma and a dot, it should be 10k/200k. I also fixed the separator in the original text for future readers.
We would love to meet if that is possible, we would have time for example next week, Monday to Wednesday anytime in our evening/your morning :slightly_smiling_face:
Klaus Azesberger I think I have a few ideas of how to do this
So as a quick thing can you use launchplans instead of subworkflows?
This automatically scales out one workflow crd to many and thus you get 2mb per launchplan
So you can create a nested structure that can farm out launchplans
Also supporting map tasks with multiple inputs is possible and already available in the backend, question is how to represent it in flytekit, we can help with this, seems like a quick win
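The nested fan-out suggestion can be sized with quick arithmetic: instead of one CRD tracking all 10,000 sub-executions, a two-level tree of launchplans keeps every individual CRD small. The numbers below are illustrative, based on the use case described above:

```python
import math

# From the use case: ~10,000 subworkflows of ~20 tasks each.
total_subworkflows = 10_000

# Flat layout: a single parent workflow CRD tracks the state of every
# child, which is what pushes the CRD toward the etcd object-size limit.
flat_parent_entries = total_subworkflows

# Two-level launchplan tree: the parent launches ~sqrt(N) mid-level
# launchplans, each of which launches ~sqrt(N) leaf executions. Each
# launchplan execution is its own CRD, so every CRD only tracks ~sqrt(N)
# children instead of all N.
fanout = math.isqrt(total_subworkflows)
parent_entries = fanout
mid_entries = fanout
```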
Ketan Umare I think we’ll want to do something similar with launchplans. Are there any examples you could share? Thanks
Also would love map_tasks with multiple inputs
Cc Yee do we know what blocks multi-input map tasks. Need for tuples?
We were thinking we will stick with the pythonic way of handling these and create a partial construct…
so you can say something like:
```map_task(partial_task(my_task, input1=static_input1, input2=static_input2), ...)```
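The proposed construct mirrors plain `functools.partial`: bind the static inputs once, then map over the remaining one. A stdlib-only sketch of the idea (`my_task` and the input names are illustrative; this is not the final flytekit API):

```python
from functools import partial

def my_task(input1: int, input2: int, x: int) -> int:
    # stand-in task: two "static" inputs plus the one we map over
    return input1 * x + input2

static_input1, static_input2 = 10, 3

# bind the common inputs once, like the proposed partial_task(...)
bound = partial(my_task, input1=static_input1, input2=static_input2)

# ...then "map" only over the varying input, like map_task(...)
results = [bound(x=x) for x in [1, 2, 3]]  # [13, 23, 33]
```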
Haytham Abuelfutuh but this does not cover an array of tuples
Bernhard Stadlbauer the problem with performance is usually not the propeller worker, but the throttling that happens, because of downstream system like K8s.
we support data classes, right? so you can choose to take a dataclass as a single input, and build an array of them as the input to the map_task… but what I’ve seen being asked is how to fill in some common fields (config,… etc.) and only “map” on one of the inputs… for that the partial syntax looks more ergonomic IMHO.
yes i agree
I think the missing part is that dataclasses do not support our other Flyte types today - like FlyteFile, FlyteDirectory, FlyteSchema & Enum. But I think we can start supporting it. cc Kevin Su, what do you think?
I think we should add it, many people want to use complex data type in dataclass. Let me do it.
Ya
Let’s create an issue
Are you around now?
yes
Ketan Umare Complex datatype support is something we’ve come across as well. I hacked together a generic “MapTransformer” to support arbitrary data classes that contain FlyteFile fields
The usage is as follows:
```@dataclass
class Foobar:
    id: str
    myfile: FlyteFile
    x: int

# Create a new transformer class for the
# specific data type you want to transform
class FoobarTransformer(MapTransformer[Foobar]):
    def __init__(self):
        super().__init__(name="foobar-transform", t=Foobar)

# Register transformer with Flyte type engine
TypeEngine.register(FoobarTransformer())```
nice Nicholas LoFaso / we decided to prioritize this in the Dataclass and we will work on this soon. Kevin Su is aware. Can you chime in on the issue - <https://github.com/flyteorg/flyte/issues/1521>
will do
Hi! Question. How did we decide to go about `"Will not fast follow, Reason: Wf terminated? false, Version matched? true",`
we see it again on the WF which seems stuck
(just a heavy execution task … not a load test)
I don’t think that is a bug, it will happen at times and is expected
So what do you mean by stuck, the other day it was stuck because the cluster was out of resources and there seems to be a bug or issue in the way resource quotas are administered
The expected execution time of a task is ~1h, but it’s on 3h now
Is it running?
it’s in a running state
I mean the pod
So what I see is… (sorry for a long discussion again)
while the pod is not there …
kubectl -n ubi-pipelines-production get pods | grep 6s
returns nothing …
checking the setup…
I see all the other pods but not that one …
probably the setup…
was me misunderstanding the setup. Please forget my question. Sorry.
are we good?
Anastasia Khlebnikova / Julien Bisconti / Bernhard Stadlbauer / Jeev B please go through <https://docs.flyte.org/en/latest/deployment/cluster_config/performance.html#deployment-cluster-config-performance|this> doc and the <https://docs.flyte.org/en/latest/concepts/execution_timeline.html#divedeep-execution-timeline|accompanying> doc. Let me know if it makes sense / helps
Pradithya Aria Pura you should also optimize the configuration when and if you see any problems
Ketan Umare This doc is really good! Thank you for taking the time to put this together, it really helps.
My only comment is that the round latency is more critical than it seems; the definition of it is in the "Signs of slowdown" section, while the concept is explained in "Timeline of a workflow execution" where it is not mentioned explicitly
My 5 cents: it would be cool to have default values there, plus the reasoning behind increasing or decreasing a value based on some factors.
Example:
```admin-launcher.tps, admin-launcher.cacheSize, admin-launcher.workers```
if we want to fine tune them … what should the reasoning be?
Hey Team, we are currently converting the dashboard from PromQL to (Spotify specific TSDB) and I was wondering which interval do you use to scrape the metrics in Prometheus?
10s
30s
1m
5m
10m
other
emoji voting :point_up: :slightly_smiling_face:
I would prefer 30s, but I think we use 1/5
From what I saw in the dashboards, it seems FlyteAdmin is 5m and propeller 1m
<https://anyscale-dev.dev/login>
Hi, I’m unable to reach this URL for training, can someone please help?
<@U026CP5D1MF> What error are you seeing?
was listening to the presentation...
This site can’t be reached
`ERR_CONNECTION_REFUSED`
weird, this seems to be on my personal laptop only :disappointed:
Does it work from a different computer?
yh, on the same wifi :shrug:
Hmm, very weird, can you use the other computer?
switching over to another machine :man-surfing:
I did not receive any email(s) on the training material
let me check