Hi Nelson Arapé, Hongxin Liang (from Spotify) - meet Nikoloas and Matthias from Arabesque
| Cool! Thanks for the intros, Ketan! Hey guys, nice meeting you :smile:
| Same here. Nice meeting you. :wave:
| I'll probably kick off going through the docs next week or the week after, try and get Flyte working on GCP and write some simple DAGs
Have you guys made some headway already?
| We haven’t ported different parts of flyte to GCP, and so far only deployed flyte to GKE.
The plan is to replace minio with gcs and postgres with cloudsql
| Sounds good. Can't wait to get my hands on the docs :slightly_smiling_face:
I'll let you know when we get started
| awesome!
|
Hey everyone!
Thanks for the invite Ketan Umare
I'm Adhita and I work for Cisco on Kubeflow.
Curious about the typed SDK and plugins for Spark, k8s in Flyte
| hey Adhita, I’m the main author of the python SDK. I can fill you in a bit on where we are now and where we hope to see it go. I’d also be interested in hearing a bit about your use cases to inform our work in this area. I’m also pretty knowledgeable about our plugin model and how to approach authoring new ones (if you’re interested in doing so). Let me know if you’d like to set up some time to chat, or we can talk here!
| Hey Matt
I'd love to chat about authoring plugins for the operators in Kubeflow
| Hey Adhita Selvaraj sorry I missed this, I was on vacation. Do you have any availability to VC? I think that’d be super interesting to work on.
| Hey
Are you available sometime today? I can move things around in the afternoon
| unfortunately not today (have a bunch of meetings), but definitely can find time next week
| Ok cool
|
Hi Adhita, welcome to Flyte. Awesome to know you work on Kubeflow. We would love to build support for various distributed ML operators like Katib, MPIOperator (or maybe just PodGroup), and TF Operator into Flyte
| I'm looking at the integration of TF-Operator into Flyte, first. Would a plugin for each operator be a good way of interacting with the operators?
| Adhita Selvaraj yes that is exactly what we are intending to do
we are soon going to announce one plugin
that will serve as a good example
let me share an example with you
this is how we integrated spark into flyte - <https://github.com/flyteorg/flytek8ssparkplugin>
| Awesome, thanks!
| again, we eventually have to have all the code in the flyteplugins repo (the sad part about go plugins), but we will clean this up
i will update this repo (sparkk8splugin) to show this can be done and tested independently
and then we can merge it in
| I'll take a look at this and get started on a tf-operator plugin
| that is amazing
I am super interested in that, so any help you need will be available
| Thank you so much
| the only problem is I am on paternity leave, but we will help you
Adhita Selvaraj let us create an issue and set you as the owner of that issue, and once we do that we will update the docs with the canonical way
:slightly_smiling_face:
| Awesome that sounds good :raised_hands:
| :+1: Also, I will share the work we have done with another team (it is under NDA so I cannot share more details); that is one of the first integrations that will come out soon
but they were able to do this in complete isolation
give me this weekend
monday you can start, what say?
| Yeah that sounds good :+1::skin-tone-4:
| <https://github.com/lyft/flyte/issues/115>
can you give me your handle
i will assign this to you
| swiftdiaries
Thanks
| weirdly I am unable to assign it to you, that's ok, I will figure out the mechanics
I cc'ed you, just ack it
| Done :slightly_smiling_face:
| thank you
i am just updating a template plugin so that you can use that to get started
| Oh yeah I saw that in the flyteorg repo, thank you
| ya i am updating it
let me do it
Adhita Selvaraj <https://github.com/flyteorg/flytepluginexample>
here you go
| Awesome, thank you! I'll follow this as a guide
| you should be able to `make propeller_compile` and compile your go code
just copy this into your repo
Matt Smith should be able to help you with the python side of the code; he is out for the next week
Let me create a new channel for this Adhita Selvaraj
| Oh yeah, this thread getting too long xD
|
hi, any reason that we don’t generate rpc stub for java here <https://github.com/lyft/flyteidl/tree/master/gen/pb-java/flyteidl>
| my thinking is we just don’t use java yet and perhaps we were struggling with the gen config or something, so we disabled to unblock. if you want to enable and get them building, that would be awesome!
| yeah we would like to get that fixed.
i will try to modify the docker image directly because the repo producing that docker image is not open-sourced.
from our side, we took all the protos and generated java files using maven. things seem to work so far.
| hmm interesting, which docker image is it?
| lyft/protocgenerator:5e6a3be18db77a8862365a19711428c2f66284ef
<https://hub.docker.com/r/lyft/protocgenerator>
| thank you
| hmm, grpc_java_plugin is not installed in the image
so i tried something like this
```
$ apk add --no-cache -X <http://dl-cdn.alpinelinux.org/alpine/edge/testing> grpc-java

--- entrypoint.py.origin 2019-10-16 23:52:23.000000000 +0200
+++ entrypoint.py 2019-11-13 20:16:33.000000000 +0100
@@ -77,9 +77,10 @@
                   "--protodoc_out="+output_dir]
     else:
         protoc_args.append("--"+args.language+"_out="+output_dir)
-        if args.language != "java":
-            protoc_args.append("--grpc_out=" + output_dir)
-            protoc_args.append("--plugin=protoc-gen-grpc="+ shutil.which("grpc_"+args.language+"_plugin"))
+        protoc_args.append("--grpc_out=" + output_dir)
+
+        plugin_name = "grpc_" + args.language + "_plugin" if args.language != "java" else "protoc-gen-grpc-java"
+        protoc_args.append("--plugin=protoc-gen-grpc=" + shutil.which(plugin_name))
     # Generates the validate methods.
     if args.validate_out:
```
this however will make the image larger because of openjdk
| Discussed offline with the team. We want to pull those tools into flytetools, which is open source
And from there we can contribute easily to the same docker image
| that’s awesome!
| How blocked are you on this? Kinda busy over here getting ready for kubecon
| not at all
as I said, we generated those ourselves.
| Ok great. We have it on the list
| :thumbsup:
|
i checked the docker image and there is special treatment to ignore `java`.
| how urgent is this?
spent some time taking a look at this just now.
the reason we don’t do it is because the image that we’re using doesn’t have the java grpc compiler installed.
in order to install it, we’ll need to do these steps: <https://github.com/grpc/grpc-java/tree/master/compiler>
| Yeah, I managed to hack the image and get it to work. It wasn’t complicated. So later, once that image is open-sourced, I can send a PR.
|
Awesome demo tonight, guys. I got some great questions from one of our Sr. Directors:
* How does Flyte validate code quality and readiness before operational execution?
* Is Flyte primarily for training or can it be leveraged for operational execution as well?
Operational execution == model serving.
| I think I can expand a bit on Ketan's answer. In addition to the type checking and unit testing, which can be leveraged in a tight iteration loop to shake most bugs out of a workflow, we also make it easy to configure different domains to which your workflow can be deployed. These domains are flexible (think production, staging, canary, shadow, etc.) and can be added or removed as needed for your use cases. Thanks to the configurability and parameterization of workflows, it is easy to overlay constraints to ensure safety when deploying a workflow into a testing partition for validation (data access restrictions, resource allocation, etc.), and the semantics thereof can be defined by your CI processes. From there, workflows can be run against real data at scale and emit metrics and outputs which can be observed by a QA process. And, assuming a production-ready Flyte deployment, that is easy to implement (I've done it :p)
For operational execution, I too would like to hear a bit more about your specific use case and the distinction between training, operational execution, and model serving. But I can say this: at Lyft we regularly retrain models and execute other processes on Flyte that directly impact business operations. We also do ad hoc training and experimentation. Generally speaking, Flyte is a component in the live behavior of the business. How Flyte emits the produced artifacts into services dealing with user traffic varies by use case, but we have an awesome project called Data Catalog where we are working to provide a link between the artifacts complicated pipelines create and the services that depend on those artifacts. This service understands the parameters and versioning applied to an artifact, making it easy to manage and query model artifacts over time. Further, it provides a simple API by which a service can retrieve the latest and greatest model/artifact for its specific need. And we are looking to integrate it more directly with Flyte going forward.
| Thanks for the additional context! The concept of a validation domain sounds really helpful. Our use case is that we have profoundly strict requirements for models being promoted to production. The validation pipeline is deliberately robust and uncompromising, which makes sense given the data.
| ok cool, Alexander Perlman! so in that case, one pattern that is popular is this: create 3 workflows. One workflow for the actual computation of data and creation of artifacts, one workflow for validating the artifacts, and a final workflow which commits the artifacts. These pipelines can be parameterized and generalized in any way you see fit.
Once you are happy with each individual pipeline, it is easy to compose them into a large workflow. That workflow will first run computation on data and produce artifacts as outputs, then those outputs can be fed into the validation workflow. If the validation workflow doesn’t like what it sees, it can fail itself and the macro-workflow will short-circuit. Alternatively, it can continue but provide a signal not to use the produced artifact--or provide an alternate artifact to commit. Then if the validation workflow allowed the macro-workflow to continue, it will move on to the commit stage.
And that’s just what is possible now--we’d like to build towards having pre/post validators. We’d like to finish our implementation of conditional and error-handling behavior in workflows. We’d also like to work towards workflows that are triggered in reaction to events.
P.S. I think the auditability and hermeticism provided by Flyte could be a major benefit when dealing with data of such standards.
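To make the compute/validate/commit pattern above concrete, here is a plain-Python sketch of the control flow. The function names and artifact shape are hypothetical, and this is not the flytekit API; in Flyte each stage would be its own workflow, composed into a macro-workflow.

```python
# Hypothetical sketch of the three-workflow pattern: compute produces an
# artifact, validate gates it, commit publishes it. If validation fails,
# the macro-workflow short-circuits and nothing is committed.

def compute_workflow(data):
    # Stand-in for the computation workflow: produce an artifact.
    return {"model": sum(data) / len(data)}

def validation_workflow(artifact):
    # Stand-in for the validation workflow: enforce the promotion bar.
    return artifact["model"] > 0

def commit_workflow(artifact, store):
    # Stand-in for the commit workflow: publish for downstream services.
    store["latest"] = artifact
    return store

def macro_workflow(data, store):
    artifact = compute_workflow(data)
    if not validation_workflow(artifact):
        return None  # short-circuit: the artifact is never committed
    return commit_workflow(artifact, store)
```

Each stage can be iterated on and tested in isolation; only the composition decides whether a produced artifact ever reaches the commit step.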
|
Hey everyone. :wave: Great talks at KubeCon today. I'm excited to start digging into this.
| Welcome! Please don't hesitate to ask us any questions.
|
Curious to know what people see as the differences between kubeflow and flyte?
| Hi Oliver - great question. IMO Kubeflow is an umbrella project with components beyond the compute portions of model training and pipelines. Flyte is more directly comparable to Kubeflow Pipelines. Flyte is opinionated about the pipelines, and we feel we offer an extremely differentiated and battle-tested product in this regard
But other parts of Kubeflow make perfect sense - like serving - and they should be complementary to Flyte. In some world we do see Flyte being one of the supported computational and pipeline frameworks in Kubeflow.
With our artifact caching, lineage tracking, multi-cluster and multi-tenant support, deep SDK, and type system, we are ahead of Kubeflow Pipelines in features, though more narrowly focused on this problem
|
Thanks for the in-depth responses, Ketan Umare and Matt Smith! Is there any documentation on your authentication / authorization workflow? Do you have tie-ins to dex / LDAP? Is there group-based authorization so that multiple people can collaborate on the same project?
| Alexander Perlman we are in the process of merging in AuthN using oauth2. We currently do not have any authorization, but would love to get contributions. Yee can share more info on when oauth2 for the client will be merged, and docs
| We are finishing up the implementation of the authn components. This should be done in the next few weeks, after which we will focus on a few things: migration of users, documentation, and design for authz. Migration shouldn’t be hard so hopefully documentation will happen sooner rather than later.
in the meantime if you are playing around with it and have any questions, happy to answer
| If I may add to the answers above, as it stands, the project/domain grouping is a logical grouping for workflows/tasks/executions. It doesn't interact with users in any way.
After the work Yee & Ketan referred to is fully merged and released, users will be able to authenticate by setting up IDP config on FlyteAdmin; you can use any OIDC-compliant IDP to authenticate users. (see the config here: <https://github.com/lyft/flyteadmin/blob/master/pkg/auth/config/config.go> for what's needed to be filled in)
As Yee mentioned, no authorization policies can be created/enforced at this point, however this has been on our minds and is something we would like to look into, if you would like to write up a proposal/architecture as a 1-pager, we would love to collaborate on this!
|
Hi all, thank you for providing this channel. I checked out the docs but could not find this information. How does flyte handle having a large number of tasks vs. # of workers? Does it have the concept of having too much work for capacity and keeping a queue? If it does have a queue, is there some way of prioritizing work?
| Hi Matteo & Welcome to Flyte!
Excellent question, let me try to break down the different knobs we have to control that...
• FlyteAdmin (Our control plane) can create and sync different ResourceQuotas to different namespaces to limit how much resources can be used by each namespace. Our plugins understand the errors returned when Quota is hit and can handle that correctly by backing off.
• FlytePropeller (our execution plane) uses WorkQueues provided by api machinery to queue all the new/updated workflows to process. Within our Lyft deployment we set the number of workers to 100 (I can double check), but that queue can easily be in the thousands. The real metric we look at here is the *throughput*, defined as how many workflows can be processed through propeller per second. We very thoroughly look into the latency per round (as in, how long did it take a single worker to go through a single workflow and attempt to make 1 update). The utopian goal is for the round latency to be in milliseconds, to achieve as high a throughput as the Pod Controller has for pods. There are a few tricks involved here, like offloading idempotent work to background queues/workers to free the master workers to maintain high throughput.
• *Namespace sharding for propeller:* you can deploy propeller into different namespaces and configure it to watch only those namespaces (e.g. watch only the prod namespace, etc.) to completely isolate it from the noisy neighbor problem.
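As an illustration only (not FlytePropeller's actual implementation), the worker/round model described above can be sketched like this: each round, a worker takes one workflow off the queue, performs one small state update, and re-enqueues the workflow if it is not finished. Throughput then depends directly on keeping the per-round latency small.

```python
from collections import deque

def process_queue(workflow_rounds, num_workers=100):
    """Toy round-based work queue.

    workflow_rounds maps a workflow id to the number of short rounds of
    work it needs. Returns the total number of rounds performed.
    """
    q = deque(workflow_rounds)
    remaining = dict(workflow_rounds)
    total_rounds = 0
    while q:
        # Each tick, up to num_workers workers each grab one item and
        # perform a single short round of work on it.
        for _ in range(min(num_workers, len(q))):
            wf = q.popleft()
            remaining[wf] -= 1
            total_rounds += 1
            if remaining[wf] > 0:
                q.append(wf)  # not finished: back onto the queue
    return total_rounds
```

Because each round is a small, fast update rather than a blocking run of the whole workflow, a pool of ~100 workers can drain a queue of thousands of workflows, which is why round latency is the metric to watch.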
| • Are these `ResourceQuotas` specific to k8s? If I wanted to author a plugin to run workers outside of k8s, would it still be possible to use these, or is implementing this part of the plugin's job? It's a bit confusing where the responsibilities of a "plugin" begin and end in flyte
• Similar question for the `WorkQueues`, is this specific to k8s workers? Where is the actual queue stored?
Matteo Simone excellent question - we have 2 types of resource management.
1. We use K8s resource quotas to manage K8s resources.
2. For any services that are outside of K8s, we use a centralized resource pooling system, whose interface is here - <https://github.com/lyft/flyteplugins/blob/master/go/tasks/pluginmachinery/core/resource_manager.go#L37-L40>
As a plugin writer this is available to you, automatically configured and managed per execution. (Behind the scenes this relies on a Redis DB, either K8s-local or cloud-hosted)
As for the K8s resource quotas, we have observed some problems with them; we will continue to keep them, but might start using our resource manager to provide fairness.
With propeller itself, we are in the process of implementing fair queues (I can share the PR if interested)
This is a great question as well >>> `Similar question for the WorkQueues, is this specific to k8s workers? Where is the actual queue stored?`
The queue is only logical and stored in etcd
| Just to expand on what Ketan said here about `WorkQueues`: the queue itself (ordering, etc.) is an in-memory representation of what's stored in etcd. If you restart propeller, you lose the ordering/retry count/in-processing status of all items from the queue, and you repopulate the raw items from etcd once more.
As for using the resource manager in your plugin, you can do something like this:
<https://github.com/lyft/flyteplugins/blob/master/go/tasks/plugins/hive/executor.go#L111-L113>
That will register the amount of resources you have available (you can choose the granularity and encode that in the namespace...)
Then all you need to do before trying to kick off an execution is this:
<https://github.com/lyft/flyteplugins/blob/master/go/tasks/plugins/hive/execution_state.go#L147>
Then you need to make sure to release the resource back in Finalize(), like this:
<https://github.com/lyft/flyteplugins/blob/master/go/tasks/plugins/hive/execution_state.go#L286>
We are in the process of overhauling our plugin-contrib docs and samples though... for what it's worth
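The register/allocate/release flow above can be summarized with a toy sketch. The names here are illustrative stand-ins; a real Flyte plugin calls the resource manager's registration and AllocateResource/ReleaseResource APIs rather than this counter.

```python
class ToyResourceManager:
    """Token-pool stand-in for Flyte's centralized resource manager."""
    def __init__(self, limits):
        self.available = dict(limits)  # resource namespace -> free tokens

    def allocate(self, namespace):
        if self.available.get(namespace, 0) <= 0:
            return False  # no capacity: the plugin should back off and retry
        self.available[namespace] -= 1
        return True

    def release(self, namespace):
        self.available[namespace] = self.available.get(namespace, 0) + 1

def run_task(rm, namespace, task):
    # Try to claim capacity before kicking off the execution.
    if not rm.allocate(namespace):
        return "queued"  # stays queued until a token frees up
    try:
        return task()  # kick off the execution
    finally:
        rm.release(namespace)  # always return the token in finalize
```

The try/finally mirrors the Finalize() requirement: whether the task succeeds or fails, the resource goes back to the pool so other executions are not starved.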
| When you say `if you restart propeller, you lose the ordering/retry count/in-processing status` , do you mean that it is actually lost or that it just needs to be reloaded into memory?
| Matteo Simone yes, they are reloaded into memory, so it starts fresh. From that point, it builds up fairness again
but progress is never lost (as long as it was durably stored to etcd)
Does that mean that you lose the knowledge of what tasks are actually out being worked on by workers?
Oh, ok
Thank you for the help so far. Maybe it would help if I describe the use case I am trying to make sure there are no blockers for, because I don't have as much context on Flyte to ask the right questions.
I have more "tasks" than workers and I want to queue the work into Flyte and crunch on this queue until it's done. I already have elastic compute that I can use to launch workers, so I would like to write a plugin. However, it has a finite number of workers (say, 10,000). So my ideal situation is that Flyte can take in all of the work required, my plugin can launch workers over time as it can, and ideally there is some sort of priority system between task types so that this large backlog does not affect some more important workloads. It would also be beneficial if it can detect failed workers (some heartbeat or just checking in on them?) and retry.
I _think_ the answer to all of this is that yes it's possible, except for the prioritization. But I think you can achieve some form of prioritization by having different worker pools for different workflows.
Matteo Simone let me answer your first question, `Does that mean that you lose the knowledge of what tasks are actually out being worked on by workers?` - No, 99% of the time we don't, but it is possible that you launch a task and the storage fails (the etcd write fails — a 2PC problem), or Propeller goes down before we could durably write (deployment, crash, etc.); then we will lose that information. The solution we prefer for such cases is that the downstream system is idempotent. We can deterministically create an identifier for every execution (and task execution), and if the system is like K8s or some of the AWS services, you can pass the same identifier along and it will be de-duped
Now for the next part
`that I can use to launch workers, so I would like to write a plugin.` can be done. Plugins for non-k8s APIs are possible, just a little harder; we have a proposal right now to make it easier <https://github.com/lyft/flyteplugins/pull/32>
`Also would be beneficial if it can detect failed workers (some heartbeat or just checking in on them?) and retry.` Flytepropeller is essentially an event loop, so yes this is absolutely possible and this is how it detects failures :slightly_smiling_face:
`retry` that's part of the specification
`prioritization` not clear on that but would love to help
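The deterministic-identifier idea mentioned above can be sketched like this in Go (the function and naming scheme are illustrative, not Flyte's actual implementation):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// deterministicTaskID derives a stable identifier from the execution
// coordinates, so a relaunch after a propeller crash produces the same
// name and an idempotent downstream system (e.g. K8s, keyed on the
// resource name) can de-dupe the request.
func deterministicTaskID(project, domain, execution, node string, attempt int) string {
	h := sha256.Sum256([]byte(fmt.Sprintf("%s/%s/%s/%s/%d", project, domain, execution, node, attempt)))
	// Keep it short enough for a K8s object name.
	return fmt.Sprintf("task-%x", h[:8])
}

func main() {
	fmt.Println(deterministicTaskID("flyteexamples", "development", "exec-abc", "n0", 0))
}
```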
Regarding prioritization, not sure how much I can help at this point with my limited knowledge of Flyte, but I can motivate the use case. For our projects, we generally have a big backlog of work of different types. There are 2 kinds of prioritization:
1. Prioritizing among different workflow types
2. Prioritizing among the same workflow type
#1 seems easy: you can increase the # of workers subscribed to certain tasks (assuming this is possible in Flyte), and this means that those jobs will get done faster.
#2 is the difficult one and one that is generally not supported by most services (for us it is a nice-to-have, not a hard requirement). #2 is where you have a large backlog of work but some of those tasks you really care about. For example, I might be evaluating a model on a huge dataset while also evaluating a different model on a smaller dataset. If I know task 1 will take forever and is lower priority, it would be great if I can tell Flyte: please perform task 2 before task 1 because it is more important to me.
I definitely need to dig into Flyte further; this is a lot of info to get me started. That PR #32 sounds helpful
Matteo Simone at the moment, for #2, both workflows will be treated the same way. There is resourcing per project too. So let's say you have 10 real slots and you oversubscribe each tenant (or workflow) with 6 slots (20% oversubscription); Flyte will not let one tenant run over and take more than 6 slots (resource manager)
As for #1, it is actually quite different. From Flyte's point of view there are no workers per task type. They are just workers, which shuffle between workflows (it's an event loop), and we are working on FairQ for this, so that one tenant does not run away with all the slots
hope this helps
also we should probably do a VC or something to discuss more in detail
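The per-tenant slot math above can be sketched with a tiny limiter (illustrative names only, not Flyte's resource-manager API):

```go
package main

import "fmt"

// tenantLimiter caps concurrent task slots per tenant, mirroring the idea
// behind Flyte's resource manager: with 10 real slots and each of two
// tenants capped at 6, no single tenant can take everything.
type tenantLimiter struct {
	cap   int
	inUse map[string]int
}

func newTenantLimiter(capPerTenant int) *tenantLimiter {
	return &tenantLimiter{cap: capPerTenant, inUse: map[string]int{}}
}

// Acquire returns false when the tenant is already at its cap.
func (l *tenantLimiter) Acquire(tenant string) bool {
	if l.inUse[tenant] >= l.cap {
		return false
	}
	l.inUse[tenant]++
	return true
}

func (l *tenantLimiter) Release(tenant string) {
	if l.inUse[tenant] > 0 {
		l.inUse[tenant]--
	}
}

func main() {
	l := newTenantLimiter(6)
	granted := 0
	for i := 0; i < 10; i++ {
		if l.Acquire("tenant-a") {
			granted++
		}
	}
	fmt.Println(granted) // tenant-a is held to its cap of 6
}
```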
Ok, so are workers on a project level? As in, a worker must know how to execute every possible task in a project?
I just realized an additional question. Does Flyte expect workers to stick around or can they be ephemeral and die off?
Workers are per propeller (operator) instance; they process workflows that that instance of propeller is monitoring. A single instance can monitor all namespaces in a cluster (i.e. all projects) or only a subset of those (a single project or so)...
What they do is they pick up a Workflow instance, traverse through its graph of nodes and attempt to make progress; that might mean executing a node, or might mean just updating the status of a node to succeeded/failed..
Nodes can be of different types: Branch, Workflow and TaskNode. From what I've been reading, you are interested in the TaskNode, so let's talk about that one.
When a worker sees a TaskNode, it looks at the TaskTemplate referenced by that Node and finds a plugin (that has already been registered) that is capable of handling that particular task type (e.g. for a SageMaker task, we should find a SageMaker-aware plugin)
When it finds that plugin, it then passes over the task template (and a bunch of other things), and expects the plugin to launch whatever it's configured to do (e.g. a Pod, or it might make a service call)
Then periodically, another worker might pick up that same workflow, and keeps calling the same plugin to attempt to make further progress... until a time when the plugin will return "this task has succeeded/failed"; then the worker will know it's time to move on to the next node..
You can think of the plugin as a state machine of sorts; its goal is to take a task template from a "spec" state to a "terminal" state... you can have a simple state machine that just moves from "spec" to "running" to "succeeded/failed" or a state machine of 10 states... up to your implementation...
Workers are completely managed by propeller (operator)... what you develop as a plugin developer is more or less a singleton that gets registered with the system at startup time. Different workers at different times will call into your plugin to make progress on various tasks (all of the same registered type)..
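A minimal sketch of that "spec" → "running" → terminal state machine, with invented type names rather than the actual flyteplugins interfaces:

```go
package main

import "fmt"

// Phase models the minimal plugin state machine described above: each
// propeller round-trip calls Transition until a terminal phase is reached.
type Phase int

const (
	PhaseNotStarted Phase = iota
	PhaseRunning
	PhaseSucceeded
	PhaseFailed
)

// Transition advances one step per evaluation; done/ok report whether the
// remote work has finished and whether it succeeded.
func Transition(p Phase, done, ok bool) Phase {
	switch p {
	case PhaseNotStarted:
		return PhaseRunning // launch the pod / make the service call
	case PhaseRunning:
		if !done {
			return PhaseRunning // check again on the next round-trip
		}
		if ok {
			return PhaseSucceeded
		}
		return PhaseFailed
	default:
		return p // terminal phases are sticky
	}
}

func main() {
	p := PhaseNotStarted
	p = Transition(p, false, false) // launched
	p = Transition(p, false, false) // still running
	p = Transition(p, true, true)   // finished successfully
	fmt.Println(p == PhaseSucceeded)
}
```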
To add to what Haytham Abuelfutuh said, the plugin that you write is essentially like a stateless service which has an API that Flyte can talk to and ask for things to be done. The worker pool is outside of this and invokes a call to the plugin when some work is to be done; the API essentially looks like this
1. Start work (context of current workflow, name, plugin-specific information and inputs)
2. Has the work completed (context of current workflow, name, plugin-specific information and inputs) -> yes / no with details
3. Kill the work (because an async abort was issued)
Actually for example, if you are writing a Kubernetes operator to manage the work, the plugin will look like this example - <https://github.com/flyteorg/flytepluginexample>
But for services like you are doing, we have a deeper API
we would love to help you get started with it, the mechanics might be clearer once we start implementing
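The three-call shape above could be sketched as a Go interface (the types and method names are hypothetical, not the real flyteplugins contract):

```go
package main

import "fmt"

// TaskContext stands in for what Flyte would hand the plugin on each call.
type TaskContext struct {
	WorkflowID string
	TaskName   string
}

// Plugin mirrors the three-call shape: start the work, poll whether it has
// completed, and kill it when an async abort is issued.
type Plugin interface {
	Start(ctx TaskContext) error
	Done(ctx TaskContext) (finished bool, succeeded bool, err error)
	Kill(ctx TaskContext) error
}

// inMemoryPlugin is a toy implementation that "finishes" work one poll
// after it was started, just to show the call sequence.
type inMemoryPlugin struct {
	polls map[string]int
}

func newInMemoryPlugin() *inMemoryPlugin { return &inMemoryPlugin{polls: map[string]int{}} }

func (p *inMemoryPlugin) Start(ctx TaskContext) error {
	p.polls[ctx.TaskName] = 0
	return nil
}

func (p *inMemoryPlugin) Done(ctx TaskContext) (bool, bool, error) {
	p.polls[ctx.TaskName]++
	return p.polls[ctx.TaskName] > 1, true, nil
}

func (p *inMemoryPlugin) Kill(ctx TaskContext) error {
	delete(p.polls, ctx.TaskName)
	return nil
}

func main() {
	var plugin Plugin = newInMemoryPlugin()
	ctx := TaskContext{WorkflowID: "wf-1", TaskName: "train"}
	plugin.Start(ctx)
	done, _, _ := plugin.Done(ctx) // first poll: still running
	fmt.Println(done)
	done, ok, _ := plugin.Done(ctx) // second poll: completed
	fmt.Println(done, ok)
}
```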
Hello all :wave:
I created a public Flyte registry called "FlyteHub" (you can think of it like NPM or PYPI but for Flyte workflows). You can "click-to-import" public workflows and run ML without writing any code. Check it out at <https://flytehub.org>
I also have a proposal to enable FlyteHub in the Flyte sandbox. Please add a :thumbsup: on the following issue if you're inclined :slightly_smiling_face:
<https://github.com/lyft/flyte/issues/127>
Oliver Mannion that ^ might be of interest to you as one of the differentiators. Because of how everything is defined in a standard language, building something like that registry is a straightforward concept that can carry from one environment to another...
Cool that makes sense, thanks! Yep, we're currently running Kubernetes on AWS, so I'll look into CloudWatch schedules. I'll also take a look at Styx. :+1:
In case you are looking at Styx Hongxin Liang Nelson Arapé
We use Jsonnet to turn a config file of `(workflow_id, cron_expr)` pairs (a simplification) into a series of K8s CronJobs.
With a GitOps setup, updating the schedule becomes a standard PR process.
All the `CronJob`s do is retrieve the workflow spec from somewhere (if necessary) and make an API call to `flyte` (if it supports API-call triggering).
Jonathon Belotti it does support API-call triggering
and it also has in-built scheduler support behind the API
What do you mean by that last comment?
This is true right?:
> Currently, Flyte does not have a built in cron style scheduler.
But it does have some “in-built scheduler” which is not cron?
Jonathon Belotti sorry, what I meant is that when you create a launchplan, you can associate a cron-style schedule with it
it will use AWS CloudWatch rules to trigger the “execute” API internally
on GCP this will be done by Cloud Scheduler
and you are right
we could use the same API to launch a cron job (a little expensive maybe) but it should work
or you could not use the schedule as part of the launchplan and just externally trigger Flyte workflows
i hope that helps
> Launch plans simplify associating one or more schedules, inputs and notifications with your workflows.
So I am right to say that a `LaunchPlan` describes how a workflow should be launched, but doesn’t include any trigger behaviour, so CloudScheduler would be the trigger that interacts with a `LaunchPlan`?
absolutely
I get it.
Agree that a `CronJob` is expensive to create just to launch a workflow, but at our scale it’s OK. At ~1000 workflow schedules a day, or whatever Lyft is doing, it would be under-engineered.
cool, we would love that contribution if you guys are up to it :slightly_smiling_face:
Jonathon Belotti
that would benefit any sandbox deployments and a lot of simple use cases
we could also write a simple controller that just triggers schedules
Katrina Rogan from my team can help you guys get started if interested
what do you folks think of this? <https://github.com/lyft/flyte/issues/129>
:wave:
I dig it. I haven't used go modules yet but this seems like a good call.
One issue we might think about is developers having different Go versions.
For example, if 2 users are committing to the codebase with different Go versions, is there any chance the `go.mod` file will flap back and forth in format? (if each user's Go version formats the `go.mod` file differently)
^ To clarify, this issue already exists with our current dep setup, but I've thought about solving that with containerized dependency management (run dependency updates in a container with a specific Go version).
+100
I totally agree with all your posted reasons... is that something you can help us move towards? We are already using semantic versions everywhere; I hope that makes the transition easier...
I have sent a PoC PR to datacatalog because it has much fewer dependencies.
It was smooth, but there is some weirdness. E.g. if a dep is not sem-versioned, go mod will use the commit SHA instead. Kinda makes sense.
Andrew Chan ^ <https://github.com/lyft/datacatalog/pull/21>
yeah dep did that too (pinning to a SHA)
I tried on flyteadmin locally and it was smooth too. Got some trouble with propeller though.
Mostly due to forking of the k8s API and machinery.
I’ll take a look at the datacatalog PR, thanks for posting that. Hongxin Liang Have you tried to make an image with it yet?
Yeah it's part of the PR
I even deployed it. :)
I didn't change the boilerplate because it's a POC.
wow, awesome
Hongxin Liang this is awesome, thank you so much! Actually, I think moving to go modules is an important step to make the plugin system work even better
worked on a few more PRs. `make lint` is still having issues.
fixed most of them
or maybe all of them
This is awesome
:clap:
OMG can't thank you enough!!
I might have missed something. Please take a look.
I approved and merged the flytestdlib change and released a new pflags binary... can you rerun flyteidl generate?
Yes, I will do that. Planned to do it today but got dragged into another issue.
again, thank you a ton!
hmm, this is not nice <https://travis-ci.org/lyft/flytestdlib/builds/621756208?utm_source=github_status&utm_medium=notification> :disappointed:
during the build, `go.mod` was modified
Hey Hongxin Liang Yee made me reconsider <https://github.com/lyft/flytestdlib/pull/51>
What's broken that you are trying to fix?
<https://github.com/golang/go/issues/30515>
`go get` modifies go.mod for tools like this, which the code doesn't really depend on.
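One common workaround discussed on that issue is the `tools.go` pattern: a build-tagged file that imports the tools, so they become deliberate entries in `go.mod` instead of side effects of `go get`. A sketch (the tool import path is just an example, not what flytestdlib uses):

```go
//go:build tools
// +build tools

// This file is never compiled into the binary; it only exists so that
// `go mod tidy` records tool dependencies deliberately.
package tools

import (
	_ "golang.org/x/tools/cmd/goimports" // example tool; substitute e.g. the pflags generator
)
```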
ahaa
yeah that I saw happen for me too... makes sense
Hongxin Liang could you take a look at this when you get a chance please?
<https://github.com/lyft/boilerplate/pull/3>
basically just copied your changes over
Sure
thanks!