Part 1 is definitely possible. In our current project, we implemented this using Outlook webhooks, which can be triggered from a Python processor and then send nicely formatted messages to a mail address or Outlook group. Exemplary Python code for this could look like:
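A minimal sketch, assuming the webhook URL is an Office 365 "Incoming Webhook" created for the target Outlook group (the URL below is a placeholder, and the function names are illustrative):

```python
import json
import urllib.request

# Placeholder: paste the "Incoming Webhook" URL configured for your
# Outlook group/channel here.
WEBHOOK_URL = "https://outlook.office.com/webhook/<your-webhook-id>"

def build_payload(workflow_name: str, status: str) -> dict:
    # Office 365 incoming webhooks accept a simple JSON body;
    # "text" carries the message shown in the group.
    return {"text": f"Workflow '{workflow_name}' finished with status: {status}"}

def send_notification(workflow_name: str, status: str) -> None:
    data = json.dumps(build_payload(workflow_name, status)).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises on HTTP errors

# Placed as the last processor in a workflow, e.g.:
# send_notification("My Analysis Workflow", "successful")
```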
The Python processor with webhook can sit directly in each workflow you want to monitor. Alternatively (and answering your last question), you can also put it into a “monitoring workflow” that collects the status of a list of workflows via the ONE DATA API.
Would you say this is a feasible approach for workflows that are not used constantly?
In our case, an analyst would set up a workflow, run it once, and use the results; for the next task they would build a new workflow. This means they would have to include the script in every workflow. Would you say this is still useful, or does it rather create overhead?
The outcome we would expect is only a “workflow finished” message with status “successful” or “failed”. If we have a monitoring workflow that collects this information from the API, it would have to be run by a scheduler every few seconds, but the difficulty would be stopping the scheduler once the workflow has actually finished and the notification has been sent. Is there a way to do so?
Our current assumption is that integrated notifications are very useful for workflows that run on a regular schedule, but less so for “ad hoc” notifications.
Whether it is too much overhead really depends on how the user works with ONE DATA in general. Setting up a webhook for yourself and configuring the Python processor should not take more than 10 minutes. The user can then simply copy this processor to any other workflow (not just in the current project) that they want to monitor. I think this would be worth the effort if the user regularly builds and runs workflows that take more than half an hour, but I guess this is personal preference.
Concerning the scheduler: to my knowledge there is no feature to stop a scheduler based on events (like “workflow xy finished”), but maybe there are tricks possible using the API?
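If the ONE DATA API does expose a way to disable a schedule, the notification processor could call it right after sending. This is a purely hypothetical sketch; the base URL, endpoint path, and authentication scheme are assumptions you would need to check against your instance's actual API documentation:

```python
import urllib.request

# Placeholder for your instance's API base URL.
API_BASE = "https://<your-instance>/api"

def build_disable_url(schedule_id: str) -> str:
    # Hypothetical endpoint path; adjust to the real ONE DATA API.
    return f"{API_BASE}/schedules/{schedule_id}/disable"

def disable_schedule(schedule_id: str, token: str) -> None:
    # Hypothetical: assumes bearer-token auth and a POST to disable.
    req = urllib.request.Request(
        build_disable_url(schedule_id),
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    urllib.request.urlopen(req)

# After the webhook notification has been sent, the same processor could
# call disable_schedule("<schedule-id>", token) to stop further runs.
```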
One more remark: to my knowledge, the approach I suggested can only produce a notification if the workflow finishes successfully. No notification will be sent if the workflow fails before the Python processor runs.
I think Matthias’ proposal to go via Outlook webhooks is a very applicable solution for your use case.
You could set up an API call that collects information on all jobs created during the last x minutes.
In the Python script you configure a list like 'workflow name: … state: …' that will be passed on in the notification.
The scheduler can then run on a regular basis (e.g. every 30 minutes) and only send a notification if there was any activity since the last run.
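The filtering step of that scheduled check could be sketched as follows. The job-list endpoint and the field names (`workflowName`, `state`, `finishedAt`) are assumptions, since the exact shape of the ONE DATA API response depends on your instance; only the "summarize recent activity, skip the notification if there is none" logic is shown:

```python
import datetime

# Assumption: `jobs` was already fetched from the ONE DATA API; each job
# is a dict with hypothetical fields "workflowName", "state", "finishedAt".
def summarize_recent_jobs(jobs, since):
    """Build the 'workflow name: ... state: ...' lines for jobs finished
    after `since`; return None if there was no activity."""
    lines = [
        f"workflow name: {job['workflowName']} state: {job['state']}"
        for job in jobs
        if job["finishedAt"] >= since
    ]
    return "\n".join(lines) if lines else None

# Example: the scheduler runs every 30 minutes and only notifies on activity.
now = datetime.datetime(2024, 1, 1, 12, 0)
jobs = [
    {"workflowName": "A", "state": "SUCCESS",
     "finishedAt": datetime.datetime(2024, 1, 1, 11, 45)},
    {"workflowName": "B", "state": "FAILED",
     "finishedAt": datetime.datetime(2024, 1, 1, 10, 0)},
]
message = summarize_recent_jobs(jobs, now - datetime.timedelta(minutes=30))
# `message` now covers only workflow A; if it is None, skip the webhook call.
```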
This is still a quite generic approach. Does it make sense to send the status of all workflows to the same group of recipients, or do we have to split it up so that each user only gets the workflows they built/ran? The latter could be realized by a master data table that maps users to their own Outlook group; the Python script would then match this information against the workflow owner/initiator.
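The per-user routing could be sketched like this. The mapping table is shown as a plain dict, and the webhook URLs and job field names (`owner`, `workflowName`, `state`) are illustrative assumptions; in practice the mapping would come from the master data table:

```python
# Assumption: the master data table maps each user to their own Outlook
# group webhook URL; represented here as a dict for illustration.
USER_WEBHOOKS = {
    "alice": "https://outlook.office.com/webhook/<alice-group>",
    "bob": "https://outlook.office.com/webhook/<bob-group>",
}

def route_notifications(jobs):
    """Group job summaries by workflow owner so each user only receives
    notifications for the workflows they ran. Field names are hypothetical."""
    per_webhook = {}
    for job in jobs:
        owner = job["owner"]
        if owner in USER_WEBHOOKS:  # users without a mapped group are skipped
            line = f"workflow name: {job['workflowName']} state: {job['state']}"
            per_webhook.setdefault(USER_WEBHOOKS[owner], []).append(line)
    return per_webhook
```

Each entry of the returned dict would then be sent to its webhook URL, so every user sees only their own workflows.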
In my opinion, there are two points you should consider:
Is it really beneficial to ‘spam’ the user with workflow notifications?
Can your instance spare the extra resources for a (more or less) high-frequency scheduler, or will it cause performance issues?