Synchronous trigger of FaaS Endpoints

What happens if two users trigger a FaaS endpoint synchronously in my App? Will the requests queue up, or will they be processed in parallel? Does it matter what the FaaS function does exactly (e.g., read/write operations to Postgres)?

As a user, can I manipulate this behaviour somehow? I've seen the async property for endpoints in the Apps docu, but it is marked as legacy. Is there a better option available right now?

Hey Kai,

as I found out through some experiments, it seems that up to 5 executions of the same function can (and will) run in parallel. Once the number of concurrent executions goes beyond this, additional requests are queued. I have not managed to find out when this queue is "full" and starts rejecting additional requests, but a good few dozen execution requests did not cause any problems.
This behavior seems to stem from the underlying technologies used, and - given your function is stateless (!) - there's probably not much to worry about when it comes to parallel function executions (except perhaps the additional resource usage - most likely, all parallel executions share the same pod CPU/memory limits!).
If you have global variables in your code, you may face unexpected behavior. This should be avoided anyways, though.
Please note that this description is based on my observations only. Your mileage may vary. :wink:


On the “will it cause trouble downstream to things like OD and postgres” question:
We've witnessed exceptions during workflow job creation when a job was started twice at the exact same time (due to a uniqueness constraint on the composed key of the underlying entity). In practice, this only happens when starting the jobs programmatically from the same origin - i.e., at exactly the same time, not merely "close to each other". User-interaction-based metadata changes (like job creation, updates of DT metadata, etc.) happen constantly in normal daily use of OD. I therefore doubt that proxying these interactions through a function will change things drastically, as long as the origin is two individual user interactions.

Thanks for your insights so far.
With Postgres I meant the data storage not the meta db.
Our use case is that the FaaS function should only add new data to a table if it does not already exist. As you said, we observed that FaaS executions run in parallel. We can prevent a single user from triggering the function multiple times, but it seems we cannot prevent two independent users from triggering the same function at almost the same time. As the FaaS execution time varies heavily on our instance, the window where problems might occur is sometimes quite large…
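For the "insert only if not existing" case, one robust option is to let Postgres resolve the race instead of checking in application code: put a unique constraint on the identifying columns and use `INSERT ... ON CONFLICT DO NOTHING`. Then, if two executions race, one insert simply becomes a no-op rather than creating a duplicate or raising. A sketch, with made-up table and column names (SQLite serves as an in-process stand-in here; the same upsert syntax works on Postgres 9.5+, with `%s` placeholders instead of `?`):

```python
# Sketch: race-safe "add only if not existing" via a unique
# constraint plus ON CONFLICT DO NOTHING. Table/columns are
# hypothetical; swap sqlite3 for your Postgres driver in FaaS.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE measurements (
        device_id   TEXT,
        recorded_at TEXT,
        value       REAL,
        UNIQUE (device_id, recorded_at)
    )
""")

def add_if_absent(device_id, recorded_at, value):
    # The database enforces uniqueness atomically, so two parallel
    # executions cannot both insert the same (device_id, recorded_at).
    con.execute(
        "INSERT INTO measurements (device_id, recorded_at, value) "
        "VALUES (?, ?, ?) "
        "ON CONFLICT (device_id, recorded_at) DO NOTHING",
        (device_id, recorded_at, value),
    )
    con.commit()

add_if_absent("dev-1", "2024-01-01T00:00", 1.0)
add_if_absent("dev-1", "2024-01-01T00:00", 2.0)  # duplicate: ignored

rows = con.execute("SELECT COUNT(*) FROM measurements").fetchone()[0]
print(rows)  # → 1
```

With this in place, the long and variable execution window stops mattering: even if two users trigger the function at almost the same time, at most one row is inserted per key.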