Execution of a Workflow in a Loop using the SDK does not create Data Tables as intended

Hi OD-Experts,
I have created a Workflow that creates a Data Table from a source table by filtering the data in a query; the filter value is handed over as a workflow variable. This workflow is used in two ways: it is part of a production line, and it is triggered directly via the Python SDK from a second workflow. The production line itself is triggered via the Python SDK from a third workflow. Whenever I run the workflow that triggers the production line in a loop, the tables are written as expected.
It is different when I execute the workflow directly via the SDK, regardless of whether I use execute_async or execute_sync. The job_shallow object states that the job finished successfully, and the workflow itself also shows a successful execution, yet no tables are written. The job history of the triggered workflow shows no error either.
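
For reference, the asynchronous variant I tried looks roughly like this (a simplified sketch: I assume here that execute_async accepts the same arguments as execute_sync, the EXTRACTOR_NAME value is just a placeholder, and one_data_api / JobExecutionState are set up as in the full snippet further below):

wf_id = '8f4c30f0-8851-4c17-a23f-1fd3fb9e35ea'

# Fire the workflow without blocking, with the same kind of variable assignment
one_data_api.workflows.execute_async(
    id=wf_id,
    variable_assignments=[{"variableType": "string",
                           "variableName": "EXTRACTOR_NAME",
                           "variableValue": "some_extractor"}])

# Later, fetch the latest successful job: execution_state reports SUCCESS,
# but still no data table is written.
wf_job_shallow = one_data_api.workflows.jobs.get_latest_job_shallow(
    wf_id=wf_id, job_execution_state=JobExecutionState.SUCCESS.value)
print(wf_job_shallow.execution_state)
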
I have tried different approaches:

  • A single run without a loop: no data table is written.
  • Running in a loop without a sleep command: no data table is written.
  • Running in a loop with forced sleeps: no data table is written.

Am I missing something? I have attached the snippet below; maybe I am overlooking something in the code.

# time is needed for the forced sleep; one_data_api (the SDK client),
# JobExecutionState and dataset (a DataFrame with an EXTRACTOR_NAME column)
# are set up further above.
import time

wf_id = '8f4c30f0-8851-4c17-a23f-1fd3fb9e35ea'

for index, row in dataset.iterrows():
    print("Execute WF synchronously")

    # Hand the filter value over to the workflow as a variable assignment
    wf_proc = one_data_api.workflows.execute_sync(
        id=wf_id,
        variable_assignments=[{"variableType": "string",
                               "variableName": "EXTRACTOR_NAME",
                               "variableValue": row["EXTRACTOR_NAME"]}])

    # Intentionally wait for the process to finish writing.
    time.sleep(50)

    wf_job_shallow = one_data_api.workflows.jobs.get_latest_job_shallow(
        wf_id=wf_id, job_execution_state=JobExecutionState.SUCCESS.value)

    print("Workflow Shallow Job")
    print(wf_job_shallow.start_time)
    print(wf_job_shallow.end_time)
    print(wf_job_shallow.execution_state)

Do you have any suggestions on how to overcome this issue?

Update: A short call revealed that the Data Table Save processor inside the slave workflow creates the new data tables in a different project than the one the slave workflow is located in. In the end, the tables end up in a project that contains neither the trigger WF nor the slave WF, so I assume this is a bug.