Attached is a function that saves a Python model and then shares it with a specific project. A second API call is necessary for this, because otherwise the model is only visible to its owner when the 'without project' toggle is enabled (see screenshot).
Mandatory inputs for this function, which first creates the model object, are the 'target' project_id (the project the model is shared with) and a model name.
import pickle
import requests
import json


def handle(req):
    """Handles a request to the function.

    Args:
        req (dict): Request body containing arguments (req["args"]) and configuration (req["config"]).

    Returns:
        dict|str: The return value of the function. Can be a dictionary, string or None.
    """
    # global
    model_name = 'my_model'
    target_project_id = 'ad3edecd-acad-43fb-b672-76d53047eb4f'
    x_auth_token = req["config"]["authorization"]
    api_endpoint = req["config"]["onedataBaseurl"] + "/api/v1/models"
    api_endpoint_share = req["config"]["onedataBaseurl"] + '/api/v1/projects/' + target_project_id + '/resources'
    header = {'Authorization': x_auth_token}
    header_share = {'Accept': "application/x-www-form-urlencoded",
                    'Content-Type': 'application/json',
                    'Authorization': x_auth_token
                    }

    ### Prepare POST to save python model initially
    # create pickle of your model
    model_content = {'hello': 'Success!!!'}
    model_pk = pickle.dumps(model_content)
    # create mandatory meta info
    meta = json.dumps({'type': "PYTHON_MODEL",
                       'name': model_name,
                       'pythonModelType': "PythonBinary"
                       })
    payload = {'meta': meta}
    # prepare files format (array of tuples)
    files = [('file', ('file', model_pk))]
    # generate new python model
    response = requests.post(url=api_endpoint, headers=header, data=payload, files=files)
    ###

    ### Prepare POST to share python model to target project
    # retrieve model information needed for sharing
    model_owner = response.json()['owner']
    model_id = response.json()['id']
    model_name = response.json()['name']
    model_type = response.json()['resourceType']
    body = {'owner': model_owner,
            'id': model_id,
            'name': model_name,
            'resourceType': model_type
            }
    # share new python model to project
    response_share = requests.post(url=api_endpoint_share, headers=header_share, data=json.dumps(body))

    return {'StatusCode': response_share.status_code}
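If you want to try the handler locally before deploying it, a minimal sketch could look like the following. The dummy_req dict, the placeholder token and the placeholder base URL are my own assumptions for illustration; on the platform the req argument is supplied for you.

# Minimal local test sketch (assumption: run outside the platform with your
# own token and instance URL; the values below are placeholders only).
dummy_req = {
    "args": {},
    "config": {
        "authorization": "Bearer <your-token>",      # placeholder
        "onedataBaseurl": "https://<your-instance>"   # placeholder
    }
}

result = handle(dummy_req)
print(result)  # e.g. {'StatusCode': 200} if both POST calls succeeded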
In addition, here is a function that may also be helpful in this context: it lets you update any Python model.
Mandatory inputs for this function, which creates a new version of an existing model object, are the model_id and the model name. If you leave the model_id empty, a new model object (same name, different model_id) is created instead.
import pickle
import requests
import json


def handle(req):
    """Handles a request to the function.

    Args:
        req (dict): Request body containing arguments (req["args"]) and configuration (req["config"]).

    Returns:
        dict|str: The return value of the function. Can be a dictionary, string or None.
    """
    # global
    model_id = 'cc5b61d5-fc56-47ab-a55d-4fcf95049c2a'
    model_name = 'my_model'
    x_auth_token = req["config"]["authorization"]
    api_endpoint_update = req["config"]["onedataBaseurl"] + "/api/v1/models/" + model_id
    header = {'Authorization': x_auth_token}

    # create new pickle
    model_new_pk = pickle.dumps({'hello2': 'Success2'})
    files_new = [('file', ('file', model_new_pk))]
    # create mandatory meta info
    meta = json.dumps({'type': "PYTHON_MODEL",
                       'name': model_name,
                       'pythonModelType': "PythonBinary"
                       })
    payload = {'meta': meta}
    # update python model
    response_update = requests.post(url=api_endpoint_update, headers=header, data=payload, files=files_new)

    return {'StatusCode': response_update.status_code}
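Note that the pickled content in both examples is just a dummy dictionary; any picklable Python object should work the same way, since the endpoint only receives the bytes produced by pickle.dumps. As an illustration, the relevant lines could be replaced by something like the sketch below (this assumes scikit-learn is available in your function environment, which I have not verified).

import pickle
from sklearn.linear_model import LinearRegression

# train a tiny model and pickle it instead of the dummy dict
model = LinearRegression().fit([[0], [1], [2]], [0, 1, 2])
model_new_pk = pickle.dumps(model)
files_new = [('file', ('file', model_new_pk))]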
Unfortunately, the documentation for this endpoint (models__id__post) does not mention that three meta information fields (type, name and pythonModelType) are required.
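For quick reference, the minimal meta part that has to accompany the file upload looks like this (same values as in the functions above):

meta = json.dumps({'type': "PYTHON_MODEL",          # resource type
                   'name': model_name,              # display name of the model
                   'pythonModelType': "PythonBinary"  # matches the pickled binary upload used above
                   })
payload = {'meta': meta}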