I’m running the Python client to read inferred relations from a remote Grakn server on Azure. However, I’m seeing odd performance with read_transaction.
My Python app does the following:

```python
with session.transaction().read() as read_transaction:
    answer_iterator = read_transaction.query(query, infer=True)
    for answer in answer_iterator:
        action = answer.map().get("action")
        action_type = action.type().label()
```
The client gets the result iterator (answer_iterator) back from Grakn within a few milliseconds, but the for loop itself has very unstable performance: it can take up to a minute to iterate even when answer_iterator contains a single answer (a few seconds is typical, which is still very slow).
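As far as I understand (this is my assumption, and the snippet below is a toy sketch with no Grakn involved), query() may return a lazy iterator, so the quick return from query() doesn't mean the work is done; the server-side cost would then be paid while iterating. The sleep here just stands in for per-answer reasoning:

```python
import time

def fake_query():
    """Stand-in for read_transaction.query(): returns a lazy generator."""
    def answers():
        time.sleep(0.2)  # simulates server-side work per answer
        yield {"action": "example"}
    return answers()

t0 = time.perf_counter()
it = fake_query()            # returns almost instantly: nothing computed yet
t_query = time.perf_counter() - t0

t0 = time.perf_counter()
results = list(it)           # the cost is paid here, during iteration
t_loop = time.perf_counter() - t0

print(f"query: {t_query:.3f}s, loop: {t_loop:.3f}s")
```

If that's what is happening in my case, the loop timings I see would really be measuring the inference itself rather than the iteration.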
This read transaction returns at most 3 inferred relations, and the search space is quite small. I have only seen this slowness with inferred data; queries over regular (non-inferred) data are fast (milliseconds).
I’m running Python 3.7 and grakn-client 1.7.1 locally on an i7-8550 CPU with 16 GB RAM.
Our Grakn server runs on a virtual machine with 2 vCPUs, 8 GB RAM, and 16 GB temp storage.
Is this a computational-capacity issue, or is something wrong? Is there anything I can do to improve the performance?
Thanks in advance