Describe the bug
When generating semantic memory with our default profile prompt, the example given to the LLM in the prompt is inconsistent with the response_format we define when calling the OpenAI SDK.
Steps to reproduce
In the system prompt, the example we give the LLM is a map (a JSON object keyed by index):
{
  "0": {
    "command": "add",
    "tag": "Psychological Profile",
    "feature": "work_superior_frustration",
    "value": "User is frustrated with their boss for perceived incompetence"
  },
  "1": {
    "command": "add",
    "tag": "Demographic Information",
    "feature": "summer_job",
    "value": "User is working a temporary job for the summer"
  }
}
However, the response_format we specify when calling the SDK is a list (and this is the format that is actually expected); it looks like the following, with a rough schema sketch of the shape included after the example:
[
  {
    "command": "add",
    "tag": "Psychological Profile",
    "feature": "work_superior_frustration",
    "value": "User is frustrated with their boss for perceived incompetence"
  },
  {
    "command": "add",
    "tag": "Demographic Information",
    "feature": "summer_job",
    "value": "User is working a temporary job for the summer"
  }
]
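For reference, the expected array shape could be written as a JSON Schema roughly like the sketch below. This is only an illustration inferred from the example above, not necessarily the schema MemMachine actually passes as response_format, and the per-field constraints are assumptions.

# Illustrative JSON Schema for the expected list-of-commands shape.
# This is inferred from the example above, NOT MemMachine's actual
# response_format payload. Validated here with the third-party
# jsonschema package (pip install jsonschema).
from jsonschema import validate

MEMORY_COMMANDS_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "command": {"type": "string"},
            "tag": {"type": "string"},
            "feature": {"type": "string"},
            "value": {"type": "string"},
        },
        "required": ["command", "tag", "feature", "value"],
    },
}

# The list form above validates; the map form from the system prompt would not.
validate(
    instance=[
        {
            "command": "add",
            "tag": "Demographic Information",
            "feature": "summer_job",
            "value": "User is working a temporary job for the summer",
        }
    ],
    schema=MEMORY_COMMANDS_SCHEMA,
)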
I am not sure whether this inconsistency will actually cause any problems, but it is confusing.
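As a minimal sketch of how the mismatch could bite: if the model imitates the map-shaped example from the prompt, downstream code that expects the list shape would fail to parse the reply. The parse_memory_commands function below is hypothetical and only illustrates that point; it is not MemMachine's actual parsing code.

import json

# Hypothetical parser that expects the list shape from response_format.
def parse_memory_commands(raw: str) -> list[dict]:
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError(f"expected a JSON array of commands, got {type(data).__name__}")
    for item in data:
        missing = {"command", "tag", "feature", "value"} - item.keys()
        if missing:
            raise ValueError(f"command entry is missing keys: {missing}")
    return data

list_reply = '[{"command": "add", "tag": "Demographic Information", "feature": "summer_job", "value": "..."}]'
map_reply = '{"0": {"command": "add", "tag": "Demographic Information", "feature": "summer_job", "value": "..."}}'

parse_memory_commands(list_reply)  # parses fine
parse_memory_commands(map_reply)   # raises ValueError: expected a JSON array of commands, got dict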
Expected behavior
Make the prompt example consistent with the actually required format (the array/list instead of the map).
Environment
Memmachine Version: b04db65
Additional context
No response