[Bug]: Semantic memory: format defined in profile_prompt is inconsistent with output_format defined in generate_parsed_response #770

@szou-mv

Description

Describe the bug

When generating semantic memory with our default profile prompt, the example we show the LLM is inconsistent with the response_format we define when calling the OpenAI SDK.

Steps to reproduce

In the system prompt, the example we give the LLM is a map:

{
    "0": {
        "command": "add",
        "tag": "Psychological Profile",
        "feature": "work_superior_frustration",
        "value": "User is frustrated with their boss for perceived incompetence"
    },
    "1": {
        "command": "add",
        "tag": "Demographic Information",
        "feature": "summer_job",
        "value": "User is working a temporary job for the summer"
    }
}

However, when calling the SDK, the response_format we specify is the actually expected one (see the attached screenshot of the definition in generate_parsed_response); it is a list that looks like:

[
    {
        "command": "add",
        "tag": "Psychological Profile",
        "feature": "work_superior_frustration",
        "value": "User is frustrated with their boss for perceived incompetence"
    },
    {
        "command": "add",
        "tag": "Demographic Information",
        "feature": "summer_job",
        "value": "User is working a temporary job for the summer"
    }
]
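For reference, here is a minimal sketch of what an array-of-commands schema looks like in JSON Schema terms, assuming the response_format in generate_parsed_response is roughly equivalent to this. The field names are copied from the examples above; everything else (variable name, required/strictness settings) is my assumption, since the actual definition is only visible in the screenshot.

# Hypothetical sketch only, not the actual MemMachine code.
# Field names come from the examples in this issue; the rest is assumed.
profile_command_schema = {
    "type": "array",                  # a list of command objects, not a map keyed by "0", "1", ...
    "items": {
        "type": "object",
        "properties": {
            "command": {"type": "string"},
            "tag": {"type": "string"},
            "feature": {"type": "string"},
            "value": {"type": "string"},
        },
        "required": ["command", "tag", "feature", "value"],
    },
}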

I am not sure whether this inconsistency actually causes any problems, but it is confusing.

Expected behavior

Make the prompt example consistent with the actually required format (the array/list instead of the map).

Environment

MemMachine version: b04db65

Additional context

No response
