
Conversation

@MarekZbysinskiQC

In pydantic, RootModel must not have the extra setting in its config:

https://docs.pydantic.dev/latest/errors/usage_errors/#root-model-extra

RootModel also does not seem to support future annotations, so quoted type names must be used instead:

pydantic/pydantic#7967
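A minimal sketch of the extra-config constraint against pydantic v2 (class names here are illustrative, not taken from the generated output; the forward-reference issue from pydantic#7967 is not reproduced here):

```python
from pydantic import PydanticUserError, RootModel


# Valid: a root model with no `extra` configuration.
class Tags(RootModel[list[str]]):
    pass


assert Tags(["finance", "sensitive"]).root == ["finance", "sensitive"]


# Invalid: setting `extra` on a RootModel raises at class-creation time.
try:
    class BadTags(RootModel[list[str]]):
        model_config = {"extra": "allow"}
except PydanticUserError as exc:
    print(exc.code)  # 'root-model-extra'
```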

MarekZbysinskiQC marked this pull request as draft on August 14, 2025 12:37
@codspeed-hq

codspeed-hq bot commented Aug 14, 2025

CodSpeed Performance Report

Merging #2478 will not alter performance

Comparing MarekZbysinskiQC:root-model-pydanticv2-fix (e662dd3) with main (e9ffb9f)

Summary

✅ 32 untouched benchmarks

@benomahonyTW

Any update on this? I think it's still an issue

@ilovelinux
Contributor

This PR has no reproducible example, so it will be hard to validate.

A previous version of datamodel-code-generator had issues with same-name objects (see #2460). This was fixed in v0.33.0 by #2461.

@benomahonyTW could you check if v0.33.0 fixed your issue? If not, could you open an issue with a reproducible example?

Thanks!

@benomahonyTW

benomahonyTW commented Oct 1, 2025

@ilovelinux Yeah, I verified this doesn't work with v0.33.0. It works perfectly when I pass --output-model-type pydantic_v2.BaseModel, so it is probably a detection issue, or perhaps the default should be pydantic v2?
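For reference, the working invocation is the same command as below with the output model type pinned explicitly (the flag is the one mentioned above; the rest of the invocation is unchanged):

```shell
datamodel-codegen \
  --url https://raw.githubusercontent.com/bitol-io/open-data-contract-standard/refs/heads/main/schema/odcs-json-schema-latest.json \
  --input-file-type jsonschema \
  --output-model-type pydantic_v2.BaseModel \
  --output latest_open_datacontract.py
```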


Looking at my environment: is pydantic_ai interfering with the pydantic version detection?

Command run: datamodel-codegen --url https://raw.githubusercontent.com/bitol-io/open-data-contract-standard/refs/heads/main/schema/odcs-json-schema-latest.json --output latest_open_datacontract.py --input-file-type=jsonschema

#   filename:  https://raw.githubusercontent.com/bitol-io/open-data-contract-standard/refs/heads/main/schema/odcs-json-schema-latest.json
#   timestamp: 2025-10-01T17:10:03+00:00

from __future__ import annotations

from datetime import date, datetime
from enum import Enum
from typing import Any

from pydantic import BaseModel, Extra, Field


class Kind(Enum):
    DataContract = "DataContract"


class ApiVersion(Enum):
    v3_0_2 = "v3.0.2"
    v3_0_1 = "v3.0.1"
    v3_0_0 = "v3.0.0"
    v2_2_2 = "v2.2.2"
    v2_2_1 = "v2.2.1"
    v2_2_0 = "v2.2.0"


class Type(Enum):
    api = "api"
    athena = "athena"
    azure = "azure"
    bigquery = "bigquery"
    clickhouse = "clickhouse"
    databricks = "databricks"
    denodo = "denodo"
    dremio = "dremio"
    duckdb = "duckdb"
    glue = "glue"
    cloudsql = "cloudsql"
    db2 = "db2"
    informix = "informix"
    kafka = "kafka"
    kinesis = "kinesis"
    local = "local"
    mysql = "mysql"
    oracle = "oracle"
    postgresql = "postgresql"
    postgres = "postgres"
    presto = "presto"
    pubsub = "pubsub"
    redshift = "redshift"
    s3 = "s3"
    sftp = "sftp"
    snowflake = "snowflake"
    sqlserver = "sqlserver"
    synapse = "synapse"
    trino = "trino"
    vertica = "vertica"
    custom = "custom"


class ServerSource(BaseModel):
    __root__: Any


class LogicalType(Enum):
    object = "object"


class LogicalType1(Enum):
    string = "string"
    date = "date"
    number = "number"
    integer = "integer"
    object = "object"
    array = "array"
    boolean = "boolean"


class SchemaProperty(BaseModel):
    pass


class SchemaItemProperty(BaseModel):
    properties: list[SchemaProperty] | None = Field(
        None, description="A list of properties for the object."
    )


class Tags(BaseModel):
    __root__: list[str] = Field(
        ...,
        description="A list of tags that may be assigned to the elements (object or property); the tags keyword may appear at any level. Tags may be used to better categorize an element. For example, `finance`, `sensitive`, `employee_record`.",
        examples=["finance", "sensitive", "employee_record"],
    )


class Dimension(Enum):
    accuracy = "accuracy"
    completeness = "completeness"
    conformity = "conformity"
    consistency = "consistency"
    coverage = "coverage"
    timeliness = "timeliness"
    uniqueness = "uniqueness"


class Type1(Enum):
    text = "text"
    library = "library"
    sql = "sql"
    custom = "custom"


class DataQualityLibrary(BaseModel):
    rule: str = Field(
        ...,
        description="Define a data quality check based on the predefined rules as per ODCS.",
        examples=["duplicateCount", "validValues", "rowCount"],
    )
    mustBe: Any | None = Field(
        None,
        description="Must be equal to the value to be valid. When using numbers, it is equivalent to '='.",
    )
    mustNotBe: Any | None = Field(
        None,
        description="Must not be equal to the value to be valid. When using numbers, it is equivalent to '!='.",
    )
    mustBeGreaterThan: float | None = Field(
        None,
        description="Must be greater than the value to be valid. It is equivalent to '>'.",
    )
    mustBeGreaterOrEqualTo: float | None = Field(
        None,
        description="Must be greater than or equal to the value to be valid. It is equivalent to '>='.",
    )
    mustBeLessThan: float | None = Field(
        None,
        description="Must be less than the value to be valid. It is equivalent to '<'.",
    )
    mustBeLessOrEqualTo: float | None = Field(
        None,
        description="Must be less than or equal to the value to be valid. It is equivalent to '<='.",
    )
    mustBeBetween: list[float] | None = Field(
        None,
        description="Must be between the two numbers to be valid. Smallest number first in the array.",
        max_items=2,
        min_items=2,
        unique_items=True,
    )
    mustNotBeBetween: list[float] | None = Field(
        None,
        description="Must not be between the two numbers to be valid. Smallest number first in the array.",
        max_items=2,
        min_items=2,
        unique_items=True,
    )


class DataQualitySql(BaseModel):
    query: str = Field(
        ...,
        description="Query string that adheres to the dialect of the provided server.",
        examples=["SELECT COUNT(*) FROM ${table} WHERE ${column} IS NOT NULL"],
    )


class DataQualityCustom(BaseModel):
    engine: str = Field(
        ...,
        description="Name of the engine which executes the data quality checks.",
        examples=["soda", "great-expectations", "monte-carlo", "dbt"],
    )
    implementation: str | dict[str, Any]


class AuthoritativeDefinition(BaseModel):
    url: str = Field(..., description="URL to the authority.")
    type: str = Field(
        ...,
        description="Type of definition for authority: v2.3 adds standard values: `businessDefinition`, `transformationImplementation`, `videoTutorial`, `tutorial`, and `implementation`.",
        examples=[
            "businessDefinition",
            "transformationImplementation",
            "videoTutorial",
            "tutorial",
            "implementation",
        ],
    )


class AuthoritativeDefinitions(BaseModel):
    __root__: list[AuthoritativeDefinition] = Field(
        ...,
        description="List of links to sources that provide more details on the dataset; examples would be a link to an external definition, a training video, a git repo, data catalog, or another tool. Authoritative definitions follow the same structure in the standard.",
    )


class SupportItem(BaseModel):
    channel: str = Field(..., description="Channel name or identifier.")
    url: str = Field(
        ...,
        description="Access URL using normal [URL scheme](https://en.wikipedia.org/wiki/URL#Syntax) (https, mailto, etc.).",
    )
    description: str | None = Field(
        None, description="Description of the channel, free text."
    )
    tool: str | None = Field(
        None,
        description="Name of the tool, value can be `email`, `slack`, `teams`, `discord`, `ticket`, or `other`.",
        examples=["email", "slack", "teams", "discord", "ticket", "other"],
    )
    scope: str | None = Field(
        None,
        description="Scope can be: `interactive`, `announcements`, `issues`.",
        examples=["interactive", "announcements", "issues"],
    )
    invitationUrl: str | None = Field(
        None,
        description="Some tools uses invitation URL for requesting or subscribing. Follows the [URL scheme](https://en.wikipedia.org/wiki/URL#Syntax).",
    )


class Pricing(BaseModel):
    priceAmount: float | None = Field(
        None, description="Subscription price per unit of measure in `priceUnit`."
    )
    priceCurrency: str | None = Field(
        None, description="Currency of the subscription price in `price.priceAmount`."
    )
    priceUnit: str | None = Field(
        None,
        description="The unit of measure for calculating cost. Examples megabyte, gigabyte.",
    )


class Team(BaseModel):
    username: str | None = Field(
        None,
        description="The user's username or email.",
        examples=["mail@example.com", "uid12345678"],
    )
    name: str | None = Field(
        None, description="The user's name.", examples=["Jane Doe"]
    )
    description: str | None = Field(None, description="The user's description.")
    role: str | None = Field(
        None,
        description="The user's job role; Examples might be owner, data steward. There is no limit on the role.",
    )
    dateIn: date | None = Field(
        None, description="The date when the user joined the team."
    )
    dateOut: date | None = Field(
        None, description="The date when the user ceased to be part of the team."
    )
    replacedByUsername: str | None = Field(
        None, description="The username of the user who replaced the previous user."
    )


class AnyType(BaseModel):
    __root__: str | float | int | bool | list | dict[str, Any] | None


class AnyNonCollectionType(BaseModel):
    __root__: str | float | int | bool | None


class Support(BaseModel):
    __root__: list[SupportItem] = Field(
        ..., description="Top level for support channels."
    )


class ServiceLevelAgreementProperty(BaseModel):
    property: str = Field(
        ...,
        description="Specific property in SLA, check the periodic table. May requires units (more details to come).",
    )
    value: str | float | int | bool | None = Field(
        ...,
        description="Agreement value. The label will change based on the property itself.",
    )
    valueExt: AnyNonCollectionType | None = Field(
        None,
        description="Extended agreement value. The label will change based on the property itself.",
    )
    unit: str | None = Field(
        None,
        description="**d**, day, days for days; **y**, yr, years for years, etc. Units use the ISO standard.",
    )
    element: str | None = Field(
        None,
        description="Element(s) to check on. Multiple elements should be extremely rare and, if so, separated by commas.",
    )
    driver: str | None = Field(
        None,
        description="Describes the importance of the SLA from the list of: `regulatory`, `analytics`, or `operational`.",
        examples=["regulatory", "analytics", "operational"],
    )


class CustomProperty(BaseModel):
    property: str | None = Field(
        None,
        description="The name of the key. Names should be in camel case–the same as if they were permanent properties in the contract.",
    )
    value: AnyType | None = Field(None, description="The value of the key.")


class DataQuality(BaseModel):
    authoritativeDefinitions: AuthoritativeDefinitions | None = None
    businessImpact: str | None = Field(
        None,
        description="Consequences of the rule failure.",
        examples=["operational", "regulatory"],
    )
    customProperties: list[CustomProperty] | None = Field(
        None, description="Additional properties required for rule execution."
    )
    description: str | None = Field(
        None, description="Describe the quality check to be completed."
    )
    dimension: Dimension | None = Field(
        None,
        description="The key performance indicator (KPI) or dimension for data quality.",
    )
    method: str | None = Field(None, examples=["reconciliation"])
    name: str | None = Field(None, description="Name of the data quality check.")
    schedule: str | None = Field(
        None, description="Rule execution schedule details.", examples=["0 20 * * *"]
    )
    scheduler: str | None = Field(
        None,
        description="The name or type of scheduler used to start the data quality check.",
        examples=["cron"],
    )
    severity: str | None = Field(
        None,
        description="The severance of the quality rule.",
        examples=["info", "warning", "error"],
    )
    tags: Tags | None = None
    type: Type1 | None = Field(
        "library",
        description="The type of quality check. 'text' is human-readable text that describes the quality of the data. 'library' is a set of maintained predefined quality attributes such as row count or unique. 'sql' is an individual SQL query that returns a value that can be compared. 'custom' is quality attributes that are vendor-specific, such as Soda or Great Expectations.",
    )
    unit: str | None = Field(
        None,
        description="Unit the rule is using, popular values are `rows` or `percent`, but any value is allowed.",
        examples=["rows", "percent"],
    )


class DataQualityChecks(BaseModel):
    __root__: list[DataQuality] = Field(
        ...,
        description="Data quality rules with all the relevant information for rule setup and execution.",
    )


class CustomProperties(BaseModel):
    __root__: list[CustomProperty] = Field(
        ..., description="A list of key/value pairs for custom properties."
    )


class Description(BaseModel):
    usage: str | None = Field(None, description="Intended usage of the dataset.")
    purpose: str | None = Field(None, description="Purpose of the dataset.")
    limitations: str | None = Field(None, description="Limitations of the dataset.")
    authoritativeDefinitions: AuthoritativeDefinitions | None = None
    customProperties: CustomProperties | None = None


class SchemaElement(BaseModel):
    name: str | None = Field(None, description="Name of the element.")
    physicalType: str | None = Field(
        None,
        description="The physical element data type in the data source.",
        examples=["table", "view", "topic", "file"],
    )
    description: str | None = Field(None, description="Description of the element.")
    businessName: str | None = Field(
        None, description="The business name of the element."
    )
    authoritativeDefinitions: AuthoritativeDefinitions | None = None
    tags: Tags | None = None
    customProperties: CustomProperties | None = None


class SchemaObject(SchemaElement):
    logicalType: LogicalType | None = Field(
        None, description="The logical element data type."
    )
    physicalName: str | None = Field(
        None, description="Physical name.", examples=["table_1_2_0"]
    )
    dataGranularityDescription: str | None = Field(
        None,
        description="Granular level of the data in the object.",
        examples=["Aggregation by country"],
    )
    properties: list[SchemaProperty] | None = Field(
        None, description="A list of properties for the object."
    )
    quality: DataQualityChecks | None = None
    name: str = Field(..., description="Name of the element.")


class SchemaBaseProperty(SchemaElement):
    primaryKey: bool | None = Field(
        None,
        description="Boolean value specifying whether the element is primary or not. Default is false.",
    )
    primaryKeyPosition: int | None = Field(
        -1,
        description="If element is a primary key, the position of the primary key element. Starts from 1. Example of `account_id, name` being primary key columns, `account_id` has primaryKeyPosition 1 and `name` primaryKeyPosition 2. Default to -1.",
    )
    logicalType: LogicalType1 | None = Field(
        None, description="The logical element data type."
    )
    logicalTypeOptions: dict[str, Any] | None = Field(
        None, description="Additional optional metadata to describe the logical type."
    )
    physicalType: str | None = Field(
        None,
        description="The physical element data type in the data source. For example, VARCHAR(2), DOUBLE, INT.",
    )
    physicalName: str | None = Field(
        None, description="Physical name.", examples=["col_str_a"]
    )
    required: bool | None = Field(
        False,
        description="Indicates if the element may contain Null values; possible values are true and false. Default is false.",
    )
    unique: bool | None = Field(
        False,
        description="Indicates if the element contains unique values; possible values are true and false. Default is false.",
    )
    partitioned: bool | None = Field(
        False,
        description="Indicates if the element is partitioned; possible values are true and false.",
    )
    partitionKeyPosition: int | None = Field(
        -1,
        description="If element is used for partitioning, the position of the partition element. Starts from 1. Example of `country, year` being partition columns, `country` has partitionKeyPosition 1 and `year` partitionKeyPosition 2. Default to -1.",
    )
    classification: str | None = Field(
        None,
        description="Can be anything, like confidential, restricted, and public to more advanced categorization. Some companies like PayPal, use data classification indicating the class of data in the element; expected values are 1, 2, 3, 4, or 5.",
        examples=["confidential", "restricted", "public"],
    )
    encryptedName: str | None = Field(
        None,
        description="The element name within the dataset that contains the encrypted element value. For example, unencrypted element `email_address` might have an encryptedName of `email_address_encrypt`.",
    )
    transformSourceObjects: list[str] | None = Field(
        None,
        description="List of objects in the data source used in the transformation.",
    )
    transformLogic: str | None = Field(
        None, description="Logic used in the element transformation."
    )
    transformDescription: str | None = Field(
        None, description="Describes the transform logic in very simple terms."
    )
    examples: list[AnyType] | None = Field(
        None, description="List of sample element values."
    )
    criticalDataElement: bool | None = Field(
        False,
        description="True or false indicator; If element is considered a critical data element (CDE) then true else false.",
    )
    quality: DataQualityChecks | None = None


class Role(BaseModel):
    role: str = Field(
        ..., description="Name of the IAM role that provides access to the dataset."
    )
    description: str | None = Field(
        None, description="Description of the IAM role and its permissions."
    )
    access: str | None = Field(
        None, description="The type of access provided by the IAM role."
    )
    firstLevelApprovers: str | None = Field(
        None, description="The name(s) of the first-level approver(s) of the role."
    )
    secondLevelApprovers: str | None = Field(
        None, description="The name(s) of the second-level approver(s) of the role."
    )
    customProperties: CustomProperties | None = None


class Server(BaseModel):
    server: str = Field(..., description="Identifier of the server.")
    type: Type = Field(..., description="Type of the server.")
    description: str | None = Field(None, description="Description of the server.")
    environment: str | None = Field(
        None,
        description="Environment of the server.",
        examples=["prod", "preprod", "dev", "uat"],
    )
    roles: list[Role] | None = Field(
        None, description="List of roles that have access to the server."
    )
    customProperties: CustomProperties | None = None


class OpenDataContractStandardOdcs(BaseModel):
    class Config:
        extra = Extra.forbid

    version: str = Field(..., description="Current version of the data contract.")
    kind: Kind = Field(
        ..., description="The kind of file this is. Valid value is `DataContract`."
    )
    apiVersion: ApiVersion = Field(
        ...,
        description="Version of the standard used to build data contract. Default value is v3.0.2.",
    )
    id: str = Field(
        ...,
        description="A unique identifier used to reduce the risk of dataset name collisions, such as a UUID.",
    )
    name: str | None = Field(None, description="Name of the data contract.")
    tenant: str | None = Field(
        None,
        description="Indicates the property the data is primarily associated with. Value is case insensitive.",
    )
    tags: Tags | None = None
    status: str = Field(
        ...,
        description="Current status of the dataset.",
        examples=["proposed", "draft", "active", "deprecated", "retired"],
    )
    servers: list[Server] | None = Field(
        None, description="List of servers where the datasets reside."
    )
    dataProduct: str | None = Field(None, description="The name of the data product.")
    description: Description | None = Field(
        None, description="High level description of the dataset."
    )
    domain: str | None = Field(
        None,
        description="Name of the logical data domain.",
        examples=[
            "imdb_ds_aggregate",
            "receiver_profile_out",
            "transaction_profile_out",
        ],
    )
    schema_: list[SchemaObject] | None = Field(
        None,
        alias="schema",
        description="A list of elements within the schema to be cataloged.",
    )
    support: Support | None = None
    price: Pricing | None = None
    team: list[Team] | None = None
    roles: list[Role] | None = Field(
        None,
        description="A list of roles that will provide user access to the dataset.",
    )
    slaDefaultElement: str | None = Field(
        None,
        description="Element (using the element path notation) to do the checks on.",
    )
    slaProperties: list[ServiceLevelAgreementProperty] | None = Field(
        None,
        description="A list of key/value pairs for SLA specific properties. There is no limit on the type of properties (more details to come).",
    )
    authoritativeDefinitions: AuthoritativeDefinitions | None = None
    customProperties: CustomProperties | None = None
    contractCreatedTs: datetime | None = Field(
        None, description="Timestamp in UTC of when the data contract was created."
    )


from pydantic_ai import Agent

agent = Agent(
    model="google-gla:gemini-2.5-pro", output_type=OpenDataContractStandardOdcs
)
prompt = """
# Seller Domain Data Product Documentation

## Overview

This documentation covers the "my quantum" data product within the seller domain. We're currently on version 1.1.0 and the product is active. The unique identifier for this contract is 53581432-6c55-4ba2-a65f-72344a91553a.

## What This Data Is About

The purpose of this data product is to provide views built on top of the seller tables. The main limitation is that this data is based entirely on the seller perspective - we don't have any buyer information included. The primary usage is for predicting sales over time.

For privacy and compliance, you can find our privacy statement at <https://example.com/gdpr.pdf>. This data product belongs to ClimateQuantumInc tenant.

## Technical Infrastructure

Our data lives on a PostgreSQL server called "my-postgres" running on localhost port 5432. The database name is "pypl-edw" and we're working in the "pp_access_views" schema.

## The Core Table

We have one main table called "tbl" (physical name is "tbl_1") which we call "Core Payment Metrics" in business terms. This table provides core payment metrics and you can find more information about it at <https://catalog.data.gov/dataset/air-quality> for business definitions and there's a video tutorial at <https://youtu.be/jbY1BKFj9ec>. We tag this table with 'finance' and 'payments'.

The data is aggregated on transaction reference date (txn_ref_dt) and payment transaction ID (pmt_txn_id).

### Column Details

**Transaction Reference Date**

- Physical column name: txn_ref_dt
- This is the reference date for each transaction
- It's a date field and it's partitioned (first partition key)
- Not a primary key and not required
- Classification level is public
- Example values: "2022-10-03", "2020-01-28"
- The transformation logic pulls from three source tables (table_name_1, table_name_2, table_name_3) using: sel t1.txn_dt as txn_ref_dt from table_name_1 as t1, table_name_2 as t2, table_name_3 as t3 where t1.txn_dt=date-3
- In business terms, this defines the logic for dummies
- No anonymization strategy is applied

**Receiver ID**

- Physical column name: rcvr_id
- This is the primary key (position 1)
- It's a string field, specifically varchar(18)
- Not required but tagged with 'uid'
- Classification level is restricted
- Business name is "receiver id"

**Receiver Country Code**  

- Physical column name: rcvr_cntry_code
- This is the country code, varchar(2) format
- Not a primary key, not partitioned, not required
- Classification level is public
- Business name is "receiver country code"
- Has multiple authoritative definitions:
  - Business definition at <https://collibra.com/asset/742b358f-71a5-4ab1-bda4-dcdba9418c25>
  - Implementation details at <https://github.com/myorg/myrepo>
  - Database implementation at jdbc:postgresql://localhost:5432/adventureworks/tbl_1/rcvr_cntry_code
- When encrypted, it's called "rcvr_cntry_code_encrypted"

## Data Quality Rules

For the country code column, we have a null check rule that runs daily at 8:20 PM (cron: 0 20 * * *). This rule ensures the column doesn't contain null values. It's a completeness check with error severity and operational business impact.

For the entire table, we have a count check that also runs daily at 8:20 PM. This ensures the row count stays within expected volume ranges. It's a completeness check using reconciliation method with error severity and operational business impact.

The business key for this table consists of transaction reference date and receiver ID.

## Pricing

We charge $9.95 USD per megabyte for this data product.

## Team Members

**Current Team:**

- mhopper: Data Scientist (started October 1, 2022, replaced ceastwood)
- daustin: Owner and "Keeper of the grail" (started October 1, 2022)

**Former Team:**

- ceastwood: Data Scientist (August 2, 2022 to October 1, 2022)

## Access Roles and Permissions

We have several access roles defined:

**Read Access Roles:**

- microstrategy_user_opr: Requires Reporting Manager approval, then mandolorian approval
- bq_queryman_user_opr: Only needs Reporting Manager approval
- risk_data_access_opr: Requires Reporting Manager approval, then dathvador approval

**Write Access:**

- bq_unica_user_opr: Requires Reporting Manager approval, then mickey approval

## Service Level Agreements

The default SLA element is tab1.txn_ref_dt (transaction reference date).

**Key SLA Properties:**

- Latency: 4 days from the transaction reference date
- General availability started: May 12, 2022 at 9:30:10 AM Pacific
- End of support: May 12, 2032 at 9:30:10 AM Pacific  
- End of life: May 12, 2042 at 9:30:10 AM Pacific
- Data retention: 3 years based on transaction reference date
- Update frequency: Daily (every 1 day)
- Availability times:
  - For regulatory purposes: 9:00 AM to 8:00 PM Pacific
  - For analytics: 8:00 AM to 8:00 PM Pacific

## Support Channels

If you need help with this data product:

- Slack channel: #product-help (<https://aidaug.slack.com/archives/C05UZRSBKLY>)
- Email distribution list: <datacontract-ann@bitol.io>
- General product feedback: <https://product-feedback.com>

## Additional Information

This data product is tagged with "transactions" for easy discovery.

**Custom Properties:**

- Reference ruleset name: gcsc.ruleset.name
- Some additional property: property.value
- Dataproc cluster name: [cluster name]

This contract was originally created on November 15, 2022 at 2:59:43 UTC.

The data contract follows version 3.0.2 of the standard.
"""
if __name__ == "__main__":
    result = agent.run_sync(prompt)
    print(result.output)
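For comparison, under --output-model-type pydantic_v2.BaseModel the __root__ classes above come out as RootModel subclasses. A hand-written approximation (not actual generator output) of two of them:

```python
from typing import Any

from pydantic import Field, RootModel


class ServerSource(RootModel[Any]):
    # pydantic v2 replaces `__root__: Any` with a `root` field on RootModel.
    root: Any


class Tags(RootModel[list[str]]):
    root: list[str] = Field(
        ...,
        description="A list of tags that may be assigned to the elements.",
    )


# RootModel accepts the root value positionally.
assert Tags(["finance"]).root == ["finance"]
```

Note that no extra config appears on these classes, which is exactly the constraint this PR is about.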

@ilovelinux
Contributor

@benomahonyTW there's no Pydantic version detection in datamodel-code-generator. See #2466

@benomahonyTW

benomahonyTW commented Oct 1, 2025

Wouldn't it make sense to make v2 the default and drop v1 support at some point? I feel this is probably the behaviour all new users expect.

@ilovelinux
Contributor

@benomahonyTW we are discussing that in issue #2466. Let's continue there so we don't pollute this PR; we are off-topic 🙂
