test: add initial type instantiation benchmarks #6681
base: main
Conversation
@ssalbdivad is attempting to deploy a commit to the trpc Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

This pull request introduces several configuration and dependency adjustments along with new benchmarking scripts and test files. The changes include adding a new ignore rule in .gitignore.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runner as Benchmark Runner
    participant Tester as Benchmark Script
    participant Router as TRPC Router
    participant Query as Query Procedure
    Runner->>Tester: Invoke benchmark script
    Tester->>Router: Initialize router & procedures
    Router->>Query: Execute test case (e.g., getUser)
    Query-->>Router: Return response & metrics
    Router-->>Tester: Collate performance data
    Tester-->>Runner: Report benchmark results
```
Actionable comments posted: 0
🧹 Nitpick comments (5)
packages/tests/types/client.bench.ts (1)
89-113: Inconsistent benchmark naming

There seems to be a mismatch between the benchmark name and implementation. The benchmark is named "sequential Zod type" in line 95, but the implementation in lines 96-102 creates a nonSequentialRouter. This might cause confusion when interpreting the benchmark results.

```diff
-bench('sequential Zod type', async () => {
+bench('non-sequential Zod type', async () => {
   const nonSequentialRouter = t.router({
     query1: t.procedure.input(base).query(({ input }) => `hello ${input.a}`),
     mutation1: t.procedure
       .input(base)
       .mutation(({ input }) => `hello ${input.a}`),
   });

   await hooks.mutation1.mutate({ a: '', b: '', c: '' });
   await hooks.query1.query({ a: '', b: '', c: '' });
 }).types([698, 'instantiations']);
```

packages/tests/types/tanstack.bench.ts (1)
45-87: Consider adding actual query execution

The Tanstack benchmark creates the components but doesn't actually execute the queries or mutations. To get a more complete performance picture, consider triggering the query and mutation in the benchmark.

```diff
 function UserList() {
   const trpc = useTRPC();
   const userQuery = useQuery(trpc.getUser.queryOptions({ id: 'id_bilbo' }));
   const userCreator = useMutation(trpc.createUser.mutationOptions());
   const onClick = () => userCreator.mutate({ name: 'Frodo' });
+
+  // Execute the query and mutation to measure complete performance
+  userQuery.refetch();
+  onClick();
 }

 // this is still extremely reasonable, but is noticably higher
 // than some of the cases in client.bench.ts and routes.bench.ts
```

packages/tests/types/sequential.bench.ts (3)
28-43: Missing query execution in non-sequential Zod benchmark

Unlike other benchmarks, this one doesn't execute any queries after creating the router, which might affect the completeness of the performance measurement.

```diff
 bench('non-sequential Zod type', async () => {
   const nonSequentialRouter = t.router({
     query1: t.procedure.input(base).query(({ input }) => `hello ${input.a}`),
     mutation1: t.procedure
       .input(base)
       .mutation(({ input }) => `hello ${input.a}`),
   });
+  // Execute the query to ensure complete type instantiation measurement
+  await nonSequentialRouter.query1.call({ a: 'test', b: 'test', c: 'test' });
   // this is relatively cheap
 }).types([1659, 'instantiations']);
```
45-53: Consider reusing the base for sequential types

Since the purpose is to compare non-sequential vs sequential types, it would be more controlled to reuse the same base object rather than creating a new one. This would ensure the comparison focuses solely on the sequential operations.

```diff
 // even though sequential is totally equivalent to nonSequential, its
 // Zod representation is not reduced and still includes intermediate operations
-const sequential = base
+const sequential = base // Directly using the same 'base' as non-sequential test
   .partial()
   .merge(base)
   .pick({ a: true, b: true, c: true })
   .omit({})
   .merge(base);
```
109-112: Missing return value capture in isolated sequential ArkType benchmark

Unlike the Zod equivalent in lines 60-68, this benchmark doesn't capture the return value of the sequential operations, which could affect TypeScript's optimization.

```diff
 bench('isolated sequential arktype', () => {
-  arkBase2.partial().merge(arkBase2).pick('d', 'e', 'f').omit().merge(arkBase2);
+  const result = arkBase2.partial().merge(arkBase2).pick('d', 'e', 'f').omit().merge(arkBase2);
   // these kind of operations are much cheaper in ArkType than Zod
 }).types([2336, 'instantiations']);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (10)
- .gitignore (1 hunks)
- .vscode/settings.json (0 hunks)
- package.json (2 hunks)
- packages/tests/package.json (2 hunks)
- packages/tests/tsconfig.json (1 hunks)
- packages/tests/types/client.bench.ts (1 hunks)
- packages/tests/types/routes.bench.ts (1 hunks)
- packages/tests/types/sequential.bench.ts (1 hunks)
- packages/tests/types/tanstack.bench.ts (1 hunks)
- www/package.json (1 hunks)
💤 Files with no reviewable changes (1)
- .vscode/settings.json
🔇 Additional comments (26)
.gitignore (1)
35-36: LGTM: New attest cache directory added to .gitignore

The .attest/ directory is correctly added to .gitignore to prevent caching artifacts from the newly added @ark/attest benchmarking library from being committed to the repository.

package.json (2)
22-22: LGTM: New type-stats script added

Added a new script for running type statistics using the newly introduced attest tooling. This script will help analyze the benchmark results.
47-47: LGTM: @ark/attest dependency added

The @ark/attest library (version ^0.44.7) is correctly added as a dependency to support the new type benchmarking capabilities introduced in this PR.

www/package.json (1)
87-87: LGTM: Updated arktype dependency

Updated to arktype v2.1.9 from v2.0.1, which should support the benchmark infrastructure being added in this PR.
packages/tests/tsconfig.json (1)
3-3: LGTM: Added types directory to tsconfig includes

The types directory is now included in the TypeScript compilation, which is necessary for the new benchmark files.

packages/tests/types/routes.bench.ts (8)
1-15: LGTM: Well-structured benchmark setup with clear documentation

The imports and initial setup are well-organized, and the comments clearly explain the purpose of the baseline benchmark for accounting for initialization costs.
16-30: LGTM: Effective baseline benchmark implementation

The baseline benchmark is properly implemented to measure the initialization overhead, which will help isolate the performance impact of the actual router configuration being tested.
32-47: LGTM: Clear explanation of preinferred schemas

The comments explain the rationale behind using preinferred schemas outside the benchmark function to isolate specific type checking costs, which is a good approach for accurate measurements.
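To make that pattern concrete, here is a minimal sketch of declaring a schema at module scope so its one-time inference cost falls outside the measured bench body. The schema and procedure names are hypothetical, not taken from the actual benchmark file.

```ts
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

// "preinferred" schema: created and inferred once at module scope,
// so its inference cost is not attributed to any bench callback
const userSchema = z.object({ id: z.string(), name: z.string() });
type User = z.infer<typeof userSchema>; // shown only to illustrate that inference already happened

// inside a bench() callback, only the router/procedure instantiations
// that consume the already-inferred schema would be measured
const exampleRouter = t.router({
  getUser: t.procedure.input(userSchema).query(({ input }) => input.name),
});
```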
52-70: LGTM: Well-implemented simple routes benchmark with preinferred schemas

The benchmark for 5 simple routes with preinferred schemas is properly implemented and includes the instantiation count tag for analysis.
72-155: LGTM: Extended routes benchmark with linear scaling analysis

The benchmark for 25 routes tests scaling behavior, and the comments about linear scaling provide useful insights for performance evaluation. The clear documentation about potential caching effects is valuable.
157-197: LGTM: Zod validation benchmark implementation

The benchmark for routes with direct Zod validation is well-implemented, and the comment about half the inference overhead coming from Zod provides useful performance context.
199-382: LGTM: Extended Zod validation benchmark with isolation explanation

The larger benchmark with 25 routes using Zod validation is well-implemented. The comments about reusing property names between benches being acceptable due to automatic isolation are particularly helpful for future benchmark authors.
384-478: LGTM: Properly declared preinferred schema types

All the preinferred schema type declarations are correctly structured and follow a consistent pattern, ensuring they're evaluated before the benchmarks run.
packages/tests/types/client.bench.ts (5)
1-29: The baseline benchmark setup looks good

The baseline benchmark establishes a good reference point by creating a simple router with a single procedure and initializing both the standard and React TRPC clients. This provides a solid foundation for comparing performance across different benchmarks.
31-71: Good approach for router5 benchmark

Moving the router definition to the module level helps isolate the performance of client operations from router definition overhead. The instantiation count comment provides valuable context about the efficiency of TRPC's types in this scenario.
Also applies to: 73-87
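As a hedged illustration of the module-level pattern described above, assuming the standard @trpc/client API (createTRPCClient with httpBatchLink); the procedure, URL, and single-procedure body are placeholders, not the benchmark's actual code:

```ts
import { createTRPCClient, httpBatchLink } from '@trpc/client';
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

// router defined at module level, so its definition cost stays out of the bench body
const t = initTRPC.create();
const router5 = t.router({
  query1: t.procedure
    .input(z.object({ a: z.string() }))
    .query(({ input }) => `hello ${input.a}`),
  // the real file declares several procedures; one is enough to show the shape
});

// only client creation (and any subsequent calls) would be measured inside the bench
const client = createTRPCClient<typeof router5>({
  links: [httpBatchLink({ url: 'http://localhost:3000' })],
});
```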
115-131: Good demonstration of sequential Zod type overhead

This benchmark effectively demonstrates the increased type inference costs when using sequential Zod operations. The instantiation count of 16545 compared to 698 for non-sequential types provides a clear illustration of the performance implications.
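For reference, a sketch of the two shapes being compared, adapted from the sequential chain shown in the earlier nitpick (the base/sequential names follow that diff):

```ts
import { z } from 'zod';

// non-sequential: a single object schema whose type is fully reduced up front
const base = z.object({ a: z.string(), b: z.string(), c: z.string() });

// sequential: equivalent at runtime, but its inferred type still carries every
// intermediate operation, which TypeScript re-instantiates wherever it is used
const sequential = base
  .partial()
  .merge(base)
  .pick({ a: true, b: true, c: true })
  .omit({})
  .merge(base);
```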
133-147: Valuable insight on React Query overhead

The benchmark shows that the overhead of using TRPC React hooks is minimal (1358 vs 1318 instantiations), which is useful information for developers choosing between client implementations.
149-326: Linear scaling performance is well-demonstrated

The router25 benchmark effectively shows that instantiation costs scale linearly with the number of procedures (5038 for 25 routes vs 1318 for 5 routes). This is important information for applications with many endpoints.
Also applies to: 328-359
packages/tests/types/tanstack.bench.ts (2)
1-33: Well-structured baseline for Tanstack benchmark

The baseline benchmark properly sets up the TRPC context with Tanstack Query, including the necessary components for accurate measurement. Good job establishing a clean baseline.
35-43: Good reference to documentation in comments

The comment on line 35 referencing the TRPC documentation is helpful for maintainers who might want to understand or update the benchmark in the future. The router setup follows the documentation example.
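A minimal sketch of that documented setup, assuming the @trpc/tanstack-react-query integration API also used in the suggested diff above; the AppRouter import path and the getUser procedure are illustrative:

```ts
import { useQuery } from '@tanstack/react-query';
import { createTRPCContext } from '@trpc/tanstack-react-query';
import type { AppRouter } from './router'; // hypothetical path to the app router type

// context/hook pair created once per app, as in the TanStack React Query docs
export const { TRPCProvider, useTRPC } = createTRPCContext<AppRouter>();

function useUser() {
  const trpc = useTRPC();
  // queryOptions() builds options consumed by the plain TanStack useQuery hook
  return useQuery(trpc.getUser.queryOptions({ id: 'id_bilbo' }));
}
```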
packages/tests/types/sequential.bench.ts (3)
1-27: Good baseline with both Zod and ArkType

Including both Zod and ArkType validation in the baseline ensures a fair comparison between the two libraries in subsequent benchmarks. Well-structured initialization.
54-69: Valuable isolated benchmark for sequential Zod

This benchmark isolates the cost of sequential Zod operations without TRPC, which provides important context for understanding the performance characteristics of complex type definitions.
70-84: Excellent comparison between Zod and ArkType

The parallel benchmarks for Zod and ArkType provide valuable insights into the relative performance of these libraries for sequential type operations. The stark difference in instantiation counts (49173 vs 17906) is particularly noteworthy.
Also applies to: 86-133
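As a point of reference, a hedged sketch of the ArkType side: the base definition below is hypothetical (the benchmark file defines its own arkBase2), and the chain mirrors the one shown in the earlier nitpick diff.

```ts
import { type } from 'arktype';

// illustrative ArkType base object (the benchmark's arkBase2 may differ)
const arkBase = type({ d: 'string', e: 'string', f: 'string' });

// the same style of sequential chain measured in the benchmark; ArkType
// reduces these intermediate operations far more cheaply at the type level
const arkSequential = arkBase
  .partial()
  .merge(arkBase)
  .pick('d', 'e', 'f')
  .omit()
  .merge(arkBase);
```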
packages/tests/package.json (3)
8-12: Well-organized benchmark scripts

The benchmark scripts are well-structured, with a main script that runs all benchmarks and individual scripts for each benchmark file. Using the --experimental-strip-types flag is appropriate for type benchmarking.
22-35: Version pinning for Tanstack dependencies

The Tanstack dependencies are correctly pinned to specific versions, which is important for reproducible benchmarks. Using workspace dependencies for TRPC packages ensures testing against the current codebase.
35-35: ✅ Verification successful
Ensure ArkType version compatibility
The ArkType version has been updated to 2.1.9. Verify that this version is compatible with the benchmark code, especially the sequential operations being tested.
🏁 Script executed:

```bash
#!/bin/bash
# Check if the ArkType version used in benchmarks is compatible with the API being tested
# Look for any ArkType-related deprecation warnings or errors in the project
grep -r "arktype" --include="*.log" .

# Check for version compatibility notes in the ArkType documentation
npm view arktype@2.1.9 readme
```

Length of output: 6267
ArkType 2.1.9 Compatibility Verified
After running the verification scripts:
- No ArkType-related deprecation warnings or errors were found in the project logs.
- The npm readme for ArkType 2.1.9 confirms the library’s intended usage with no notes suggesting incompatibility in sequential benchmarks.
Please ensure that benchmark tests covering sequential operations are run to further validate runtime behavior. Overall, the updated dependency in packages/tests/package.json appears compatible.
# Conflicts:
#   packages/tests/package.json
#   pnpm-lock.yaml
🎯 Changes
Add an initial set of type benchmarks using @ark/attest with detailed explanations, so they can be expanded and integrated with CI in the future.

By default, the bench threshold for a change is +20%; beyond that, running the bench will return a non-zero exit code.
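For readers unfamiliar with @ark/attest, a minimal hedged sketch of the benchmark shape used in these files follows. The procedure name and the instantiation count are illustrative, not the repository's actual numbers; the .types() call records the expected instantiation count, and a regression beyond the configured threshold fails the run.

```ts
import { bench } from '@ark/attest';
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

bench('router with one query', () => {
  const t = initTRPC.create();
  return t.router({
    greet: t.procedure
      .input(z.object({ name: z.string() }))
      .query(({ input }) => `hello ${input.name}`),
  });
})
  // the instantiation count below is illustrative; the real benchmark files
  // assert the values they actually measure
  .types([500, 'instantiations']);
```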
I was initially hoping to get some version of this TypeScript PR merged to address the sequential case demonstrated in sequential.bench.ts. Since it may take a bit of additional time for the team to determine which solution they'd like to go with, I figured it was time to submit these now so tRPC can start benefiting from them.
Note, the data used to arrive at my broader conclusions can be found here:
https://github.com/ssalbdivad/trpc-tsperf
✅ Checklist
Summary by CodeRabbit
Chores
New Features
Tests