
Conversation


ssalbdivad commented on Apr 6, 2025

🎯 Changes

Add an initial set of type benchmarks using @ark/attest with detailed explanations so they can be expanded and integrated with CI in the future.

By default, the bench threshold for a change is +20%; if a benchmark exceeds that threshold, running the bench returns a non-zero exit code.
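
For reference, a minimal sketch of what an @ark/attest type benchmark looks like (the schema and the instantiation count below are illustrative, not taken from the actual benchmark files in this PR):

import { bench } from '@ark/attest';
import { z } from 'zod';

bench('infer a simple Zod object', () => {
  const user = z.object({ id: z.string(), name: z.string() });
  return {} as z.infer<typeof user>;
  // the snapshot below records how many type instantiations TypeScript
  // performed for this block; if a later change pushes the measured count
  // more than the default 20% above it, the bench run exits non-zero
}).types([500, 'instantiations']);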

I was initially hoping to get some version of this TypeScript PR merged to address the sequential case demonstrated in sequential.bench.ts.

Since it may take a bit of additional time for the team to determine which solution they'd like to go with, I figured it was time to submit these now so tRPC can start benefiting from them.

Note: the data used to arrive at my broader conclusions can be found here:

https://github.com/ssalbdivad/trpc-tsperf

✅ Checklist

  • I have followed the steps listed in the Contributing guide.
  • If necessary, I have added documentation related to the changes made.
  • I have added or updated the tests related to the changes made.

Summary by CodeRabbit

  • Chores

    • Improved repository and workspace settings for clearer file management.
    • Upgraded external dependencies to enhance overall stability and performance.
  • New Features

    • Introduced new command-line scripts to support type statistics and benchmarking.
  • Tests

    • Expanded the benchmarking suite to assess client and service performance across various scenarios, including TRPC routers, type validation libraries, and React Query integration.

ssalbdivad requested a review from a team as a code owner on April 6, 2025 at 23:38

vercel bot commented Apr 6, 2025

The latest updates on your projects:

www: ❌ Failed (updated May 2, 2025 10:25am UTC)


vercel bot commented Apr 6, 2025

@ssalbdivad is attempting to deploy a commit to the trpc Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Apr 6, 2025

Walkthrough

This pull request introduces several configuration and dependency adjustments along with new benchmarking scripts and test files. The changes include adding a new ignore rule to .gitignore, removing a VS Code setting, updating package scripts and dependencies, and expanding test capabilities with multiple benchmark test files. Additionally, the TypeScript configuration for tests is expanded to include new paths, and the arktype dependency version is updated in multiple locations.

Changes

File(s) Change Summary
.gitignore Added new entry: .attest/
.vscode/settings.json Removed setting: "explorer.excludeGitIgnore": true
package.json, www/package.json In package.json: Added script "type-stats": "attest stats" and dependency "@ark/attest": "^0.44.7". In www/package.json: Updated arktype dependency version from "^2.0.1" to "^2.1.9".
packages/tests/package.json Added multiple benchmark scripts (type-bench, type-bench-routes, type-bench-client, type-bench-sequential, type-bench-tanstack); updated dependencies including switching @trpc/client and @trpc/server to workspace:* and adding several new testing libraries and React Query related packages, plus updating arktype version.
packages/tests/tsconfig.json Modified "include" array to add "types" alongside existing paths
packages/tests/types/* (client.bench.ts, routes.bench.ts, sequential.bench.ts, tanstack.bench.ts) Introduced new benchmark test files (a sketch of the shared pattern follows this table):
- client.bench.ts: Benchmarks for TRPC client and routers with various procedure counts.
- routes.bench.ts: Benchmarks for router instantiation using simple and expanded routes with schema validations.
- sequential.bench.ts: Performance tests comparing sequential vs non-sequential type validations (Zod and ArkType).
- tanstack.bench.ts: Benchmarks integrating TRPC with Tanstack React Query.
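
As a rough illustration of the pattern these files share (a hedged sketch with illustrative names and counts, not an excerpt from the PR): a tRPC router is constructed inside a bench, and @ark/attest snapshots the type instantiations each case produces.

import { bench } from '@ark/attest';
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

// baseline: the fixed cost of instantiating a trivial router,
// used as a reference point for the larger benches
bench('baseline router', () => {
  return t.router({
    ping: t.procedure.query(() => 'pong'),
  });
}).types([500, 'instantiations']);

// a route with direct Zod input validation, as in routes.bench.ts
bench('router with a validated route', () => {
  return t.router({
    getUser: t.procedure
      .input(z.object({ id: z.string() }))
      .query(({ input }) => `hello ${input.id}`),
  });
}).types([2000, 'instantiations']);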

Sequence Diagram(s)

sequenceDiagram
    participant Runner as Benchmark Runner
    participant Tester as Benchmark Script
    participant Router as TRPC Router
    participant Query as Query Procedure

    Runner->>Tester: Invoke benchmark script
    Tester->>Router: Initialize router & procedures
    Router->>Query: Execute test case (e.g., getUser)
    Query-->>Router: Return response & metrics
    Router-->>Tester: Collate performance data
    Tester-->>Runner: Report benchmark results

Suggested labels

♻️ autoupdate

Suggested reviewers

  • juliusmarminge

Poem

I'm a bunny in the code field,
Hopping through benchmarks with zeal,
New scripts and tests line my burrow,
Dependencies sorted without a furrow,
With each change, I dance in delight 🐇,
Celebrating updates from morning to night!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e54ce92 and 02ac2b0.

📒 Files selected for processing (1)
  • packages/tests/package.json (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/tests/package.json
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Analyze (typescript)


ssalbdivad changed the title from "teat: add initial type instantiation benchmarks" to "test: add initial type instantiation benchmarks" on Apr 6, 2025

coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
packages/tests/types/client.bench.ts (1)

89-113: Inconsistent benchmark naming

There seems to be a mismatch between the benchmark name and implementation. The benchmark is named "sequential Zod type" in line 95, but the implementation in lines 96-102 creates a nonSequentialRouter. This might cause confusion when interpreting the benchmark results.

-bench('sequential Zod type', async () => {
+bench('non-sequential Zod type', async () => {
  const nonSequentialRouter = t.router({
    query1: t.procedure.input(base).query(({ input }) => `hello ${input.a}`),
    mutation1: t.procedure
      .input(base)
      .mutation(({ input }) => `hello ${input.a}`),
  });

  await hooks.mutation1.mutate({ a: '', b: '', c: '' });
  await hooks.query1.query({ a: '', b: '', c: '' });
}).types([698, 'instantiations']);
packages/tests/types/tanstack.bench.ts (1)

45-87: Consider adding actual query execution

The Tanstack benchmark creates the components but doesn't actually execute the queries or mutations. To get a more complete performance picture, consider triggering the query and mutation in the benchmark.

  function UserList() {
    const trpc = useTRPC();
    const userQuery = useQuery(trpc.getUser.queryOptions({ id: 'id_bilbo' }));
    const userCreator = useMutation(trpc.createUser.mutationOptions());
    const onClick = () => userCreator.mutate({ name: 'Frodo' });
+   
+   // Execute the query and mutation to measure complete performance
+   userQuery.refetch();
+   onClick();
  }
  // this is still extremely reasonable, but is noticeably higher
  // than some of the cases in client.bench.ts and routes.bench.ts
packages/tests/types/sequential.bench.ts (3)

28-43: Missing query execution in non-sequential Zod benchmark

Unlike other benchmarks, this one doesn't execute any queries after creating the router, which might affect the completeness of the performance measurement.

bench('non-sequential Zod type', async () => {
  const nonSequentialRouter = t.router({
    query1: t.procedure.input(base).query(({ input }) => `hello ${input.a}`),
    mutation1: t.procedure
      .input(base)
      .mutation(({ input }) => `hello ${input.a}`),
  });

+  // Execute the query to ensure complete type instantiation measurement
+  await nonSequentialRouter.query1.call({ a: 'test', b: 'test', c: 'test' });

  // this is relatively cheap
}).types([1659, 'instantiations']);

45-53: Consider reusing the base for sequential types

Since the purpose is to compare non-sequential vs sequential types, it would be more controlled to reuse the same base object rather than creating a new one. This would ensure the comparison focuses solely on the sequential operations.

// even though sequential is totally equivalent to nonSequential, its
// Zod representation is not reduced and still includes intermediate operations
-const sequential = base
+const sequential = base  // Directly using the same 'base' as non-sequential test
  .partial()
  .merge(base)
  .pick({ a: true, b: true, c: true })
  .omit({})
  .merge(base);

109-112: Missing return value capture in isolated sequential ArkType benchmark

Unlike the Zod equivalent in lines 60-68, this benchmark doesn't capture the return value of the sequential operations, which could affect TypeScript's optimization.

bench('isolated sequential arktype', () => {
-  arkBase2.partial().merge(arkBase2).pick('d', 'e', 'f').omit().merge(arkBase2);
+  const result = arkBase2.partial().merge(arkBase2).pick('d', 'e', 'f').omit().merge(arkBase2);
  // these kind of operations are much cheaper in ArkType than Zod
}).types([2336, 'instantiations']);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b774f34 and 6b6147e.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (10)
  • .gitignore (1 hunks)
  • .vscode/settings.json (0 hunks)
  • package.json (2 hunks)
  • packages/tests/package.json (2 hunks)
  • packages/tests/tsconfig.json (1 hunks)
  • packages/tests/types/client.bench.ts (1 hunks)
  • packages/tests/types/routes.bench.ts (1 hunks)
  • packages/tests/types/sequential.bench.ts (1 hunks)
  • packages/tests/types/tanstack.bench.ts (1 hunks)
  • www/package.json (1 hunks)
💤 Files with no reviewable changes (1)
  • .vscode/settings.json
🔇 Additional comments (26)
.gitignore (1)

35-36: LGTM: New attest cache directory added to .gitignore

The .attest/ directory is correctly added to .gitignore to prevent caching artifacts from the newly added @ark/attest benchmarking library from being committed to the repository.

package.json (2)

22-22: LGTM: New type-stats script added

Added a new script for running type statistics using the newly introduced attest tooling. This script will help analyze the benchmark results.


47-47: LGTM: @ark/attest dependency added

The @ark/attest library (version ^0.44.7) is correctly added as a dependency to support the new type benchmarking capabilities introduced in this PR.

www/package.json (1)

87-87: LGTM: Updated arktype dependency

Updated to arktype v2.1.9 from v2.0.1, which should support the benchmark infrastructure being added in this PR.

packages/tests/tsconfig.json (1)

3-3: LGTM: Added types directory to tsconfig includes

The types directory is now included in the TypeScript compilation, which is necessary for the new benchmark files.

packages/tests/types/routes.bench.ts (8)

1-15: LGTM: Well-structured benchmark setup with clear documentation

The imports and initial setup are well-organized, and the comments clearly explain the purpose of the baseline benchmark for accounting for initialization costs.


16-30: LGTM: Effective baseline benchmark implementation

The baseline benchmark is properly implemented to measure the initialization overhead, which will help isolate the performance impact of the actual router configuration being tested.


32-47: LGTM: Clear explanation of preinferred schemas

The comments explain the rationale behind using preinferred schemas outside the benchmark function to isolate specific type checking costs, which is a good approach for accurate measurements.
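
For readers unfamiliar with the technique, a hedged sketch of the idea (illustrative schema and counts, not code from the PR): a schema declared at module scope has its inference cost paid before any bench runs, so the bench measures mostly the router-side instantiations, while an inline schema folds Zod's own inference into the measured count.

import { bench } from '@ark/attest';
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

// inferred once at module scope, outside any bench
const preinferredUser = z.object({ id: z.string(), name: z.string() });

// measures mostly router/procedure instantiations
bench('route with preinferred schema', () => {
  return t.router({
    getUser: t.procedure.input(preinferredUser).query(({ input }) => input.name),
  });
}).types([1000, 'instantiations']);

// the same route with an inline schema also pays for inferring the Zod object
bench('route with inline schema', () => {
  return t.router({
    getUser: t.procedure
      .input(z.object({ id: z.string(), name: z.string() }))
      .query(({ input }) => input.name),
  });
}).types([1500, 'instantiations']);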


52-70: LGTM: Well-implemented simple routes benchmark with preinferred schemas

The benchmark for 5 simple routes with preinferred schemas is properly implemented and includes the instantiation count tag for analysis.


72-155: LGTM: Extended routes benchmark with linear scaling analysis

The benchmark for 25 routes tests scaling behavior, and the comments about linear scaling provide useful insights for performance evaluation. The clear documentation about potential caching effects is valuable.


157-197: LGTM: Zod validation benchmark implementation

The benchmark for routes with direct Zod validation is well-implemented, and the comment about half the inference overhead coming from Zod provides useful performance context.


199-382: LGTM: Extended Zod validation benchmark with isolation explanation

The larger benchmark with 25 routes using Zod validation is well-implemented. The comments about reusing property names between benches being acceptable due to automatic isolation are particularly helpful for future benchmark authors.


384-478: LGTM: Properly declared preinferred schema types

All the preinferred schema type declarations are correctly structured and follow a consistent pattern, ensuring they're evaluated before the benchmarks run.

packages/tests/types/client.bench.ts (5)

1-29: The baseline benchmark setup looks good

The baseline benchmark establishes a good reference point by creating a simple router with a single procedure and initializing both the standard and React TRPC clients. This provides a solid foundation for comparing performance across different benchmarks.


31-71: Good approach for router5 benchmark

Moving the router definition to the module level helps isolate the performance of client operations from router definition overhead. The instantiation count comment provides valuable context about the efficiency of TRPC's types in this scenario.

Also applies to: 73-87


115-131: Good demonstration of sequential Zod type overhead

This benchmark effectively demonstrates the increased type inference costs when using sequential Zod operations. The instantiation count of 16545 compared to 698 for non-sequential types provides a clear illustration of the performance implications.


133-147: Valuable insight on React Query overhead

The benchmark shows that the overhead of using TRPC React hooks is minimal (1358 vs 1318 instantiations), which is useful information for developers choosing between client implementations.


149-326: Linear scaling performance is well-demonstrated

The router25 benchmark effectively shows that instantiation costs scale linearly with the number of procedures (5038 for 25 routes vs 1318 for 5 routes). This is important information for applications with many endpoints.

Also applies to: 328-359

packages/tests/types/tanstack.bench.ts (2)

1-33: Well-structured baseline for Tanstack benchmark

The baseline benchmark properly sets up the TRPC context with Tanstack Query, including the necessary components for accurate measurement. Good job establishing a clean baseline.


35-43: Good reference to documentation in comments

The comment on line 35 referencing the TRPC documentation is helpful for maintainers who might want to understand or update the benchmark in the future. The router setup follows the documentation example.

packages/tests/types/sequential.bench.ts (3)

1-27: Good baseline with both Zod and ArkType

Including both Zod and ArkType validation in the baseline ensures a fair comparison between the two libraries in subsequent benchmarks. Well-structured initialization.


54-69: Valuable isolated benchmark for sequential Zod

This benchmark isolates the cost of sequential Zod operations without TRPC, which provides important context for understanding the performance characteristics of complex type definitions.


70-84: Excellent comparison between Zod and ArkType

The parallel benchmarks for Zod and ArkType provide valuable insights into the relative performance of these libraries for sequential type operations. The stark difference in instantiation counts (49173 vs 17906) is particularly noteworthy.

Also applies to: 86-133

packages/tests/package.json (3)

8-12: Well-organized benchmark scripts

The benchmark scripts are well-structured with a main script that runs all benchmarks and individual scripts for each benchmark file. Using the --experimental-strip-types flag is appropriate for type benchmarking.


22-35: Version pinning for Tanstack dependencies

The Tanstack dependencies are correctly pinned to specific versions, which is important for reproducible benchmarks. Using workspace dependencies for TRPC packages ensures testing against the current codebase.


35-35: Ensure ArkType version compatibility (✅ Verification successful)

The ArkType version has been updated to 2.1.9. Verify that this version is compatible with the benchmark code, especially the sequential operations being tested.


🏁 Script executed:

#!/bin/bash
# Check if the ArkType version used in benchmarks is compatible with the API being tested

# Look for any ArkType-related deprecation warnings or errors in the project
grep -r "arktype" --include="*.log" .

# Check for version compatibility notes in the ArkType documentation
npm view arktype@2.1.9 readme

Length of output: 6267


ArkType 2.1.9 Compatibility Verified

After running the verification scripts:

  • No ArkType-related deprecation warnings or errors were found in the project logs.
  • The npm readme for ArkType 2.1.9 confirms the library’s intended usage with no notes suggesting incompatibility in sequential benchmarks.

Please ensure that benchmark tests covering sequential operations are run to further validate runtime behavior. Overall, the updated dependency in packages/tests/package.json appears compatible.

# Conflicts:
#	packages/tests/package.json
#	pnpm-lock.yaml