
Improve inference for context sensitive functions in object and array literal arguments #48538

Merged · 3 commits into main · Apr 5, 2022

Conversation

ahejlsberg (Member) commented Apr 3, 2022

When contextually typing parameters of arrow functions, function expressions, and object literal methods in generic function argument lists, we infer types from context insensitive function arguments anywhere in the argument list and from context sensitive function arguments in preceding positions in the argument list. For example:

declare function f1<T>(produce: (n: number) => T, consume: (x: T) => void): void;
declare function f2<T>(consume: (x: T) => void, produce: (n: number) => T): void;

f1(() => 42, x => x.toFixed());
f1((n: number) => n, x => x.toFixed());
f1(n => n, x => x.toFixed());
f1(function() { return 42 }, x => x.toFixed());

f2(x => x.toFixed(), () => 42);
f2(x => x.toFixed(), (n: number) => n);
f2(x => x.toFixed(), n => n);  // Error, x of type unknown
f2(x => x.toFixed(), function() { return 42 });  // Error, x of type unknown

Above, () => 42 and (n: number) => n are context insensitive because they have no contextually typed parameters, whereas n => n and function() { return 42 } are context sensitive because they have at least one contextually typed parameter (in the function expression case, the implicit this parameter is contextually typed). The errors in the last two calls occur because inferred type information only flows from left to right between context sensitive arguments. This is a long-standing limitation of our inference algorithm, and one that isn't likely to change.
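
As an aside (this example is not from the PR itself), since the implicit this parameter is what makes a parameterless function expression context sensitive, annotating this explicitly should make it context insensitive again, which is one way to sidestep the left-to-right limitation:

f2(x => x.toFixed(), function(this: void) { return 42 });  // Ok, now context insensitive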

However, in this PR we remove another long-standing, and arguably more confusing, limitation. So far it hasn't been possible to flow inferences between context sensitive functions in object and array literals:

declare function f3<T>(arg: { produce: (n: number) => T, consume: (x: T) => void }): void;

f3({ produce: () => 42, consume: x => x.toFixed() });
f3({ produce: (n: number) => n, consume: x => x.toFixed() });
f3({ produce: n => n, consume: x => x.toFixed() });  // Was error, now ok
f3({ produce: function() { return 42 }, consume: x => x.toFixed() });  // Was error, now ok
f3({ produce() { return 42 }, consume: x => x.toFixed() });  // Was error, now ok

With this PR, all of the above calls to f3 successfully check, where previously only the first two would. Effectively, we now apply the same left-to-right rules for information flow between context sensitive contextually typed functions regardless of whether the functions occur as discrete arguments or as properties in object or array literals within the same argument.
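
The same left-to-right flow now applies to context sensitive functions in array literal arguments. As a sketch (the f4 signature below is illustrative, not taken from the PR's tests):

declare function f4<T>(arg: [produce: (n: number) => T, consume: (x: T) => void]): void;

f4([() => 42, x => x.toFixed()]);
f4([n => n, x => x.toFixed()]);  // Was error, now ok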

Fixes #25092.
Fixes #38623.
Fixes #38845.
Fixes #38872.
Fixes #41712.
Fixes #47599.
Fixes #48279.
Fixes #48466.

@typescript-bot added the For Backlog Bug (PRs that fix a backlog bug) and For Milestone Bug (PRs that fix a bug with a specific milestone) labels on Apr 3, 2022
ahejlsberg (Member, Author) commented:

@typescript-bot test this
@typescript-bot user test this inline
@typescript-bot run dt
@typescript-bot perf test faster

typescript-bot (Collaborator) commented Apr 3, 2022

Heya @ahejlsberg, I've started to run the parallelized Definitely Typed test suite on this PR at 900ef91. You can monitor the build here.

typescript-bot (Collaborator) commented Apr 3, 2022

Heya @ahejlsberg, I've started to run the abridged perf test suite on this PR at 900ef91. You can monitor the build here.

Update: The results are in!

typescript-bot (Collaborator) commented Apr 3, 2022

Heya @ahejlsberg, I've started to run the diff-based community code test suite on this PR at 900ef91. You can monitor the build here.

Update: The results are in!

typescript-bot (Collaborator) commented Apr 3, 2022

Heya @ahejlsberg, I've started to run the extended test suite on this PR at 900ef91. You can monitor the build here.

typescript-bot (Collaborator) commented:

@ahejlsberg
The results of the perf run you requested are in!

Here they are:

Comparison Report - main..48538

| Metric | main | 48538 | Delta | Best | Worst |
| --- | --- | --- | --- | --- | --- |
| Angular - node (v14.15.1, x64) | | | | | |
| Memory used | 332,780k (± 0.01%) | 332,797k (± 0.01%) | +17k (+ 0.01%) | 332,732k | 332,864k |
| Parse Time | 2.02s (± 0.62%) | 2.03s (± 0.54%) | +0.02s (+ 0.84%) | 2.01s | 2.06s |
| Bind Time | 0.87s (± 0.46%) | 0.87s (± 0.71%) | -0.00s (- 0.23%) | 0.86s | 0.89s |
| Check Time | 5.63s (± 0.51%) | 5.65s (± 0.40%) | +0.01s (+ 0.25%) | 5.61s | 5.69s |
| Emit Time | 6.34s (± 0.89%) | 6.30s (± 0.69%) | -0.03s (- 0.55%) | 6.17s | 6.38s |
| Total Time | 14.85s (± 0.57%) | 14.85s (± 0.39%) | -0.01s (- 0.04%) | 14.67s | 14.95s |
| Compiler-Unions - node (v14.15.1, x64) | | | | | |
| Memory used | 193,949k (± 0.64%) | 194,004k (± 0.60%) | +55k (+ 0.03%) | 192,020k | 195,321k |
| Parse Time | 0.86s (± 0.52%) | 0.86s (± 0.65%) | 0.00s ( 0.00%) | 0.85s | 0.87s |
| Bind Time | 0.56s (± 0.53%) | 0.56s (± 1.10%) | +0.00s (+ 0.54%) | 0.55s | 0.58s |
| Check Time | 7.54s (± 0.65%) | 7.50s (± 0.54%) | -0.05s (- 0.62%) | 7.42s | 7.60s |
| Emit Time | 2.50s (± 0.62%) | 2.49s (± 0.59%) | -0.01s (- 0.52%) | 2.47s | 2.53s |
| Total Time | 11.46s (± 0.40%) | 11.41s (± 0.40%) | -0.05s (- 0.47%) | 11.33s | 11.52s |
| Monaco - node (v14.15.1, x64) | | | | | |
| Memory used | 325,409k (± 0.00%) | 325,413k (± 0.01%) | +4k (+ 0.00%) | 325,384k | 325,487k |
| Parse Time | 1.57s (± 0.41%) | 1.57s (± 0.60%) | +0.00s (+ 0.13%) | 1.55s | 1.59s |
| Bind Time | 0.77s (± 0.29%) | 0.77s (± 0.38%) | +0.00s (+ 0.13%) | 0.77s | 0.78s |
| Check Time | 5.54s (± 0.54%) | 5.55s (± 0.46%) | +0.01s (+ 0.20%) | 5.49s | 5.60s |
| Emit Time | 3.31s (± 0.62%) | 3.31s (± 0.47%) | +0.00s (+ 0.03%) | 3.27s | 3.34s |
| Total Time | 11.19s (± 0.34%) | 11.20s (± 0.31%) | +0.02s (+ 0.14%) | 11.11s | 11.26s |
| TFS - node (v14.15.1, x64) | | | | | |
| Memory used | 288,909k (± 0.01%) | 288,908k (± 0.01%) | -1k (- 0.00%) | 288,835k | 288,998k |
| Parse Time | 1.32s (± 1.40%) | 1.32s (± 1.31%) | +0.01s (+ 0.46%) | 1.30s | 1.37s |
| Bind Time | 0.73s (± 1.44%) | 0.73s (± 0.88%) | -0.00s (- 0.14%) | 0.72s | 0.75s |
| Check Time | 5.15s (± 0.57%) | 5.20s (± 0.58%) | +0.05s (+ 0.95%) | 5.13s | 5.26s |
| Emit Time | 3.47s (± 1.77%) | 3.53s (± 2.35%) | +0.06s (+ 1.70%) | 3.36s | 3.67s |
| Total Time | 10.67s (± 0.81%) | 10.79s (± 1.10%) | +0.12s (+ 1.12%) | 10.52s | 11.00s |
| material-ui - node (v14.15.1, x64) | | | | | |
| Memory used | 447,283k (± 0.01%) | 447,176k (± 0.04%) | -107k (- 0.02%) | 446,521k | 447,270k |
| Parse Time | 1.86s (± 0.76%) | 1.87s (± 0.56%) | +0.00s (+ 0.16%) | 1.85s | 1.90s |
| Bind Time | 0.70s (± 0.80%) | 0.70s (± 0.88%) | +0.00s (+ 0.43%) | 0.69s | 0.71s |
| Check Time | 13.03s (± 0.49%) | 13.00s (± 0.66%) | -0.03s (- 0.21%) | 12.81s | 13.23s |
| Emit Time | 0.00s (± 0.00%) | 0.00s (± 0.00%) | 0.00s ( NaN%) | 0.00s | 0.00s |
| Total Time | 15.59s (± 0.41%) | 15.57s (± 0.56%) | -0.02s (- 0.14%) | 15.37s | 15.81s |
| xstate - node (v14.15.1, x64) | | | | | |
| Memory used | 534,096k (± 0.00%) | 534,137k (± 0.00%) | +40k (+ 0.01%) | 534,092k | 534,167k |
| Parse Time | 2.58s (± 0.55%) | 2.58s (± 0.49%) | -0.00s (- 0.16%) | 2.55s | 2.61s |
| Bind Time | 1.14s (± 0.70%) | 1.14s (± 0.57%) | -0.00s (- 0.35%) | 1.12s | 1.15s |
| Check Time | 1.51s (± 0.53%) | 1.50s (± 0.59%) | -0.00s (- 0.33%) | 1.49s | 1.52s |
| Emit Time | 0.07s (± 0.00%) | 0.07s (± 0.00%) | 0.00s ( 0.00%) | 0.07s | 0.07s |
| Total Time | 5.30s (± 0.42%) | 5.29s (± 0.40%) | -0.01s (- 0.19%) | 5.24s | 5.34s |
System
  • Machine Name: ts-ci-ubuntu
  • Platform: linux 4.4.0-210-generic
  • Architecture: x64
  • Available Memory: 16 GB
  • Available Memory: 3 GB
  • CPUs: 4 × Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
Hosts
  • node (v14.15.1, x64)
Scenarios
  • Angular - node (v14.15.1, x64)
  • Compiler-Unions - node (v14.15.1, x64)
  • Monaco - node (v14.15.1, x64)
  • TFS - node (v14.15.1, x64)
  • material-ui - node (v14.15.1, x64)
  • xstate - node (v14.15.1, x64)
| Benchmark Name | Iterations |
| --- | --- |
| Current 48538 | 10 |
| Baseline main | 10 |

Developer Information: Download Benchmark

typescript-bot (Collaborator) commented:

@ahejlsberg
Great news! No new errors were found between main..refs/pull/48538/merge.

Andarist (Contributor) commented Apr 4, 2022

This one looks like a very nice improvement! Could you pack this? I would love to test this out easily with XState to see if it fixes some of our long-standing issues :)

RyanCavanaugh (Member) commented:

@typescript-bot pack this

typescript-bot (Collaborator) commented Apr 4, 2022

Heya @RyanCavanaugh, I've started to run the tarball bundle task on this PR at 900ef91. You can monitor the build here.

typescript-bot (Collaborator) commented Apr 4, 2022

Hey @RyanCavanaugh, I've packed this into an installable tgz. You can install it for testing by referencing it in your package.json like so:

{
    "devDependencies": {
        "typescript": "https://typescript.visualstudio.com/cf7ac146-d525-443c-b23c-0d58337efebc/_apis/build/builds/123449/artifacts?artifactName=tgz&fileId=37164A484BD05C07C037A0F5E5A96646A75717017850CD19A2D264A5C7846C4F02&fileName=/typescript-4.7.0-insiders.20220404.tgz"
    }
}

and then running npm install.


There is also a playground for this build and an npm module you can use via "typescript": "npm:@typescript-deploys/pr-build@4.7.0-pr-48538-10".

Andarist (Contributor) commented Apr 4, 2022

I've checked locally that this PR also fixes this issue: #45255 cc @devanshj

@@ -29768,7 +29817,7 @@ namespace ts {
             }

             const thisType = getThisTypeOfSignature(signature);
-            if (thisType) {
+            if (thisType && couldContainTypeVariables(thisType)) {
Member commented:

Are these new guards on couldContainTypeVariables necessary for this to work, or are they an optimization?

ahejlsberg (Member, Author) replied:

Purely an optimization.

DanielRosenwasser (Member) commented Apr 5, 2022

Okay, I think I understand a little better. The current behavior in inference is that as soon as we find a context sensitive expression which grabs a type parameter for a contextual type, we would typically need to "fix" the inferences made so far in place. Those inferences come from inferring the entire given argument's type to an entire expected parameter's type. That has too large of a granularity.

This PR grabs context sensitive functions/methods encountered in array and object arguments as we're checking them and stashes them away. Then, if we ever run into a case where we're about to fix inferences for a type parameter, we try to make inferences from the stashed expressions to their (uninstantiated) contextual type, just in case we can learn a bit more at a higher granularity.

Did I get that right?
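
Piecing together the diff excerpts quoted later in this thread, the mechanism being described can be sketched roughly as follows. This is an illustrative, simplified sketch, not the actual checker code; only the identifiers that also appear in the diff excerpts below are taken from the PR:

// Illustrative sketch only - simplified from the PR, not the actual checker code.
interface IntraExpressionInferenceSite {
    node: Expression | MethodDeclaration;
    type: Type;
}

// While checking an object or array literal argument, each context sensitive
// member is stashed on the inference context together with its checked type.
function addIntraExpressionInferenceSite(context: InferenceContext, node: Expression | MethodDeclaration, type: Type) {
    (context.intraExpressionInferenceSites ??= []).push({ node, type });
}

// Just before inferences for a type parameter would be fixed, revisit the
// stashed sites and infer from each stashed type to the site's
// (uninstantiated) contextual type, possibly sharpening earlier inferences.
function inferFromIntraExpressionSites(context: InferenceContext) {
    for (const { node, type } of context.intraExpressionInferenceSites ?? []) {
        const contextualType = isObjectLiteralMethod(node) ?
            getContextualTypeForObjectLiteralMethod(node, ContextFlags.NoConstraints) :
            getContextualType(node, ContextFlags.NoConstraints);
        if (contextualType) {
            inferTypes(context.inferences, type, contextualType);
        }
    }
}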

DanielRosenwasser (Member) commented:

Is there a reason we need to stash them and can't just infer immediately? I assume you end up making worse inferences around this parameters occasionally, and maybe end up doing some duplicate work that might not be necessary.

                getContextualTypeForObjectLiteralMethod(node as MethodDeclaration, ContextFlags.NoConstraints) :
                getContextualType(node, ContextFlags.NoConstraints);
            if (contextualType) {
                inferTypes(context.inferences, type, contextualType);
Member commented:

So this makes me think that we'll be redoing a lot of inferences - for a given inference set, we'll infer from each of the inner properties to their matching contextual type, then we'll do the outer inference for the object as a whole, which, in so doing, will redo these inferences again (and hopefully come to the same results). It's making me think that maybe we should have some kind of visited list for inferences in a given inference context, something that says "we've already inferred from A to B in this context, no need to do it again" - essentially invokeOnce, but shared across the entire inference pass.

DanielRosenwasser (Member) replied Apr 5, 2022:

> So this makes me think that we'll be redoing a lot of inferences - for a given inference set, we'll infer from each of the inner properties to their matching contextual type, then we'll do the outer inference for the object as a whole, which, in so doing, will redo these inferences again

I think (see my comment here) that that's why this is only done when we're about to fix (which are cases that are broken today). So in the cases which don't work today, I think we'll effectively do inference twice in the worst case (cases where you need to fix a type parameter on every context-sensitive expression - should be rare). Otherwise, it should be the same amount of inference as before; the only new work is collecting the context-sensitive inner expressions.

Member replied:

I mean, storing these locations and doing the inference later will always result in inference occurring - that inner inference occurring simply changes the order of inferences (and allows some inferences to be locked in before sibling member inferences begin). We'll still ultimately back out to the expression as a whole and do inferences on its type, which will now sometimes have more inferences locked in from these inner inferences, and still do inference on that type and all its members, which includes all these inferences that we've already done.

ahejlsberg (Member, Author) replied:

There will be some amount of redoing of inferences, but deferring the work at least ensures we only do it if there's a chance it's needed, and the whole scenario is generally pretty rare. I'm not too concerned with the repetition that occurs when we infer from the entire expression, nor am I too concerned with the ordering (we already have out-of-sequence ordering because we first infer from context insensitive arguments).

@@ -21746,6 +21749,37 @@ namespace ts {
}
}

function addIntraExpressionInferenceSite(context: InferenceContext, node: Expression | MethodDeclaration, type: Type) {
(context.intraExpressionInferenceSites ??= []).push({ node, type });
Member commented:

If there are multiple nested active inference contexts (e.g., foo(..., bar(..., baz(..., {...})))), might we not want to add the expression to all inference contexts? This way if a type parameter flows from foo into bar and then baz, but doesn't flow back out on construction (e.g., because it's conditional'd away), we could still pick up the inner inference site.

ahejlsberg (Member, Author) replied:

No, context sensitive arguments to an inner call might cause an outer type parameter to become fixed, but inferences from the inner argument expressions only occur through the return type of the inner argument expressions.

DanielRosenwasser (Member) commented:

I think we should get this in for 4.7.

@ahejlsberg merged commit a3c2218 into main on Apr 5, 2022
@ahejlsberg deleted the fix47599 branch on Apr 5, 2022 at 20:09
chunjin666 added a commit to chunjin666/docs that referenced this pull request on Jul 24, 2022: "The inference limit of TypeScript has been improved in TypeScript 4.7. MR: microsoft/TypeScript#48538"
NataliaTepluhina pushed a commit to vuejs/docs that referenced this pull request on Jul 25, 2022: "The inference limit of TypeScript has been improved in TypeScript 4.7. MR: microsoft/TypeScript#48538"