(All feedback/discussion should take place in the relevant issue.)
There are multiple requests for the ability to control types at a much more fine-grained level:
- #12424: Mapped conditional types
- #12885: Typing function overloads
- #12880: Bad inference for lambda closures
- Promises wrongfully accept thenables as their generic parameter (thenables can never be the argument to a `then` callback).
- #13235: Checking for unsafe function overloads
- #8655: Control flow based type narrowing for `assert(...)` calls
- #4183, #12215 (partial): Subtraction types
- #14466: Existential type
- #12342 (partial): represent Function via generic type parameterized by its return/parameter/this type
- #12936: Exact Types
- #15048 (partial): Suggestion: one-sided or fine-grained type guards
I feel all of these (subtraction types only partially) could be solved more generally by adding a set of constraint types to the type system.
Here's a high level summary of the proposed syntax, for those of you who aren't as interested in the edge cases.
- `value is Type` - Just like the return-only type you know now, but it's now available everywhere.
- `Type extends Parent` - Returns whether `Type` is a subtype of `Parent`, and is primarily useful in generics.
- `constraint Type` - Whether `Type` is one of the above two, or a constraint type that can only evaluate to one of the above two. It's primarily useful in generic `if` guards.
The two completely new types are primarily useful for generics and exist to make constraint types simpler and more flexible.
It's a series of conditions with associated types. Think of it as almost a `switch (true) { ... }`, where each case is a condition. Or, if you've ever used CoffeeScript, it's also similar to their `switch` expressions without the condition.
let value = 2
type Type = boolean
type BooleanEquiv = [
// `value` is a `number`, so this is skipped.
case value is string: number,
// `Type` is `boolean`, and this case is negated, so this is used.
case !(Type extends string): boolean,
// You can use mapped cases in case you ever need to bring a free
// generic type to do something more complicated like manipulate
// item types within an array.
case U in Type extends U[]: U,
// `default` is the same catch-all you'd expect in a `switch`, but
// it defaults to `default: throw` instead since you usually *don't*
// want to allow anything that isn't allowed.
default: string,
]
You can also constrain to a specific set of types. For example, you could omit a specific method via a constraint:
interface Omit<T, K extends keyof T> {
// How this works is that for each property in `T`, you
// only keep keys and values that are *not* in `K`.
// `default` isn't needed since you assume that anything
// else is invalid.
[P in keyof T]: [case !(P extends K): T[P]],
}
This is for cases where you need to constrain a type in ways you can't already with `Foo extends Bar`. For example, you could require that a type only has specific keys (no more, no fewer) using this:
function foo<T if T == {key: string, value: number}>(value: T) {
const keys = Object.keys(value) as Array<"key" | "value">;
// ...
}
The `Promise<T>` change example later is another big use case of this - it enables promises to be much better typed.
Incidentally, you could even do this, although it's a bit redundant:
// Both are equivalent
declare function foo<T extends Foo>(value: T): T;
declare function foo<T if T extends Foo>(value: T): T;
Any time you want to perform a compile-time assert, you can use `static assert`. It's great for asserting simple things, like that a non-terminating `switch` is comprehensive, or that a type actually resolves to what you expected (for less confusing errors).
switch (value) {
case Enum.Foo: // do things
case Enum.Bar: // do things
// more cases...
default: static assert value is never;
}
doOneLastThing();
return result;
If you've ever used C++, it's functionally exactly like `static_assert(cond)`.
Here's how you could potentially express disjoint types:
// Logical exclusive or
type Xor<A if constraint A, B if constraint B> = [case A: !B, default: B];
// Disjoint Type
type Disjoint<A, B> = [case U in Xor<U extends A, U extends B>: U];
Here's an example usage of that type at the value level, to address overloading:
class Foo { foo: number }
class Bar { bar: number }
declare function foo(str: string, length: number): Foo;
declare function foo(num: number, length: number): Bar;
declare function getType(): string | number;
// This type can be inferred, and is the return type of `foo` in
// this context.
const value: string | number = getType();
// This type can be inferred, and is the return type of `foo` in
// this context.
type ResultType = [
  case value is string: Foo,
  case value is number: Bar,
];
const result: ResultType = foo(value, 1);
if (result instanceof Foo) {
// `value` is a string, because `result` is `Foo`
console.log(value.slice(1));
} else {
// `value` is a number, because `result` is `Bar`
console.log(Math.abs(value));
}
// Constraint types are assignable to the unbound supertype,
// but not vice versa.
const other: Foo | Bar = result;
const unified: ResultType = other; // Error!
Here's an example usage at the type level, to limit a type to a non-thenable:
// Using a single constraint + default types to correct the Promise type.
// `T` can be anything but a thenable.
type ReduceThenable<T> = [
  case U in T extends Thenable<U>: ReduceThenable<U>,
  default: T,
];
declare class Promise<T if !(T extends Thenable<any>)> {
static resolve<T>(value: T): Promise<ReduceThenable<T>>;
}
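To illustrate, here's a sketch of how resolution would behave under this changed `Promise` type (these are the proposed semantics, not current compiler behavior):

```ts
// `ReduceThenable` strips nesting before it ever becomes the type argument.
const a = Promise.resolve(2);                  // Promise<number>
const b = Promise.resolve(Promise.resolve(2)); // also Promise<number>
// Passing a thenable as the type argument directly violates the `if` constraint:
type Bad = Promise<Promise<number>>; // Error
```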
Here's an example applying it with a combination of mapped and indexed types to guard Knockout typings.
interface Item {
id: number;
name: string;
subitems: string[];
}
interface KnockedOut<T> {
[P in keyof T]: [
  case U in T[P] extends U[]: KnockoutObservableArray<U>,
  default: KnockoutObservable<T[P]>,
];
};
type KoItem = KnockedOut<Item>;
My formal proposal includes a few key provisions for manipulating types and verifying constraints, along with associated rules on how they resolve. They allow for significant formal verification, and they reify limited control flow within the type system.
If you're into type theory, this is actually a small subset of general refinement types, using existential types to enhance its power. It can't refine things like `x < 0`, though.
There are three primary additions to this: constraint type expressions, a new `if` type constraint, and type bound expressions.
Constraint types are a set of related bounded types optionally depending on a free variable (either a value or type), and type narrowing decides how it's expressed. It is expressed syntactically as a union of one or more constraints.
Here's how constraint types would work:
[ case <constraint> : <type> , case <constraint> : <type>, ..., default : <type> ]
(Yes, trailing commas are allowed.)
On the left hand side, each constraint may be syntactically one of the following:
1. A default case: `default:` (this must be the last case if given).
2. A mapped default case: `default U in:` (this must be the last case if given).
3. A constraint case where `constraint Type` resolves to `true` for the given type: `case T extends SomeType:`
4. A mapped constraint case where `constraint Type` resolves to `true` for the given type: `case U in T extends SomeType:`

2 and 4 are a lot like how `P` is in `{[P in T]: any}`, and in particular, `[default T in: T]` is assignable to `{} | void`, and thus is another way of representing the global supertype. Hence why they're called mapped cases as opposed to normal cases.
On the right hand side, each type is one of two things:
- A single type expression.
- The special `throw` impossibility keyword, optionally with a string/template assertion. If it's a string, the compiler should print that message on error, and if it's a template, it should insert the resolved type name in each interpolation.
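As a sketch of the `throw` result with a template assertion, in the proposed syntax (the `NonNullish` name here is made up for illustration):

```ts
type NonNullish<T> = [
  case T extends null | undefined: throw `${T} is not allowed here`,
  default: T,
];
// `NonNullish<string>` resolves to `string`.
// `NonNullish<null>` is an error, reported as "null is not allowed here".
```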
- `Type extends Parent`
- `value is Type`
- `constraint Type`

Each of these is a type expression that statically evaluates to either the type `true` or `false`, depending on its condition:

- `Type extends Parent` evaluates to `true` if `Type` is a subtype of `Parent`, `false` otherwise. This affects type narrowing within constraint type expressions.
- `value is Type` evaluates to `true` if `value` is assignable to `Type`, `false` otherwise. This affects type narrowing within `if`, `switch`, and friends.
- `constraint Type` evaluates to `true` if `Type` is either a constraint type or the literal `true`/`false`, `false` otherwise. This affects type narrowing within constraint type expressions.
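To make this concrete, here's a sketch of what each expression would statically evaluate to under the proposed semantics (`IsNumeric` is a made-up name):

```ts
declare const count: number;
type IsNumeric<T> = [case T extends number: true, default: false];

// `number extends string | number` evaluates to `true`.
// `count is string` evaluates to `false`.
// `constraint IsNumeric<string>` evaluates to `true` - every case of the
// constraint type resolves to a `true`/`false` literal.
// `constraint string` evaluates to `false` - `string` is an ordinary type.
```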
Here's how `if` constraints would work:
function func<T if Condition>(...args: any[]): void;
interface Foo<T if Condition> {}
class Foo<T if Condition> {}
// etc.
The left hand side is a type name, and the right hand side is any type expression `Type` where `constraint Type` resolves to `true`. This facilitates easier checking, and allows certain productions to be invalidated at a finer-grained level.
static assert value is Foo;
static assert Foo extends Bar;
The sole expression is any type expression `Type` where `constraint Type` resolves to `true`. It enables flexible compile-time type verification without incurring any runtime cost and without requiring code generation.
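For example, the `extends` form could pin down a structural relationship without any runtime cost (a sketch in the proposed syntax; `Animal`/`Dog` are made-up names):

```ts
interface Animal { name: string }
interface Dog extends Animal { bark(): void }

static assert Dog extends Animal; // passes, nothing emitted
static assert Animal extends Dog; // compile-time error
```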
- If no `default:` is given in a constraint type, it's to be treated as if it ended with `default: throw`.
- Near the bottom, I have a pseudo-BNF grammar summary.
Also, I'm open to using other symbols, markers, or keywords for the impossibility assertion.
Constraint types resolve to the type of the first constraint to match `true`, and partial matching results in constraint narrowing. For example:
enum E { Foo, Bar }
type Foo<T> = [
case T extends number: string,
case T extends E: void,
case T extends number | string: number,
];
let value1: Foo<number>; // Resolves to `string`
let value2: Foo<E>; // Resolves to `string`, since `number` is met before `E`
let value3: Foo<string>; // Resolves to `number`
If the first matched constraint resolves to `throw`, or if no constraint is matched, the entire expression is considered incompatible. Otherwise, it resolves to the first matched constraint's type.
As an example:
type Foo<T> = [
case T extends number: throw,
case T extends string: T,
];
let okay: Foo<string>; // Okay, equivalent to `string`
let bad: Foo<number>; // Error
When constraints use type predicates, they alter the type flow within control flow structures. When the type predicate's binding is narrowed, the corresponding type is also similarly narrowed using the constraint type. For example:
let value: number | string;
let other: [case value is number: string, case value is string: boolean];
if (typeof value === "number") {
// `other` is of type `string`, following the left part of the constraint
} else {
// `other` is of type `boolean`, because of the right part of the constraint
}
When types are found to match a constraint, they are also narrowed. The following code should validate due to type narrowing:
type AssertBool<T extends boolean> = T;
type Foo<T> = [
// This case checks.
case T extends boolean: AssertBool<T>,
default: never,
];
In addition, type narrowing and resolution (where possible) is done before checking type compatibility. This is to ensure the following two types are compatible:
type Foo<K extends number | string> = [
case K extends string: FooString,
case K extends number: FooNumber,
];
type Foo<K extends number | string> = [
case K extends string: FooString,
default: FooNumber,
];
Constraint types may be used recursively if and only if all the following conditions are met:
- The current generics are not assignable to the passed generics.
- At least one current generic is constrained.
- The recursion is performed in the case with the type constraint.
- After narrowing, the new argument type for the same position is assignable to either:
- The constrained generic type.
- Any property of the constrained generic, possibly nested.
- Any function argument of the constrained generic, possibly nested.
These rules are also applied in nested constraint types, so recursion can happen at depth, and multiple constraints can be asserted before recursing.
There are multiple reasons for allowing such limited recursion:
- It allows deductive recursion, required for properly typing Promises.
- It ensures recursive constraints remain strongly normalizing, avoiding Turing completeness and issues like this.
- It is statically verifiable within the type itself.
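As a sketch of recursion these rules would permit, following the same shape as `ReduceThenable` (the `Flatten` name is made up):

```ts
// Legal: the recursion happens in the case with the type constraint,
// and after narrowing, the new argument `U` is assignable to the
// constrained generic.
type Flatten<T> = [
  case U in T extends U[]: Flatten<U>,
  default: T,
];
// `Flatten<number[][]>` would resolve to `number`.
```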
Constraint types can attach their bindings to parameters within functions and lambdas as well as local values. Here's an example:
let func: (value: number | string) => [
case value is number: string,
case value is string: boolean,
];
func(2); // returns type `string`
func("hello!"); // returns type `boolean`
This leads to equivalence with overloads. The above lambda could have equivalently been defined this way:
declare function func(value: number): string;
declare function func(value: string): boolean;
When functions return a constraint type, the binding for the callee's relevant return constraints carries over to the corresponding argument at the call site. To clarify, here's what I mean:
let func: (value: number | string, extra: any) => [
case value is number: string,
case value is string: boolean,
];
// The corresponding inferrable return type
let arg: number | string;
let ret: [
case arg is number: string,
case arg is string: boolean,
] = func(arg, "foo");
Constraints may choose to mix and match both type predicates and type parameters/constraints:
// This is perfectly legal
let func: <T>(value: number | string, extra: T) => [
case value is number: string,
case T extends string: boolean,
];
let arg: number | string;
let ret: [
case arg is number: string,
default: boolean,
] = func(arg, "foo");
let bad = func(arg, new Map()); // Error - no constraints match
The following additional type-level operators are proposed, with equivalent desugarings:
// Pseudocode format: `operator<T> op T = Value<T>;`
// Parameters should be macro-expanded, not filled in.
// Types are capitalized, bindings are lowercased.
!T = T extends false;
T && U = [case T: U, default: false];
T || U = [case T: true, default: U];
T == U = (T extends U && U extends T);
T != U = !(T == U);
- `!T` requires that `constraint T` resolves to `true`.
- `T && U` and `T || U` require that `constraint T` and `constraint U` both resolve to `true`.
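A sketch of how these operators could compose into a reusable constraint (the `SameKeys` name is hypothetical):

```ts
// `true` exactly when `A` and `B` have the same set of keys,
// per the `==` desugaring above.
type SameKeys<A, B> = (keyof A == keyof B);

// Usable anywhere a constraint is expected:
declare function merge<A, B if SameKeys<A, B>>(a: A, b: B): A & B;
```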
Some existing syntax and other existing proposals could also in theory be done/redone as syntax sugar built on this, with identical semantics.
// Existing
T & U = [case V in V extends T && V extends U: V];
T | U = [case V in V extends T || V extends U: V];
// Proposed
returnof T = [case R in T extends (...args: any[]) => R: R];
binding as T = binding is T || [default U in: binding is T | U];
T - U = [case V in V extends T && !(V extends Partial<U>): V];
T ^ U = [case V in [case V extends T: !(V extends U), default: V extends U]: V];
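For instance, the `returnof` sugar would behave like this sketch (proposed semantics, not current behavior):

```ts
declare function makeId(): number;
// Desugars to `[case R in typeof makeId extends (...args: any[]) => R: R]`,
// so `R` is deduced from the function's return position.
type Id = returnof typeof makeId; // would resolve to `number`
```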
There are also a few immediately usable types I propose should be added:
type Assert<T, Cond if constraint Cond> = [
case Cond: T,
default: throw `Assertion failed: ${Cond}`,
];
type Exact<T> = [
case U in T == U: T,
default U in: throw `Expected ${U} == ${T}`,
];
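A sketch of how `Assert` would resolve under the rules above (not current behavior):

```ts
type Ok = Assert<number, number extends string | number>; // resolves to `number`
type Bad = Assert<number, number extends string>;
// Error, reported as "Assertion failed: number extends string"
```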
Also, `Promise<T>` should leverage the new type changes as specified previously. (`PromiseLike<T>` should remain as-is, to account for misbehaving thenables.)
- Syntax: There exist no unresolvable conflicts as far as I can tell, apart from requiring no line terminator after `constraint` and before `is`, `extends`, and `static` to work around ASI. The token sequences `[ case` and `[ default` are the only way to start constraint types, and are impossible in currently valid TypeScript. As for the other type expressions:
  - `Type extends (Foo)` and `static assert (Cond)` are only valid when a newline occurs before `extends`.
  - `constraint (Type)` is never valid in type expressions without a newline after the keyword.
  - `value is (Type)` can have the same disambiguation rules applied as is already the case in return type positions.
  - `if` is currently unambiguously invalid as a type constraint.
  - `!` is currently unambiguously invalid at the start of a type expression.
  - `&&`, `||`, `==`, and `!=` are all unambiguously invalid after a type expression.
  - `T - U` can follow the same ASI disambiguation as is already done at the expression level.
- Semantics: It only adds new semantics to `value is Type`, and does not break the existing use case for custom type guards. The one breaking change would be strengthening the `Promise<T>` type.
- Emit: This proposal has no emit component.
- JavaScript compatibility: N/A.
- Compiler speed: It may slow the compiler down some, but only marginally without the direct use of constraint types. It's effectively reifying the type flow, so it already has a basis to start from, and much of the legwork has likely already been addressed.
- IDE support: It will piggyback off of type flow tracking while enhancing it, so it may have a positive impact. Constraint types and function overloads can still remain separate for purposes of the language service, while sharing a common core.
Here's a quick pseudo-BNF summary of the syntax:
<constraint_type> ::
[ <constraint_items> ]
[ <constraint_items> , ]
<constraint_items> ::
<constraint_case> <constraint_result>
<constraint_default> <constraint_result>
<constraint_case> <constraint_result> , <constraint_items>
<constraint_case> ::
case <type> :
case <identifier> in <type> :
<constraint_default> ::
default :
default <identifier> in :
<constraint_result> ::
<type>
throw
throw <string>
throw <template_string>
<type> ::
any existing valid type
<identifier> is <type>
<type> extends <type>
constraint <type>
! <type>
<type> && <type>
<type> || <type>
<type> == <type>
<type> != <type>
<type_constraint> ::
any existing type constraint
<identifier> if <type>