Archived: Web Platform Design Principles


1. Principles behind design of Web APIs

The Design Principles are directly informed by the ethical framework set out in the Ethical Web Principles [ETHICAL-WEB]. These principles provide concrete practical advice in response to the higher level ethical responsibilities that come with developing the web platform.

1.1. Put user needs first (Priority of Constituencies)

If a trade-off needs to be made, always put user needs above all.

Similarly, when beginning to design an API, be sure to understand and document the user need that the API aims to address.

The internet is for end users: any change made to the web platform has the potential to affect vast numbers of people, and may have a profound impact on any person’s life. [RFC8890]

User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.

Like all principles, this isn’t absolute. Ease of authoring affects how content reaches users. User agents have to prioritize finite engineering resources, which affects how features reach authors. Specification writers also have finite resources, and theoretical concerns reflect underlying needs of all of these groups.

See also:

1.2. It should be safe to visit a web page

When adding new features, design them to preserve the user expectation that visiting a web page is generally safe.

The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.

For example, an API which allows any website to detect the use of assistive technologies may make users of these technologies feel unsafe visiting unknown web pages, since any web page may detect this private information.

If users have a realistic expectation of safety, they can make informed decisions between Web-based technologies and other technologies. For example, users may choose to use a web-based food ordering page, rather than installing an app, since installing a native app is riskier than visiting a web page.

To work towards making sure the reality of safety on the web matches users' expectations, we can take complementary approaches when adding new features:

  • We can improve the user interfaces through which the Web is used to make it clearer what users of the Web should (and should not) expect;

  • We can change the technical foundations of the Web so that they match user expectations of privacy;

  • We can consider the cases where users would be better off if expectations were higher, and in those cases try to change both technical foundations and expectations.

A new feature which introduces safety risks may still improve user safety overall, if it allows users to perform a task more safely on a web page than they could by installing a native app to do the same thing. However, this benefit needs to be weighed against the common goal of users having a reasonable expectation of safety on web pages.

See also:

1.3. Trusted user interface should be trustworthy

Consider whether new features impact trusted user interfaces.

Users depend on trusted user interfaces, such as the address bar, security indicators, and permission prompts, to understand who they are interacting with and how. It must be possible to design these trusted user interfaces in a way that enables users to trust and verify that the information they present is genuine, and hasn’t been spoofed or hijacked by the website.

If a new feature allows untrusted user interfaces to resemble trusted user interfaces, this makes it more difficult for users to understand what information is trustworthy.

For example, JavaScript alert() allows a page to show a modal dialog which looks like part of the browser. This is often used to attempt to trick users into visiting scam websites. If this feature was proposed today, it would probably not proceed.

1.4. Ask users for meaningful consent when appropriate

If a useful feature has the potential to cause harm to users, make sure that the user can give meaningful consent for that feature to be used, and that they can refuse consent effectively.

In order to give meaningful consent, the user must:

  • understand what permission they are being asked to grant to the web page

  • be able to choose to give or refuse that permission effectively.

If a feature is powerful enough to require user consent, but it’s impossible to explain to a typical user what they are consenting to, that’s a signal that you may need to reconsider the design of the feature.

If a permission prompt is shown, and the user doesn’t grant permission, the Web page should not be able to do anything that the user believes they have refused consent for.

By asking for consent, we can inform the user of what capabilities the web page does or doesn’t have, reinforcing their confidence that the web is safe. However, the user benefit of a new feature must justify the additional burden on users to decide whether to grant permission for each feature whenever it’s requested by a Web page.

For example, the Geolocation API grants access to a user’s location. This can help users in some contexts, like a mapping application, but may be dangerous to some users in other contexts - especially if used without the user’s knowledge. So that the user may decide whether their location may be used by a Web page, a permission prompt should be shown to the user asking whether to grant location access. If the user refuses permission, no location information is available to the Web page.
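This flow can be sketched in JavaScript. The helper name `requestLocation` is illustrative, but `getCurrentPosition` and the `PERMISSION_DENIED` error code (1) are part of the real Geolocation API:

```javascript
// Hypothetical wrapper around the Geolocation API: resolves with
// coordinates on success, with null when the user refuses consent,
// and rejects on other failures.
function requestLocation(geolocation) {
  return new Promise((resolve, reject) => {
    geolocation.getCurrentPosition(
      (position) => resolve(position.coords),
      (error) => {
        // error.code === 1 is PERMISSION_DENIED: the user refused, so no
        // location information is made available to the page.
        if (error.code === 1) resolve(null);
        else reject(error);
      }
    );
  });
}
```

In a browser this would be called as `requestLocation(navigator.geolocation)`; a `null` result means the page learns nothing about the user’s location.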

See also:

1.5. Support the full range of devices and platforms (Media Independence)

As much as possible, ensure that features on the web work across different input and output devices, screen sizes, interaction modes, platforms, and media.

One of the main values of the Web is that it’s extremely flexible: a Web page may be viewed on virtually any consumer computing device at a very wide range of screen sizes, may be used to generate printed media, and may be interacted with in a large number of different ways. New features should match the existing flexibility of the web platform.

This doesn’t imply that features which don’t work in every possible context should be excluded. For example, hyperlinks can’t be visited when printed on paper, and the click event doesn’t translate perfectly to touch input devices where positioning and clicking the pointer occur in the same gesture (a "tap").

These features still work across a wide variety of contexts, and can be adapted to devices that don’t support their original intent - for example, a tap on a mobile device will fire a click event as a fallback.

Features should also be designed so that the easiest way to use them maintains flexibility.

The 'display: block', 'display: flex', and 'display: grid' layout models in CSS all default to placing content within the available space and without overlap, so that it works across screen sizes, and allows users to choose their own font and font size without causing text to overflow.

2. API Design Across Languages

2.1. Prefer simple solutions

Look hard for simple solutions to the user needs you intend to address.

Simple solutions are generally better than complex solutions, although they may be harder to find. Simpler features are easier for user agents to implement and test, more likely to be interoperable, and easier for authors to understand. It is especially important to design your feature so that the most common use cases are easy to accomplish.

Make sure that your user needs are well-defined. This allows you to avoid scope creep, and make sure that your API does actually meet the needs of all users. Of course, complex or rare use cases are also worth solving, though their solutions may be more complicated to use. As Alan Kay said, "simple things should be simple, complex things should be possible."

Do note however that while common cases are often simple, commonality and complexity are not always correlated.

Sanitizing HTML to prevent XSS attacks is a complex process that requires extensive security knowledge; however, the Sanitizer API provides a shortcut for this common use case. More complex types of filtering are also possible, with more configuration.
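The shape of such an API can be illustrated with a sketch. This is not a real Web API, and escaping is a far simpler operation than real HTML sanitization; the point is only the principle that the common case needs no configuration, while a rarer case stays possible through options:

```javascript
// Illustrative only (not a real Web API): the common case takes no
// options, while a rarer case is reachable through an options object.
function escapeHTML(input, { keepNewlines = false } = {}) {
  const escaped = String(input)
    .replace(/&/g, '&amp;')   // escape the escape character first
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
  // Rarer case: preserve line structure in the escaped output.
  return keepNewlines ? escaped.replace(/\n/g, '<br>') : escaped;
}
```

The common call is just `escapeHTML(untrusted)`; "simple things are simple, complex things are possible."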

See also:

2.2. Resolving tension between high level and low level APIs

High-level APIs allow user agents more ability to intervene in various ways on behalf of the user, such as to ensure accessibility, privacy, or usability.

A font picker (high level API) was recommended by the TAG over a Font Enumeration API (low level API) as it addresses the bulk of use cases while preserving user privacy, since it is free from the fingerprinting concerns that accompany a general Font Enumeration API. A native font picker also comes with accessibility built-in, and provides consistency for end users.

Low-level APIs afford authors room for experimentation so that high level APIs can organically emerge from usage patterns over time. They also provide an escape hatch when the higher-level API is not adequate for the use case at hand.

Lower level building blocks cannot always be exposed as Web APIs. A few possible reasons for this are to preserve the user’s security and privacy, or to avoid tying Web APIs to specific hardware implementations. However, high level APIs should be designed in terms of building blocks over lower level APIs whenever possible. This may guide decisions on how high level the API needs to be.

A well-layered solution should ensure continuity of the ease-of-use vs power tradeoff curve and avoid sharp cliffs where a small amount of incremental use case complexity results in a large increase of code complexity.

2.3. Name things thoughtfully

Name APIs with care. Naming APIs well makes it much easier for authors to use them correctly.

See the more detailed Naming principles section for specific guidance on naming.

2.4. Consistency

It is good practice to consider precedent in the design of your API and to try to be consistent with it.

There is often a tension between API ergonomics and consistency, when existing precedent is of poor usability. In some cases it makes sense to break consistency to improve usability, but the improvement should be very significant to justify this.

Since the web platform has gradually evolved over time, there are often multiple conflicting precedents which are mutually exclusive. You can weigh which precedent to follow by taking into account prevalence (all else being equal, follow the more popular precedent), API ergonomics (all else being equal, follow the more usable precedent), and API age (all else being equal, follow the newer precedent).

There is often a tension between internal and external consistency. Internal consistency is consistency with the rest of the system, whereas external consistency is consistency with the rest of the world. In the web platform, that might materialize in three layers: consistency within the technology the API belongs to (e.g. CSS), consistency with the rest of the web platform, and in some cases external precedent, when the API relates to a particular specialized outside domain. In those cases, it is useful to consider what the majority of users will be. Since for most APIs the target user is someone who is familiar with the technology they are defined in, err on the side of favoring consistency with that.

One example is Lab colors: It would be more consistent with the rest of CSS to use percentages for L (0%-100%), but more consistent with the rest of Color Science to use a unitless number (0-100). There was a lot of heated debate, which resolved in favor of percentages, i.e. consistency within CSS.

There is also a separate section on naming consistency.

2.5. New features should be detectable

Provide a way for authors to programmatically detect whether your feature is available, so that web content may gracefully handle the feature not being present.

An existing feature may not be available on a page for a number of reasons. Two of the more common reasons are because it hasn’t been implemented yet, or because it’s only available in secure contexts.

Authors shouldn’t need to write different code to handle each scenario. That way, even if an author only knows or cares about one scenario, the code will handle all of them.

When a feature is available but isn’t feasible to use because a required device isn’t present, it’s better to expose that the feature is available and have a separate way to detect that the device isn’t. This allows authors to handle a device not being available differently from the feature not being available, for example by suggesting the user connect or enable the device.

See § 9.2 Use care when exposing APIs for selecting or enumerating devices.

Authors should always be able to detect a feature from JavaScript, and in some cases the feature should also be detectable in the language where it’s used (such as @supports in CSS).
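Such detection usually amounts to checking for the feature's entry point. In the sketch below, the helper name is hypothetical and the global object is passed in so the logic is exercisable outside a browser, but the `'geolocation' in navigator` and `CSS.supports()` patterns shown in the comments are real:

```javascript
// Hypothetical detection helper, written against a passed-in global
// object so it can be tested without a browser.
function supportsGeolocation(global) {
  return typeof global.navigator === 'object' &&
         global.navigator !== null &&
         'geolocation' in global.navigator;
}

// In a real page the same pattern looks like:
//   if ('geolocation' in navigator) { /* feature available */ }
//   if (CSS.supports('display', 'grid')) { /* grid layout available */ }
```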

In some cases, it may not be appropriate to allow feature detection. Whether the feature should be detectable or not should be based on the user need for the feature: if there is a user need or design principle which would fail if feature detection were available for the feature, then you should not support feature detection.

Also, if a feature is generally not exposed to developers, it is not appropriate to support feature detection. For example, private browsing mode is a concept which is recognised in web specifications, but not exposed to authors. For private browsing mode to support the user’s needs, it must not be feature detected.

See also:

2.6. Consider limiting new features to secure contexts

Always limit your feature to secure contexts if it would pose a risk to the user without the authentication, integrity, or confidentiality that’s present only in secure contexts.

One example of a feature that should be limited to secure contexts is Geolocation, since it would be a risk to users' privacy to transmit their location in an insecure way.

For other features, TAG members past and present haven’t reached consensus on general advice. Some believe that all new features (other than features which are additions to existing features) should be limited to secure contexts. This would help encourage the use of HTTPS, helping users be more secure in general.

Others believe that features should only be limited to secure contexts if they have a known security or privacy impact. This lowers the barrier to entry for creating web pages that take advantage of new features which don’t impact user security or privacy.

Specification authors can limit most features defined in Web IDL to secure contexts by using the [SecureContext] extended attribute on interfaces, namespaces, or their members (such as methods and attributes).

However, for some types of API (e.g., dispatching an event), limitation to secure contexts should just be defined in normative prose in the specification. If this is the case, consider whether there might be scope for adding a similar mechanism to [SecureContext] to make this process easier for future API developers.

As described in § 2.5 New features should be detectable, you should provide a way to programmatically detect whether a feature is available, including cases where the feature is unavailable because the context isn’t secure.

However, if, for some reason, there is no way for code to gracefully handle the feature not being present, limiting the feature to secure contexts might cause problems for code (such as libraries) that may be used in either secure or non-secure contexts.
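One way such a library can cope is to treat "context not secure" like "feature absent", so callers handle both scenarios with one code path. The helper below is a hypothetical sketch; `isSecureContext` is a real global, and features gated on [SecureContext] are genuinely absent from `navigator` on insecure pages, which the fake objects in this sketch simulate:

```javascript
// Hypothetical guard for a library that may run in either secure or
// non-secure contexts.
function featureAvailable(global, name) {
  // [SecureContext] features are hidden on insecure pages, so report
  // them as unavailable there rather than failing later.
  if (!global.isSecureContext) return false;
  return Boolean(global.navigator) && name in global.navigator;
}
```

In a browser this would be called as `featureAvailable(window, 'serviceWorker')`.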

2.7. Don’t reveal that private browsing mode is engaged

Make sure that your feature doesn’t give authors a way to detect private browsing mode.

Some people use private browsing mode to protect their own personal safety. Because of this, the fact that someone is using private browsing mode may be sensitive information about them. This information may harm people if it is revealed to a web site controlled by others who have power over them (such as employers, parents, partners, or state actors).

Given such dangers, websites should not be able to detect that private browsing mode is engaged.

User Agents which support IndexedDB should not disable it in private browsing mode, because that would reveal that private browsing mode is engaged.

See also:

2.8. Consider how your API should behave in private browsing mode

If necessary, specify how your API should behave differently in private browsing mode.

For example, if your API would reveal information that would allow someone to correlate a single user’s activity both in and out of private browsing mode, consider possible mitigations such as introducing noise, or using permission prompts to give the user extra information to help them meaningfully consent to this tracking (see § 1.4 Ask users for meaningful consent when appropriate).

Private browsing modes enable users to browse the web without leaving any trace of their private browsing on their device. Therefore, APIs which provide client-side storage should not persist data stored while private browsing mode is engaged after it’s disengaged. This can and should be done without revealing any detectable API differences to the site.

User Agents which support localStorage should not persist storage area changes made while private browsing mode is engaged.

If the User Agent has two simultaneous sessions with a site, one in private browsing mode and one not, storage area changes made in the private browsing mode session should not be revealed to the other browsing session, and vice versa. (The storage event should not be fired at the other session’s window object.)
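From the author's side, the storage event carries the change that occurred in another same-site session; per the guidance above, a private browsing session never receives these events from the other session, and vice versa. The helper below is hypothetical and written as a pure function so it is testable, but the `key` / `newValue` fields mirror the real StorageEvent interface:

```javascript
// Hypothetical helper: applies one "storage" event to a plain-object
// mirror of the storage area, returning the updated mirror.
function applyStorageChange(mirror, event) {
  if (event.key === null) return {};                   // clear() was called
  const next = { ...mirror };
  if (event.newValue === null) delete next[event.key]; // removeItem()
  else next[event.key] = event.newValue;               // setItem()
  return next;
}
```

In a browser it would be wired up as `window.addEventListener('storage', (e) => { mirror = applyStorageChange(mirror, e); })`.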

See also:

2.9. Don’t reveal that assistive technologies are being used

Make sure that your API doesn’t provide a way for authors to detect that a user is using assistive technology without the user’s consent.

The web platform must be accessible to people with disabilities. If a site can detect that a user is using an assistive technology, that site can deny or restrict the user’s access to the services it provides.

People who make use of assistive technologies are often vulnerable members of society; their use of assistive technologies is sensitive information about them. If an API provides access to this information without the user’s consent, this sensitive information may be revealed to others (including state actors) who may wish them harm.

Sometimes people propose features which aim to improve the user experience for users of assistive technology, but which would reveal the user’s use of assistive technology as a side effect. While these are well intentioned, they violate § 1.2 It should be safe to visit a web page, so alternative solutions must be found.

See also:

2.10. Support non-"fully active" documents

After a user navigates away from a document, the document might be cached in a non-fully active state, and might be reused when the user navigates back to the entry holding the document, which makes navigation fast for users. In browsers, this is known as the back/forward cache, or "BFCache" for short. In the past, many APIs have missed specifying support for non-fully active documents, making it hard for various user agents to cache pages in the BFCache and making the user experience of navigating back and forth less optimal.

To avoid this happening with your API, you should specify support for non-fully active documents by following these guidelines:

Note: It is possible for a document to not become fully active for other reasons not related to caching, such as when the iframe holding the document gets detached. Some of the advice below might not be relevant in those cases, since the document will never return to fully active again.

2.10.1. Gate actions with fully active checks

When performing actions that might update the state of a document, be aware that the document might not be fully active, and is then considered "non-existent" from the user’s perspective. This means it should not receive updates or perform actions.

Note: It is possible for a fully active document to be perceived as "non-existent" by users, such as when the document is displaying prerendered content. These documents might behave differently than non-fully active documents, and the guidelines here might not be applicable to them, as it is written only for handling non-fully active documents.

In many cases, anything that happens while the document is not fully active should be treated as if it never happened. If it makes more sense to "update" a document to ensure it does not hold stale information after it becomes fully active again, consider the § 2.10.2 Listen for changes to fully active status pattern below.

APIs that periodically send information updates, such as the Geolocation API’s watchPosition(), should not send updates if the document is no longer fully active. They also should not queue those updates to arrive later; instead, they should only resume sending updates when the document becomes fully active again, possibly sending one update with the latest information at that point.

2.10.2. Listen for changes to fully active status

When a document goes from fully active to non-fully active, it should be treated similarly to the way discarded documents are treated. The document must not retain exclusive access to shared resources and must ensure that no new requests are issued and that connections that allow for new incoming requests are terminated. When a document goes from non-fully active to fully active again, it can restore connections if appropriate.

While web authors can manually do cleanup (e.g. release the resources, sever connections) from within the pagehide event and restore them from the pageshow event themselves, doing this automatically from the API design allows the document to be kept alive after navigation by default, and is more likely to lead to well-functioning web applications.
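The manual version of this cleanup can be sketched as follows. `wireBFCacheHandlers` and the `connect`/`disconnect` callbacks are hypothetical application code, while `pagehide`, `pageshow`, and `event.persisted` are real platform features:

```javascript
// Hypothetical wiring: release resources when the page may enter the
// BFCache, and restore them when it is shown again from the cache.
function wireBFCacheHandlers(win, { connect, disconnect }) {
  win.addEventListener('pagehide', () => {
    // Sever connections and release shared resources; the document may
    // be kept alive in the BFCache after this event.
    disconnect();
  });
  win.addEventListener('pageshow', (event) => {
    // event.persisted is true when the document was restored from the
    // BFCache rather than freshly loaded.
    if (event.persisted) connect();
  });
}
```

In a page this would be called as `wireBFCacheHandlers(window, { connect, disconnect })`; APIs designed to do this automatically spare authors the wiring.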

APIs that create live connections can pause/close the connection and possibly resume/reopen it later. It’s also possible to let the connection stay open to complete existing ongoing requests, and later update the document with the result when it gets restored, if appropriate (e.g. resource loads).

APIs that hold non-exclusive resources may be able to release the resource when the document becomes not fully active, and re-acquire them when it becomes fully active again (Screen Wake Lock API is already doing the first part).

Note: this might not be appropriate for all types of resources, e.g. if an exclusive lock is held, we cannot just release it and reacquire when fully active since another page could then take that lock. If there is an API to signal to the page that this has happened, it may be acceptable but beware that if the only time this happens is with BFCache, then it’s likely many pages are not prepared for it. If it is not possible to support BFCache, follow the § 2.10.4 Discard non-fully active documents for situations that can’t be supported pattern described below.

Additionally, when a document becomes fully active again, it can be useful to update it with the current state of the world, if anything has changed while it is in the non-fully active state. However, care needs to be taken with events that occurred while in the BFCache. When not fully active, for some cases, all events should be dropped, in some the latest state should be delivered in a single event, in others it may be appropriate to queue events or deliver a combined event. The correct approach is case by case and should consider privacy, correctness, performance and ergonomics.

Note: Making sure the latest state is sent to a document that becomes fully active again is especially important when retrofitting existing APIs. This is because current users of these APIs expect to always have the latest information. Dropping state updates can leave the document with stale information, which can lead to unexpected and hard-to-detect breakage of existing sites.

The gamepadconnected event can be sent to a document that becomes fully active again if a gamepad is connected while the document is not fully active. If the gamepad was repeatedly connected and disconnected, only the final connected event should be delivered. (This is not specified yet, see issue)
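A coalescing step for this example could be sketched as follows; `coalesceGamepadEvents` is a hypothetical helper (the behavior is not yet specified, per the issue above), though the event names are real:

```javascript
// Hypothetical coalescing step: from a buffer of connect/disconnect
// event names recorded while the document was not fully active, deliver
// only the final "connected" state and drop the intermediate churn.
function coalesceGamepadEvents(buffered) {
  const last = buffered[buffered.length - 1];
  return last === 'gamepadconnected' ? [last] : [];
}
```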

For geolocation or other physical sensors, no information about what happened while not fully active should be delivered. The events should simply resume from when the document became fully active. However, these APIs should check the state when the document becomes fully active again, to determine if a status update should be sent (e.g. is the current location far away from the location when the document becomes not fully active?), to ensure the document has the latest information, as guaranteed by the API normally.

For network connections or streams, the data received while not fully active should be delivered only when the document becomes fully active again. However, whereas a stream might have created many events with a small amount of data each, the buffered data could be delivered as a smaller number of events with more data in each.

2.10.3. Omit non-fully active documents from APIs that span multiple documents

Non-fully active documents should not be observable, so APIs should treat them as if they no longer exist. They should not be visible to the "outside world" through document-spanning APIs (e.g. clients.matchAll(), window.opener).

Note: This should be rare since cross-document-spanning APIs are themselves relatively rare.

2.10.4. Discard non-fully active documents for situations that can’t be supported

If supporting non-fully active documents is not possible for certain cases, explicitly specify this by discarding the document if the situation happens after the user has navigated away, or by setting the document’s salvageable bit to false if the situation happens before or during the navigation away from the document, causing it to be automatically discarded after navigation.

Note: this should be rare and probably should only be used when retrofitting old APIs, as new APIs should always strive to work well with BFCache.

Calling clients.claim() should not wait for non-fully active clients, instead it should cause the non-fully active client documents to be discarded. (This is currently not specified, see issue)

2.10.5. Be aware that per-document state/data might persist after navigation

As a document might be reused even after navigation, be aware that tying something to a document’s lifetime also means reusing it after navigations. If this is not desirable, consider listening to changes to the fully active state and doing cleanup as necessary (see above).

Sticky activation is determined by the "last activation timestamp", which is tied to a document. This means that after a user triggers activation once on a document, the document will have sticky activation forever, even after the user has navigated away and back to it again. Whether this should actually be reset when the document stops being fully active is still under discussion.

2.11. Prioritize usability over compatibility with third-party tools

Design new features with usability as the primary goal, and compatibility with third-party tooling as a secondary goal.

The web platform benefits from a wide ecosystem of tooling to facilitate easier and faster development. A lot of the time, the syntax of an upcoming web platform feature may conflict with that of a third-party tool causing breakage. This is especially common as third-party tools are often used to prototype new web platform features.

In general, web platform features last a lot longer than most third-party tools, and thus giving them the optimal syntax and functionality should be of high priority.

In some cases, the conflict will introduce problems across a large number of web sites, necessitating the feature’s syntax to be redesigned to avoid clashes.

Array.prototype.contains() had to be renamed to Array.prototype.includes() to avoid clashes with the identically named but incompatible method from PrototypeJS, a library that was in use in millions of websites.
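The renamed method in use (the NaN behavior noted in the comment is a further reason includes() is preferred over indexOf-based checks):

```javascript
// Array.prototype.includes(), the name chosen to avoid the PrototypeJS
// clash; unlike indexOf-based checks, it also finds NaN.
const letters = ['a', 'b', 'c'];
const hasB = letters.includes('b');   // true
const findsNaN = [NaN].includes(NaN); // true, whereas [NaN].indexOf(NaN) === -1
```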

However, these cases should be exceptions.

When deciding whether to break third-party tools with new syntax, there are several factors to consider, such as the severity of the breakage, the popularity of the third-party tool, and many others.

Possibly the most important factor is how severely the usability of the web platform feature would be compromised if its syntax were changed to avoid breaking the third-party tool. If several alternatives of similar usability are being considered, it is usually preferable to prioritize the ones that inconvenience third-party tools the least.

When the CSS WG was designing CSS Grid Layout, square brackets were chosen instead of parentheses for naming grid tracks to avoid breaking Sass, a popular preprocessor.

However, if avoiding breaking the third-party tool would lead to a significant negative impact on the feature’s usability, that is rarely an acceptable tradeoff, unless the conflict would cause significant breakage of live websites.

Languages should also provide mechanisms for extensibility that authors can use to extend the language without breaking future native functionality, to reduce such dilemmas in the future.


3. HTML

This section details design principles for features which are exposed via HTML.

3.1. Re-use attribute names (only) for similar functionality

If you are adding a feature that is specified through an HTML attribute, check if there is an existing attribute name on another element that specifies similar functionality. Re-using an existing attribute name means authors can utilize existing knowledge, maintains consistency across the language, and keeps its vocabulary small.

The same attribute name, multiple, is used on both select to allow selection of multiple values, as well as on input to allow entry of multiple values.

The open attribute was introduced on the details element, and then re-used by dialog.

If you do re-use an existing attribute, try to keep its syntax as close as possible to the syntax of the existing attribute.

The for attribute was introduced on the label element, for specifying which form element it should be associated with. It was later re-used by output, for specifying which elements contributed input values to (or otherwise affected) the calculation. The syntax of the latter is broader: it accepts a space-separated list of ids, whereas the former only accepts one id. However, they both still conform to the same syntax, whereas if, e.g., one of them accepted a list of ids and the other one a selector, that would be an antipattern.

The inverse also applies: do not re-use an existing attribute name if the functionality you are adding is not similar to that of the existing attribute.

The type attribute is used on the input and button elements to further specialize the element type, whereas on every other element (e.g. link, script, style) it specifies MIME type. This is an antipattern; one of these groups of attributes should have had a different name.

3.2. Do not pause the HTML parser

Ensure that your design does not require the HTML parser to pause to handle external resources.

As a page parses, the browser discovers assets that the page needs, and figures out a priority in which they should be loaded in parallel. Such parsing can be disrupted by a resource which blocks the discovery of subsequent resources. At worst, it means the browser downloads items in series rather than parallel. At best, it means the browser queues downloads based on speculative parsing, which may turn out to be incorrect.

Features that block the parser generally do so because they want to feed additional content into the parser before subsequent content. This is the case for legacy <script src="…"> elements, which can inject into the parser using document.write(…). Due to the performance issues above, new features must not do this.

3.3. Avoid features that block rendering

Features that require resource loading or other operations before rendering the page often result in a blank page (or the previous page remaining visible). The result is a poor user experience.

Consider adding such features only in cases when the overall user experience is improved. A canonical example of this is blocking rendering in order to download and process a stylesheet. The alternative user experience is a flash of unstyled content, which is undesirable.

4. Cascading Style Sheets (CSS)

This section details design principles for features which are exposed via CSS.

4.1. Separate CSS properties based on what should cascade separately

Decide which values should be grouped together as CSS properties and which should be separate properties based on what makes sense to set independently.

CSS cascading allows declarations from different rules or different style sheets to override one another. A set of values that should all be overridden together should be grouped together in a single property so that they cascade together. Likewise, values that should be able to be overridden independently should be separate properties.

For example, the "size" and "sink" aspects of the initial-letter property belong in a single property because they are part of a single initial letter effect (e.g., a drop cap, sunken cap, or raised cap).

However, the initial-letter-align property should be separate because it sets an alignment policy for all of these effects across the document which is a general stylistic choice and a function of the script (e.g., Latin, Cyrillic, Arabic) used in the document.

4.2. Make appropriate choices for whether CSS properties are inherited

Decide whether a property should be inherited based on whether the effect of the property should be overridden or added to if set on an ancestor as well as a descendant.

If setting the property on a descendant element needs to override (rather than add to) the effect of setting it on an ancestor, then the property should probably be inherited.

If setting the property on a descendant element is a separate effect that adds to setting it on an ancestor, then the property should probably not be inherited.

A specification of a non-inherited property requiring that the handling of an element look at the value of that property on its ancestors (which may also be slow) is a "code smell" that suggests that the property likely should have been inherited. A specification of an inherited property requiring that the handling of an element ignore the value of a property if it’s the same as the value on the parent element is a "code smell" that suggests that the property likely should not have been inherited.

If a property has an effect on text, then it’s almost always true that a descendant element needs to override (rather than add to) the effect of setting it on an ancestor, and the property should be inherited. This is also needed to maintain the design principle that inserting an unstyled inline element around a piece of text doesn’t change the appearance of that text.

For example, the background-image property is not inherited.

If the background-image property had been inherited, then the specification would have had to create a good bit of complexity to avoid a partially-transparent image being visibly repeated for each descendant element. This complexity probably would have involved behaving differently if the property had the same value on the parent element, which is the "code smell" mentioned above that suggests that a property likely should not have been inherited.

Another example is the font-size property, which is inherited. It sets the size of the font used for the text in the element, and continues to apply to any descendants that don’t have a declaration setting font-size to a different value.

If the font-size property were not inherited, then it would probably have to have an initial value that requires walking up the ancestor chain to find the nearest ancestor that doesn’t have that value. This is the "code smell" mentioned above that suggests that a property likely should have been inherited.

4.3. Choose the computed value type based on how the property should inherit

Choose the computed value of a CSS property based on how it will inherit, including how values where it depends on other properties should inherit.

Inheritance means that an element gets the same computed value for a property that its parent has. This means that processing steps that happen before reaching the computed value affect the value that is inherited, and those that happen after (such as for the used value) do not.

For example, the line-height property may accept a <number> value, such as line-height: 1.4. This value represents a multiple of the font-size, so if the font-size is 20px, the actual value for the line height is 28px.

However, the computed value in this case is the <number> 1.4, not the <length> 28px. (The used value is 28px.)

The line-height property can be inherited into elements that have a different font-size, and any property on those elements which depends on line-height must take the relevant font-size into account, rather than the font-size for the element from which the line-height value was inherited.

<body style="font-size: 20px; line-height: 1.4">

  <p>This body text has a line height of 28px.</p>

  <h2 style="font-size: 200%">
    This heading has a line-height of 56px,
    not 28px, even though the line-height was declared on the body.
    This means that the 40px font won’t overflow the line height.
  </h2>
</body>

These number values are generally the preferred values to use for line-height because they inherit better than length values.

See also:

4.4. Naming of CSS properties and values

The names of CSS properties are usually nouns, and the names of their values are usually adjectives (although sometimes nouns).

Words in properties and values are separated by hyphens. Abbreviations are generally avoided.

Use the root form of words when possible rather than a form with a grammatical prefix or suffix (for example, "size" rather than "sizing").

The list of values of a property should generally be chosen so that new values can be added. Avoid values like yes, no, true, false, or things with more complex names that are basically equivalent to them.

Avoid words like "mode" or "state" in the names of properties, since properties are generally setting a mode or state.

See § 12 Naming principles for general (cross-language) advice on naming.

4.5. Content should be viewable and accessible by default

Design CSS properties or CSS layout systems (which are typically values of the display property), to preserve the content as viewable, accessible and usable by default.

For example, the default behavior of all layout systems in CSS will not lead to content being clipped, content overlapping other content, or content being unreachable by scrolling. These things should only happen if CSS features are used that are more explicitly choosing such a behavior (for example, overflow: hidden or left: -40em). They should not happen by default as a result of something like display: flex or position: relative.

5. JavaScript Language

5.1. Web APIs are for JavaScript

When designing imperative APIs for the Web, use JavaScript. In particular, you can freely rely upon language-specific semantics and conventions, with no need to keep things generalized.

For example, the CustomElementRegistry.define() method takes a reference to a Constructor Method.

This takes advantage of the relatively recent addition of classes to JavaScript, and the fact that method references are very easy to use in JavaScript.

JavaScript is standardized under the name [ECMASCRIPT].

[WEBIDL] defines a separate "ECMAScript binding" section, but this doesn’t imply that Web IDL is intended to have bindings in other programming languages.

5.2. Preserve run-to-completion semantics

Don’t modify data accessed via JavaScript APIs while a JavaScript task is running.

A JavaScript Web API is generally a wrapper around a feature implemented in a lower-level language, such as C++ or Rust. Unlike those languages, when using JavaScript developers can expect that once a piece of code begins executing, it will continue executing until it has completed.

Because of that, JavaScript authors take for granted that the data available to a function won’t change unexpectedly while the function is running.

So if a JavaScript Web API exposes some piece of data, such as an object property, the user agent must not update that data while a JavaScript task is running. Instead, if the underlying data changes, queue a task to modify the exposed version of the data.

If a JavaScript task has accessed the navigator.onLine property, and the browser’s online status changes, the property won’t be updated until the next task runs.
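This "queue a task" pattern can be sketched in plain JavaScript. OnlineStatus and underlyingChange are hypothetical stand-ins for browser internals, not real APIs:

```javascript
// Hypothetical sketch: the exposed snapshot only changes between tasks,
// never while author code is running.
class OnlineStatus {
  #online = true; // the value exposed to author code
  get online() { return this.#online; }
  // Called when the underlying (e.g. network) state changes:
  underlyingChange(value) {
    // Don't mutate the exposed value synchronously;
    // queue a task to update it instead.
    setTimeout(() => { this.#online = value; }, 0);
  }
}

const status = new OnlineStatus();
status.underlyingChange(false);
console.log(status.online); // still true within the current task
```

Within the task that called underlyingChange, the exposed value is stable; only a later task observes the update.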

5.3. Don’t expose garbage collection

Ensure your JavaScript Web APIs don’t provide a way for an author to know the timing of garbage collection.

The timing of garbage collection is different in different user agents, and may change over time as user agents work on improving performance. If an API exposes the timing of garbage collection, it can cause programs to behave differently in different contexts. This means that authors need to write extra code to handle these differences. It may also make it more difficult for user agents to implement different garbage collection strategies, if there is enough code which depends on timing working a particular way.

This means that you shouldn’t expose any API that acts as a weak reference, e.g. with a property that becomes null once garbage collection runs. Object and data lifetimes in JavaScript code should be predictable.

getElementsByTagName returns an HTMLCollection object, which may be re-used if the method is called twice on the same Document object, with the same tag name. In practice, this means that the same object will be returned if and only if it has not been garbage collected. This means that the behaviour is different depending on the timing of garbage collection.

If getElementsByTagName were designed today, the advice to the designers would be to either reliably reuse the output, or to produce a new HTMLCollection each time it’s invoked.

getElementsByTagName gives no sign that it may depend on the timing of garbage collection. In contrast, APIs which are explicitly designed to depend on garbage collection, like WeakRef or FinalizationRegistry, set accurate author expectations about the interaction with garbage collection.

6. JavaScript API Surface Concerns

6.1. Attributes should behave like data properties

[WEBIDL] attributes should act like simple JavaScript object properties.

In reality, IDL attributes are implemented as accessor properties with separate getter and setter methods. To make them act like JavaScript object properties:

  • Getters must not have any observable side effects.

  • Getters should not perform any complex operations.

  • Ensure that obj.attribute === obj.attribute is always true. Don’t create a new value each time the getter is called.

  • If possible, ensure that given obj.attribute = x, obj.attribute === x is true. (This may not be possible if some kind of conversion is necessary for x.)

If you were thinking about using an attribute, but it doesn’t behave this way, you should probably use a method instead.

For example, offsetTop performs layout, which can be complex and time-consuming. It would have been better if this had been a method like getBoundingClientRect().
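One way to satisfy the identity requirement (obj.attribute === obj.attribute) is to cache the returned object until a state change invalidates it. This is a hypothetical sketch, not an existing interface:

```javascript
// Hypothetical wrapper: the getter returns the same object until the
// underlying state changes, so obj.rect === obj.rect always holds.
class Box {
  #rect = null;
  #dirty = true;
  get rect() {                      // behaves like a data property
    if (this.#dirty) {
      this.#rect = Object.freeze({ width: 100, height: 50 });
      this.#dirty = false;
    }
    return this.#rect;              // no new value per access
  }
  resize() { this.#dirty = true; }  // a state change invalidates the cache
}

const box = new Box();
console.log(box.rect === box.rect); // true: identical object each time
```

A getter that built a fresh object on every access would break the identity check, which is the cue to use a method instead.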

6.2. Consider whether objects should be live or static

If an API gives access to an object representing some internal state, decide whether that object should continue to be updated as the state changes.

An object which represents the current state at all times is a live object, while an object which represents the state at the time it was created is a static object.

Live objects

If an object allows the author to change the internal state, that object should be live. For example, DOM Nodes are live objects, to allow the author to make changes to the document with an understanding of the current state.

Properties of live objects may be computed as they are accessed, instead of when the object is created. This sometimes makes live objects a better choice when the data needed is complex to compute, since there is no need to compute all of the data before the object is returned.

A live object may also use less memory, since there is no need to copy data to a static version.

Static objects

If an object represents a list that might change, most often the object should be static. This is so that code iterating over the list doesn’t need to handle the possibility of the list changing in the middle.

getElementsByTagName returns a live object which represents a list, meaning that authors need to take care when iterating over its items:

let list = document.getElementsByTagName("td");

for (let i = 0; i < list.length; i++) {
    let td = list[i];
    let tr = document.createElement("tr");
    tr.innerHTML = td.outerHTML;

    // This has the side-effect of removing td from the list,
    // causing the iteration to become unpredictable.
    td.parentNode.replaceChild(tr, td);
}

The choice to have querySelectorAll() return static objects was made after spec authors noticed that getElementsByTagName was causing problems.
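The hazard generalizes beyond the DOM. This plain-JavaScript sketch (no DOM APIs) shows why iterating code usually wants a static snapshot:

```javascript
// Mutating a list while iterating it is unpredictable; copying it
// first gives the iteration a static snapshot to work from.
const live = ["a", "b", "c", "d"]; // stands in for a live collection
const visited = [];
for (const item of [...live]) {    // iterate a static copy
  visited.push(item);
  live.splice(live.indexOf(item), 1); // mutation can't disturb the loop
}
console.log(visited); // ["a", "b", "c", "d"]
```

Iterating `live` directly while removing items would skip every other entry, which is exactly the getElementsByTagName problem above.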

URLSearchParams isn’t static, even though it represents a list, because it’s the way for authors to change the query string of a URL.

Note: For maplike and setlike types, this advice may not apply, since these types were designed to behave well when they change while being iterated.

If it would not be possible to compute properties at the time they are accessed, a static object avoids having to keep the object updated until it’s garbage collected, even if it isn’t being used.

If a static object represents some state which may change frequently, it should be returned from a method, rather than available as an attribute.

See also:

6.3. Use attributes or methods appropriately

Sometimes it is unclear whether to use an attribute or a method.

  • Attribute getters must not have any (observable) side effects. If you have expected side effects, use a method.

  • Attribute getters should not perform any blocking operations. If a getter requires performing a blocking operation, it should be a method.

  • If the underlying object has not changed, attribute getters should return the same object each time it is called. This means obj.attribute === obj.attribute must always hold. Returning a new value from an attribute getter each time is not allowed. If this does not hold, the getter should be a method.

Note: An antipattern example of a blocking operation is a getter like offsetTop, which performs layout.

For attributes, whenever possible, preserve values given to the setter for return from the getter. That is, given obj.attribute = x, a subsequent obj.attribute === x should be true. (This will not always be the case, e.g., if a normalization or type conversion step is necessary, but should be held as a goal for normal code paths.)

The object you want to return may be live or static. This means:

  • If live, then return the same object each time, until a state change requires a different object to be returned. This can be returned from either an attribute or a method.

  • If static, then return a new object each time. In that case, this should be a method.

6.4. Prefer dictionary arguments over primitive arguments

API methods should generally use dictionary arguments instead of a series of optional primitive arguments.

This makes the code that calls the method much more readable. It also makes the API more extensible in the future, particularly if multiple arguments with the same type are needed.

For example,

new Event("example",
          { bubbles: true,
            cancelable: false})

is much more readable than

new Event("example", true, false)


window.scrollBy({ left: 50, top: 0 })

is more readable than

window.scrollBy(50, 0)

The dictionary itself should be an optional argument, so that if the author is happy with all of the default options, they can avoid passing an extra argument.

For example,

element.scrollIntoView(false, {});

is equivalent to

element.scrollIntoView(false);
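The pattern is easy to express with default parameter values. This sketch uses a stand-alone function, not the real window.scrollBy:

```javascript
// Sketch: the options dictionary is itself optional, and each member
// has a default, so callers can omit any or all of it.
function scrollBy({ left = 0, top = 0 } = {}) {
  return { left, top };
}

console.log(scrollBy());             // { left: 0, top: 0 }
console.log(scrollBy({ left: 50 })); // { left: 50, top: 0 }
```

The `= {}` default means calling with no argument behaves the same as passing an empty dictionary.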
See also:

6.5. Make method arguments optional if possible

If an argument for an API method has a reasonable default value, make that argument optional and specify the default value.

addEventListener() takes an optional boolean useCapture argument. This defaults to false, meaning that the event should be dispatched to the listener in the bubbling phase by default.

Note: Exceptions have been made for legacy interoperability reasons (such as XMLHttpRequest), but this should be considered a design mistake rather than recommended practice.

The API must be designed so that if an argument is left out, the default value used is the same as converting undefined to the type of the argument. For example, if a boolean argument isn’t set, it must default to false.
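A hypothetical sketch of that rule: the default for an omitted boolean must match the result of converting undefined to a boolean.

```javascript
// Sketch: omitting the argument and passing `undefined` must behave
// identically, so the default is Boolean(undefined), i.e. false.
function listen(type, useCapture = false) {
  return { type, capture: Boolean(useCapture) };
}

console.log(listen("click").capture);            // false
console.log(listen("click", undefined).capture); // false, same as omitting it
```

If the default were true instead, `listen("click")` and `listen("click", undefined)` would disagree, which the rule forbids.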


See also:

6.6. Naming optional arguments

Name optional arguments to make the default behavior obvious without being named negatively.

This applies whether they are provided in a dictionary or as single arguments.

addEventListener() takes an options object which includes an option named once. This indicates that the listener should not be invoked repeatedly.

This option could have been named repeat, but that would require the default to be true. Instead of naming it noRepeat, the API authors named it once, to reflect the default behaviour without using a negative.

Other examples:

  • passive rather than active, or

  • isolate rather than connect, or

  • private rather than public

See also:

6.7. Classes should have constructors when possible

Make sure that any class that’s part of your API has a constructor, if appropriate.

By default, [WEBIDL] interfaces generate "non-constructible" classes: trying to create instances of them using new X() will throw a TypeError. To make them constructible, add an appropriate constructor operation to your interface, and define the algorithm for creating new instances of your class.

This allows JavaScript developers to create instances of the class for purposes such as testing, mocking, or interfacing with third-party libraries which accept instances of that class. It also gives authors the ability to create a subclass of the class, which is otherwise prevented, because of the way JavaScript handles subclasses.

This won’t be appropriate in all cases. For example:

  • Some objects represent access to privileged resources, so they need to be constructed by factory methods which can access those resources.

  • Some objects have very carefully controlled lifecycles, so they need to be created and accessed through specific methods.

  • Some objects represent an abstract base class, which shouldn’t be constructed, and which authors should not be able to define subclasses for.

The Event class, and all its derived interfaces, are constructible. This is useful when testing code which handles events: an author can construct an Event to pass to a method which handles that type of event.
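For instance, an author can construct and inspect an Event directly (Event is available in browsers, and as a global in Node.js 15+):

```javascript
// Constructing an event instance, e.g. to test an event handler:
const event = new Event("example", { bubbles: true });
console.log(event.type);    // "example"
console.log(event.bubbles); // true

// Because the class is constructible, subclassing also works:
class ExampleEvent extends Event {
  constructor(detail) {
    super("example");
    this.detail = detail; // hypothetical extra state for illustration
  }
}
console.log(new ExampleEvent(42).detail); // 42
```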

The Window class isn’t constructible, because creating a new window is a privileged operation with significant side effects. Instead, the open() method is used to create new windows.

The ImageBitmap class isn’t constructible, as it represents an immutable, ready-to-paint bitmap image, and the process of getting it ready to paint must be done asynchronously. Instead, the createImageBitmap() factory method is used to create it.

The DOMTokenList class is, sadly, not constructible. This prevents the creation of custom elements that expose their token list attributes as DOMTokenLists.

Several non-constructible classes, like Navigator, History, or Crypto, are non-constructible because they are singletons representing access to per-window information. In these cases, something like the Web IDL namespace feature might have been a better fit, but these features were designed before namespaces, and go beyond what is currently possible with namespaces.

If your API requires this type of singleton, consider using a namespace, and File an issue on Web IDL if there is some problem with using them.

Factory methods can complement constructors, but generally should not be used instead of them. It may still be valuable to include factory methods in addition to constructors, when they provide additional benefits. A common such case is when an API includes base classes and multiple specialized subclasses, with a factory method for creating the appropriate subclass based on the parameters passed. Often the factory method is a static method on the closest common base class of the possible return types.

The createElement method is an example of a factory method that could not have been implemented as a constructor, as its result can be any of a number of subclasses of Element.

The initMouseEvent factory method only creates MouseEvent objects, which were originally not constructible, even though there was no technical reason against that. Eventually it was deprecated, and the MouseEvent object was simply made constructible.

6.8. Use synchronous APIs when appropriate

Where possible, prefer synchronous APIs when designing a new API. Synchronous APIs are simpler to use, and need less infrastructure set-up (such as making functions async).

An API should generally be synchronous if the following rules of thumb apply:

  • The API is not expected to ever be gated behind a permission prompt, or another dialog such as a device selector.

  • The API implementation will not be blocked by a lock, by filesystem or network access, or by inter-process communication.

  • The execution time is short and deterministic.

6.9. Design asynchronous APIs using Promises

If an API method needs to be asynchronous, use Promises, not callback functions.

Using Promises consistently across the web platform means that APIs are easier to use together, such as by chaining promises. Promise-using code also tends to be easier to understand than code using callback functions.

An API might need to be asynchronous if:

  • the user agent needs to prompt the user for permission,

  • some information might need to be read from disk, or requested from the network,

  • the user agent may need to do a significant amount of work on another thread, or in another process, before returning the result.

See also:

6.10. Cancel asynchronous APIs/operations using AbortSignal

If an asynchronous method can be cancelled, allow authors to pass in an AbortSignal as part of an options dictionary.

const controller = new AbortController();
const signal = controller.signal;
fetch(url, { signal });

Using AbortSignal consistently as the way to cancel an asynchronous operation means that authors can write less complex code.

For example, there’s a pattern of using a single AbortSignal for several ongoing operations, and then using the corresponding AbortController to cancel all of the operations at once if necessary (such as if the user presses "cancel", or a single-page app navigation occurs.)
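A sketch of that pattern, using a hypothetical operation() helper (AbortController is available in browsers and in Node.js 15+):

```javascript
// One controller, several operations: aborting cancels them all at once.
const controller = new AbortController();
const { signal } = controller;

function operation(name) { // hypothetical cancellable operation
  return new Promise((resolve, reject) => {
    signal.addEventListener("abort", () =>
      reject(new Error(`${name} aborted`)));
    // ...real work would call resolve(...) here
  });
}

const tasks = [operation("load"), operation("decode")];
controller.abort(); // rejects both together

Promise.allSettled(tasks).then((results) =>
  console.log(results.map((r) => r.status))); // ["rejected", "rejected"]
```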

Even if cancellation can’t be guaranteed, you can still use an AbortController, because a call to abort() on AbortController is a request, rather than a guarantee.

6.11. Use strings for constants and enums

If your API needs a constant, or a set of enumerated values, use string values.

Strings are easier for developers to inspect, and in JavaScript engines there is no performance benefit from using integers instead of strings.

If you need to express a state which is a combination of properties, which might be expressed as a bitmask in another language, use a dictionary object instead. This object can be passed around as easily as a single bitmask value.
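For example, combined state that another language might pack into an integer bitmask can be exposed as a dictionary (the flags here are hypothetical):

```javascript
// What to avoid: integer flags combined into a bitmask.
const READABLE = 1, WRITABLE = 2;
const mask = READABLE | WRITABLE;

// Preferred: a self-describing dictionary, which is just as easy to
// pass around and can gain new members later without breaking callers.
const state = {
  readable: Boolean(mask & READABLE),
  writable: Boolean(mask & WRITABLE),
};
console.log(state); // { readable: true, writable: true }
```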

6.12. Properties vs. Methods

Sometimes it is unclear whether to use a property or a method.

  • Property getters must not have any (observable) side effects. If you have expected side effects, use a method.

  • Property getters are expected to represent the state of a given object. This means a getter should be able to efficiently return a value using the existing state. If a getter does not satisfy this, it should be a method. (A notable failure of the platform in this regard is getters like offsetTop performing layout; do not repeat this mistake.)

  • If the underlying object has not changed, property getters should return the same object each time it is called. This means obj.property === obj.property must always hold. Returning a new value from a property getter each time is not allowed. If this does not hold, the getter should be a method.

For properties, whenever possible, preserve values given to the setter for return from the getter. That is, given obj.property = x, a subsequent obj.property === x should be true. (This will not always be the case, e.g., if a normalization or type conversion step is necessary, but should be held as a goal for normal code paths.)

The object you want to return may be live or static. This means:

  • If live, then return the same object each time, until a state change requires a different object to be returned. This can be returned from either a property or a method.

  • If static, then return a new object each time. In that case, this should be a method.

7. Event Design

7.1. Use promises for one time events

Follow the advice in the Writing Promise-Using Specifications guideline.

7.2. Events should fire before Promises resolve

If a Promise-based asynchronous algorithm dispatches events, it should dispatch them before the Promise resolves, rather than after.

This guarantees that once the Promise resolves, all effects of the algorithm have been applied. For example, if an author changes some state in reaction to an event dispatched by the algorithm, they can be sure that all of the state is consistent when the Promise resolves.
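The required ordering can be sketched with a plain EventTarget (available in browsers and in Node.js 15+); completeOperation is a hypothetical algorithm step:

```javascript
// Dispatch the event first, then resolve the promise.
function completeOperation(target, resolve) {
  target.dispatchEvent(new Event("statechange")); // listeners run now
  resolve();                                      // promise settles afterwards
}

const target = new EventTarget();
const order = [];
target.addEventListener("statechange", () => order.push("event"));
new Promise((resolve) => completeOperation(target, resolve))
  .then(() => order.push("promise"));
// once the microtask runs, order is ["event", "promise"]
```

Because dispatchEvent runs listeners synchronously and promise reactions run in a later microtask, the listener always observes the state before any .then() callback does.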

7.3. Don’t invent your own event listener-like infrastructure

When creating an API which allows authors to start and stop a process which generates notifications, use the existing event infrastructure to allow listening for the notifications. Create separate API controls to start/stop the underlying process.

For example, the Web Bluetooth API provides a startNotifications() method on the BluetoothRemoteGATTCharacteristic interface, which adds the object to the "active notification context set".

When the User Agent receives a notification from the Bluetooth device, it fires an event at the BluetoothRemoteGATTCharacteristic objects in the active notification context set.


7.4. Always add event handler attributes

If your API adds a new event type, add a corresponding onyourevent event handler IDL attribute to the interface of any EventTarget which may fire the new event.

It’s important to continue to define event handler IDL attributes because:

For consistency, if the event needs to be handled by HTML and SVG elements, add the event handler IDL attributes on the GlobalEventHandlers interface mixin, instead of directly on the relevant element interface(s). Similarly, add event handler IDL attributes to WindowEventHandlers rather than Window.

7.5. Events are for notification

Events shouldn’t be used to trigger changes, only to deliver a notification that a change has already finished happening.

When a window is resized, an event named resize is fired at the Window object.

It’s not possible to stop the resize from happening by intercepting the event. Nor is it possible to fire a constructed resize event to cause the window to change size. The event can only notify the author that the resize has already happened.

7.6. Guard against potential recursion

If your API includes a long-running or complicated algorithm, prevent calling into the algorithm if it’s already running.

If an API method causes a long-running algorithm to begin, you should use events to notify user code of the progress of the algorithm. However, the user code which handles the event may call the same API method, causing the complex algorithm to run recursively. The same event may be fired again, causing the same event handler to be fired, and so on.

To prevent this, make sure that any "recursive" call into the API method simply returns immediately. This technique is "guarding" the algorithm.
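A minimal sketch of guarding (Resource and onprogress are hypothetical, standing in for an API object and its event callback):

```javascript
// A flag makes re-entrant calls return immediately.
class Resource {
  #running = false;
  start(log) {
    if (this.#running) return; // guard: a recursive call is a no-op
    this.#running = true;
    log.push("started");
    this.onprogress?.();       // user code may call start() again here
    this.#running = false;
  }
}

const resource = new Resource();
const log = [];
resource.onprogress = () => resource.start(log); // re-entrant call
resource.start(log);
console.log(log); // ["started"] — the algorithm ran only once
```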

AbortSignal's add, remove and signal abort each begin with a check of the signal’s aborted flag. If the flag is set, the rest of the algorithm doesn’t run.

In this case, a lot of the important complexity is in the algorithms run during the signal abort steps. These steps iterate through a collection of algorithms which are managed by the add and remove methods.

For example, the ReadableStreamPipeTo definition adds an algorithm into the AbortSignal's set of algorithms to be run when the signal abort steps are triggered, by calling abort() on the AbortController associated with the signal.

This algorithm is likely to resolve promises causing code to run, which may include attempting to call any of the methods on AbortSignal. Since signal abort involves iterating through the collection of algorithms, it should not be possible to modify that collection while it’s running.

And since signal abort would have triggered the code which caused the recursive call back in to signal abort, it’s important to avoid running these steps again if the signal is already in the process of the signal abort steps, to avoid recursion.

Note: A caution about early termination: if the algorithm being terminated would go on to ensure some critical state consistency, be sure to also make the relevant adjustments in state before early termination of the algorithm. Not doing so can lead to inconsistent state and end-user-visible bugs when implemented as-specified.

Note: Be cautious about throwing exceptions in early termination. Keep in mind the scenario in which developers will be invoking the algorithm, and whether they would reasonably expect to handle an exception in this [perhaps rare] case. For example, will this be the only exception in the algorithm?

You won’t always be able to "guard" in this way. For example, an algorithm may have too many entry-points to reliably check all of them. If that’s the case, another option is to defer calling the author code to a later task or microtask. This avoids a stack of recursion, but can’t avoid the risk of an endless loop of follow-up tasks.

Deferring an event is often specified as "queue a task to fire an event...".

You should always defer events if the algorithm that triggers the event could be running on a different thread or process. In this case, deferral ensures the events can be processed on the correct task in the task queue.

Both the "guarding" and the "deferring" approach have trade-offs.

"Guarding" an algorithm guarantees:

  • at the time events are fired, there is no chance that the state may have changed between the guarded algorithm ending and the event firing.

  • events fired during the algorithm, such as events to notify user code of a state change made as part of the algorithm, can be fired immediately, notifying code of the change without needing to wait for the next task.

  • user code running in the event handler can observe relevant state directly on the instance object they were fired on, rather than needing to be given a copy of the relevant state with the event.

If the events are deferred instead:

  • there is no guarantee that they will be first in the task queue once the algorithm completes.

  • any other task may change the object’s state before the event fires, so you should include any state relevant to the event with the deferred event.

    • This usually involves a new subclass of Event, with new attributes to hold the state.

      For example, the ProgressEvent adds loaded, total, etc. attributes to hold the state.

  • if different parts of an algorithm need to coordinate, you may need to define an explicit state machine (well-defined state transitions) to ensure that when a deferred event fires, the behavior of inspecting or changing state is well-defined.

    For example, in [payment-request], the PaymentRequest's [[state]] internal slot explicitly tracks the object’s state through its well-defined transitions.

    • These state transitions often use the guarding technique themselves, to ensure the state transitions happen appropriately.

      For example, in [payment-request] note the guards used around the [[state]] internal slot, such as in the show() algorithm.

  • if the deferred event doesn’t need extra state, or a state machine, this probably means that the event is just signalling the completion of the algorithm. If this is true, the API should probably return a Promise instead of firing the event. See § 7.1 Use promises for one time events.
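The deferring approach can be sketched like this. The `ProgressLikeEvent` class is illustrative (modeled on, but not identical to, the real ProgressEvent), and "queue a task to fire an event" is approximated here with queueMicrotask:

```javascript
// Hypothetical Event subclass carrying a snapshot of state, in the style
// of ProgressEvent: new attributes hold the state as it was when queued.
class ProgressLikeEvent extends Event {
  constructor(type, { loaded = 0, total = 0 } = {}) {
    super(type);
    this.loaded = loaded; // snapshot taken when the event was queued
    this.total = total;
  }
}

function queueProgressEvent(target, loaded, total) {
  // Approximates "queue a task to fire an event" — other tasks may change
  // the target before this runs, so the snapshot travels with the event.
  queueMicrotask(() => {
    target.dispatchEvent(new ProgressLikeEvent("progress", { loaded, total }));
  });
}
```

Handlers then read `event.loaded` and `event.total` rather than inspecting the (possibly changed) target object.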

Note: events that expose the possibility of recursion as described in this section were sometimes called "synchronous events". This terminology is discouraged as it implies that it’s possible to dispatch an event asynchronously. All events are dispatched synchronously. What is usually meant by "asynchronous event" is deferring the firing of an event.

7.7. State and Event subclasses

Where possible, use a plain Event with a specified type, and capture any state information in the target object.

It’s usually not necessary to create new subclasses of Event.
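A minimal sketch of this principle, with an illustrative (hypothetical) `Thermometer` class: a plain Event signals the change, and handlers read the state from the target object itself.

```javascript
// Plain Event with a specified type; state lives on the target object,
// so no Event subclass is needed.
class Thermometer extends EventTarget {
  #value = 0;
  get value() { return this.#value; }
  set value(v) {
    this.#value = v;
    this.dispatchEvent(new Event("change")); // plain Event, no subclass
  }
}

const t = new Thermometer();
t.addEventListener("change", (e) => {
  // Handlers read current state directly from the target.
  console.log(e.target.value);
});
t.value = 21;
```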

7.8. How to decide between Events and Observers

In general, use EventTarget and notification Events, rather than an Observer pattern, unless an EventTarget can’t work well for your feature.

Using an EventTarget ensures your feature benefits from improvements to the shared base class, such as the addition of the once option to addEventListener().
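For example, any feature built on EventTarget automatically gained the once option when it was added to addEventListener():

```javascript
// The `once` option removes the listener after its first invocation —
// a shared-base-class improvement every EventTarget gets for free.
const et = new EventTarget();
let calls = 0;
et.addEventListener("ping", () => { calls++; }, { once: true });
et.dispatchEvent(new Event("ping"));
et.dispatchEvent(new Event("ping"));
// calls is 1: the listener ran only once.
```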

If using events causes problems, such as unavoidable recursion, consider using an Observer pattern instead.

MutationObserver, IntersectionObserver, ResizeObserver, and IndexedDB Observers are all examples of the Observer pattern.

MutationObserver replaced the deprecated DOM Mutation Events after developers noticed that DOM Mutation Events

  • fire too often

  • don’t benefit from event propagation, which makes them too slow to be useful

  • cause recursion which is too difficult to guard against.

Mutation Observers:

  • can batch up mutations to be sent to observers after mutations have finished being applied;

  • don’t need to go through event capture and bubbling phases;

  • provide a richer API for expressing what mutations have occurred.

Note: Events can also batch up notifications, but DOM Mutation Events were not designed to do this. Events don’t always need to participate in event propagation, but events on DOM Nodes usually do.

The Observer pattern works like this:

  • Each instance of the Observer class is constructed with a callback, and optionally with some options to customize what should be observed.

  • Instances begin observing specific targets, using a method named observe(), which takes a reference to the target to be observed. The options to customize what should be observed may be provided here instead of to the constructor. The callback provided in the constructor is invoked when something interesting happens to those targets.

  • Callbacks receive change records as arguments. These records contain the details about the interesting thing that happened. Multiple records can be delivered at once.

  • The author may stop observing by calling a method called unobserve() or disconnect() on the Observer instance.

  • Optionally, a method may be provided to immediately return records for all observed-but-not-yet-delivered occurrences.

IntersectionObserver may be used like this:

// App-defined helper (assumed): is at least half the element's area visible?
function isVisible(boundingClientRect, intersectionRect) {
    return ((intersectionRect.width * intersectionRect.height) /
            (boundingClientRect.width * boundingClientRect.height)) >= 0.5;
}

function checkElementStillVisible(element, observer) {
    delete element.visibleTimeout;

    // Process any observations which may still be on the task queue
    processChanges(observer.takeRecords());

    if (element.isVisible) {
        delete element.isVisible;

        // Stop observing this element
        observer.unobserve(element);

        // Element stayed visible for a second; act on it
        // (logImpressionToServer is an app-defined function)
        logImpressionToServer(element);
    }
}

function processChanges(changes) {
    changes.forEach(function(changeRecord) {
        var element = changeRecord.target;
        element.isVisible = isVisible(changeRecord.boundingClientRect,
                                      changeRecord.intersectionRect);
        if (element.isVisible) {
            // Element became visible
            element.visibleTimeout = setTimeout(() => {
                checkElementStillVisible(element, observer);
            }, 1000);
        } else {
            // Element became hidden
            if ('visibleTimeout' in element) {
                clearTimeout(element.visibleTimeout);
                delete element.visibleTimeout;
            }
        }
    });
}

// Create IntersectionObserver with callback and options
var observer = new IntersectionObserver(processChanges,
                                        { threshold: [0.5] });

// Begin observing "ad" element
var ad = document.querySelector('#ad');
observer.observe(ad);

(Example code adapted from the IntersectionObserver explainer.)

To use the Observer pattern, you need to define:

  1. the new Observer object type,

  2. an object type for observation options, and

  3. an object type for the records to be observed.

The trade-off for this extra work is the following advantages:

  • Instances can be customized at observation time, or at creation time. The constructor for an Observer, or its observe() method, can take options allowing authors to customize what is observed for each callback. This isn’t possible with addEventListener().

  • It’s easy to stop listening on multiple callbacks using the disconnect() or unobserve() method on the Observer object.

  • You have the option to provide a method like takeRecords(), which immediately fetches the relevant data, instead of waiting for an event to fire.

  • Because Observers are single-purpose, you don’t need to specify an event type.

Observers and EventTargets have these things in common:

  • Both can be customized at creation time.

  • Both can batch occurrences and deliver them at any time. EventTargets don’t need to be synchronous; they can use microtask timing, idle timing, animation-frame timing, etc. You don’t need an Observer to get special timing or batching.

  • Neither EventTargets nor Observers need to participate in a DOM tree (bubbling/capture and cancellation). Most prominent EventTargets are Nodes in the DOM tree, but many other events are standalone; for example, IDBDatabase and XMLHttpRequestEventTarget. Even when using Nodes, your events may be designed to be non-bubbling and non-cancelable.

Here is an example of using a hypothetical version of IntersectionObserver that’s an EventTarget subclass:

const io = new ETIntersectionObserver(element, { root, rootMargin, threshold });

function listener(e) {
    for (const change of e.changes) {
        // ...
    }
}

io.addEventListener("intersect", listener);
io.removeEventListener("intersect", listener);

Compared to the Observer version:

  • it’s more difficult to observe multiple elements with the same options;

  • there is no way to request data immediately;

  • it’s more work to remove multiple event listeners for the same event;

  • the author has to provide a redundant "intersect" event type.

In common with the Observer version:

  • it can still do batching;

  • it has the same timing (based on the JavaScript event queue);

  • authors can still customize what to listen for; and

  • events don’t go through capture or bubbling.

These aspects can be achieved with either design.

See also:

8. Web IDL, Types, and Units

8.1. Use numeric types appropriately

If an API you’re designing uses numbers, use one of the following [WEBIDL] numeric types, unless there is a specific reason not to:

unrestricted double

Any JavaScript number, including infinities and NaN

double

Any JavaScript number, excluding infinities and NaN

[EnforceRange] long long

Any JavaScript number from -2^63 to 2^63, rounded to the nearest integer. If a number outside this range is given, the generated bindings will throw a TypeError.

[EnforceRange] unsigned long long

Any JavaScript number from 0 to 2^64, rounded to the nearest integer. If a number outside this range is given, the generated bindings will throw a TypeError.

JavaScript has only one numeric type, Number: IEEE 754 double-precision floating point, including ±0, ±Infinity, and NaN. [WEBIDL] numeric types represent rules for modifying any JavaScript number to belong to a subset with particular properties. These rules are run when a number is passed to the interface defined in IDL, whether a method or a property setter.

If you have extra rules which need to be applied to the number, you can specify those in your algorithm.

The WEBIDL rules for converting a JavaScript number to a number with fewer bits, such as an octet (8 bits, in the range [0, 255]), involve taking the modulo of the JavaScript number. For example, to convert a JavaScript number value of 300 to an octet, the bindings will first compute 300 modulo 256, so the resulting number will be 44, which might be surprising.

Instead, you can use [EnforceRange] octet to throw a TypeError for values outside of the octet range, or [Clamp] octet to clamp values to the octet range (for example, converting 300 to 255).

This also works for the other shorter types, such as short or long.
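The three behaviours can be sketched in plain JavaScript. This is an illustration of the conversion rules, not a real bindings layer:

```javascript
// Default octet conversion: truncate toward zero, then take modulo 2^8.
function toOctetModulo(x) {
  x = Math.trunc(x);
  return ((x % 256) + 256) % 256;
}

// [Clamp] octet: clamp to [0, 255]. (Math.round rounds halfway cases up;
// Web IDL rounds halfway cases to even — close enough for this sketch.)
function toOctetClamp(x) {
  return Math.min(255, Math.max(0, Math.round(x)));
}

// [EnforceRange] octet: throw a TypeError for values outside [0, 255].
function toOctetEnforceRange(x) {
  x = Math.trunc(x);
  if (x < 0 || x > 255) {
    throw new TypeError("Value is out of range for octet");
  }
  return x;
}

console.log(toOctetModulo(300));  // 44 — probably surprising
console.log(toOctetClamp(300));   // 255
try {
  toOctetEnforceRange(300);
} catch (e) {
  console.log(e.name);            // "TypeError"
}
```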

bigint should be used only when values greater than 2^53 or less than -2^53 are expected.

An API should not support both BigInt and Number simultaneously, whether by supporting both types via polymorphism, or by adding separate, otherwise identical APIs which take BigInt and Number. This risks losing precision through implicit conversions, which defeats the purpose of BigInt.

8.2. Represent strings appropriately

When designing a web platform feature which operates on strings, use DOMString unless you have a specific reason not to.

Most string operations don’t need to interpret the code units inside of the string, so DOMString is the best choice. In the specific cases explained below, it might be appropriate to use either USVString or ByteString instead. [INFRA] [WEBIDL]

USVString is the Web IDL type that represents scalar value strings. For strings whose most common algorithms operate on scalar values (such as percent-encoding), or for operations which can’t handle surrogates in input (such as APIs that pass strings through to native platform APIs), USVString should be used.

Reflecting IDL attributes whose content attribute is defined to contain a URL (such as href) should use USVString. [HTML]

ByteString should only be used for representing data from protocols like HTTP which don’t distinguish between bytes and strings. It isn’t a general-purpose string type. If you need to represent a sequence of bytes, use Uint8Array.

8.3. Use milliseconds for time measurement

If you are designing an API that accepts a time measurement, express the time measurement in milliseconds.

Even if seconds (or some other time unit) are more natural in the domain of an API, sticking with milliseconds ensures that APIs are interoperable with one another. This means that authors don’t need to convert values used in one API to be used in another API, or keep track of which time unit is needed where.

This convention began with setTimeout() and the Date API, and has been used since then.

Note: high-resolution time is usually represented as fractional milliseconds using a floating point value, not as an integer value of a smaller time unit like nanoseconds.
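Because everything speaks milliseconds, values flow between APIs without unit conversion. For instance (with placeholder functions standing in for real work):

```javascript
function doSomeWork() { /* placeholder for real work */ }
function retry() { /* placeholder */ }

const start = performance.now();
doSomeWork();
const elapsed = performance.now() - start; // fractional milliseconds

// Retry after twice the measured duration; no seconds/ms conversion is
// needed, since performance.now() and setTimeout() both use milliseconds.
setTimeout(retry, elapsed * 2);
```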

8.4. Use the appropriate type to represent times and dates

When representing date-times on the platform, use the DOMHighResTimeStamp type. DOMHighResTimeStamp allows comparison of timestamps, regardless of the user’s time settings.

DOMHighResTimeStamp values represent a time value in milliseconds. See [HIGHRES-TIME] for more details.

Don’t use the JavaScript Date class for representing specific date-time values. Date objects are mutable (may have their value changed), and there is no way to make them immutable.
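The mutability problem is easy to demonstrate:

```javascript
// Date objects are mutable: any code holding a reference can change the
// value underneath you.
const when = new Date(0); // 1 January 1970
when.setFullYear(2999);   // the "timestamp" has now silently changed

// A DOMHighResTimeStamp is just a number of milliseconds, so a recorded
// value can never be mutated, only replaced. (performance.now() stands in
// here for any DOMHighResTimeStamp source.)
const stamp = performance.now();
```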

8.5. Use Error or DOMException for errors

Represent errors in web APIs as ECMAScript error objects (e.g., Error) or as DOMException, whether they are exceptions, promise rejection values, or properties.

9. OS and Device Wrapper APIs

New APIs are now being developed in the web platform for interacting with devices. For example, authors wish to be able to use the web to connect with their microphones and cameras, generic sensors (such as gyroscope and accelerometer), Bluetooth and USB-connected peripherals, automobiles, etc.

These APIs can wrap functionality provided by the underlying operating system, or by a native third-party library used to interact with a device. They are abstractions which "wrap" the native functionality without introducing significant complexity, while securing the API surface to the browser. Hence, these are called wrapper APIs.

This section contains principles for consideration when designing APIs for devices.

9.1. Use care when exposing identifying information about devices

If you need to give web sites access to information about a device, use the guidelines below to decide what information to expose.

Firstly, think carefully about whether it is really necessary to expose identifying information about the device at all. Consider whether your user needs could be satisfied by a less powerful API.

Exposing the presence of a device, additional information about a device, or device identifiers, each increase the risk of harming the user’s privacy.

One risk is that as more specific information is shared, the set of fingerprinting data available to sites gets larger. There are also other potential risks to user privacy.


If there is no way to design a less powerful API, use these guidelines when exposing device information:

Limit information in the id

Include as little identifiable information as possible in device ids exposed to the web platform. Identifiable information includes branding, make and model numbers, etc. You can usually use a random number or a unique id instead. Make sure that your ids aren’t guessable, and aren’t re-used.

Keep the user in control

When the user chooses to clear browsing data, make sure any stored device ids are cleared.

Hide sensitive ids behind a user permission

If you can’t create a device id in an anonymous way, limit access to it. Make sure the user can provide meaningful consent to a Web page accessing this information.

Tie ids to the same-origin model

Create distinct unique ids for the same physical device for each origin that has access to it.

If the same device is requested more than once by the same origin, return the same id for it (unless the user has cleared their browsing data). This allows authors to avoid having several copies of the same device.

Persistable when necessary

If a device id is time consuming to obtain, make sure authors can store an id generated in one session for use in a later session. You can do this by making sure that the procedure to generate the id consistently produces the same id for the same device, for each origin.

See also:

9.2. Use care when exposing APIs for selecting or enumerating devices

Look for ways to avoid enumerating devices. If you can’t avoid it, expose the least information possible.

If an API exposes the existence, capabilities, or identifiers of more than one device, all of the risks in § 9.1 Use care when exposing identifying information about devices are multiplied by the number of devices. For the same reasons, consider whether your user needs could be satisfied by a less powerful API. [LEAST-POWER]

If the purpose of the API is to enable the user to select a device from the set of available devices of a particular kind, you may not need to expose a list to script at all. An API which invokes a User-Agent-provided device picker could suffice. Such an API:

  • keeps the user in control,

  • doesn’t expose any device information without the user’s consent,

  • doesn’t expose any fingerprinting data about the user’s environment by default, and

  • only exposes information about one device at a time.

When designing an API which allows users to select a device, it may be necessary to also expose the fact that there are devices available to be picked. This does expose one bit of fingerprinting data about the user’s environment to websites, so it isn’t quite as safe as an API which doesn’t have such a feature.

The RemotePlayback interface doesn’t expose a list of available remote playback devices. Instead, it allows the user to choose one device from a device picker provided by the User Agent.

It does enable websites to detect whether or not any remote playback device is available, so the website can show or hide a control the user can use to show the device picker.

The trade-off is that by allowing websites this extra bit of information, the API lets authors make their user interface less confusing. They can choose to show a button to trigger the picker only if at least one device is available.

If you must expose a list of devices, try to expose the smallest subset that satisfies your user needs.

For example, an API which allows the website to request a filtered or constrained list of devices is one option to keep the number of devices smaller. However, if authors are allowed to make multiple requests with different constraints, they may still be able to access the full list.

Finally, if you must expose the full list of devices of a particular kind, please rigorously define the order in which devices will be listed. This can reduce interoperability issues, and helps to mitigate fingerprinting. (Sort order could reveal other information: see Fingerprinting Guidance § 5.2 Standardization for more.)

Note: While APIs should not expose a full list of devices in an implementation-defined order, they may need to for web compatibility reasons.

9.3. Design based on user needs, not the underlying API or hardware

Design new native capabilities being brought to the web based on user needs.

Avoid directly translating an existing native API to the web.

Instead, consider the functionality available from the native API, and the user needs it addresses, and design an API that meets those user needs, even if the implementation depends on the existing native API.

Be particularly careful about exposing the exact lifecycle and data structures of the underlying native APIs. When possible, consider flexibility for new hardware.

This means newly proposed APIs should be designed with careful consideration of how they are intended to be used, rather than of how the underlying hardware, device, or native API happens to work today.

9.4. Be proactive about safety

When bringing native capabilities to the web platform, try to design defensively.

Bringing a native capability to the web platform comes with many implications. Users may not want websites to know that their computers have specific capabilities. Therefore, access to anything outside of the logical origin boundary should be permission gated.

For example, if a device can store state, and that state is readable at the same time by multiple origins, a set of APIs that lets you read and write that state is effectively a side-channel that undermines the origin model of the web.

For these reasons, even if the device allows non-exclusive access, you may want to consider enforcing exclusive access per-origin, or even restricting it further to only the current active tab.

Additionally, APIs should be designed so that the applications can gracefully handle physical disruption, such as a device being unplugged.

9.5. Native APIs don’t typically translate well to the web

When adapting native operating system APIs for the web, make sure the new web APIs are designed with web platform principles in mind.

Make sure the web API can be implemented on more than one platform

When designing a wrapper API, consider how different platforms provide its functionality.

Ideally, all implementations should work exactly the same, but in some cases you may have a reason to expose options which only work on some platforms. If this happens, be sure to explain how authors should write code which works on all platforms. See § 2.5 New features should be detectable.

Underlying protocols should be open

APIs which require exchange with external hardware or services should not depend on closed or proprietary protocols. Depending on non-open protocols undermines the open nature of the web.

Design APIs to handle the user being off-line

If an API depends on some service which is provided by a remote server, make sure that the API functions well when the user can’t access the remote server for any reason.

Avoid additional fingerprinting surfaces

Wrapper APIs can unintentionally expose the user to a wider fingerprinting surface. Please read the TAG’s finding on unsanctioned tracking for additional details.

10. Other API Design Considerations

10.1. Polyfills

Polyfills can be hugely beneficial in helping to roll out new features to the web platform. The Technical Architecture Group finding on Polyfills and the Evolution of the Web offers guidance that should be considered in the development of new features, notably:

10.2. Where possible APIs should be made available in DedicatedWorker

When exposing a feature, please consider whether it makes sense to expose the feature to DedicatedWorker as well.

Many features could work out of the box in a DedicatedWorker, and not enabling the feature there could limit users' ability to run their code in a non-blocking manner.

Certain challenges can exist when trying to enable a feature on DedicatedWorker, especially if the feature requires user input by asking for permission, or showing a picker or selector. Even though this might discourage spec authors from supporting workers, we still recommend designing the feature with DedicatedWorker support in mind, so as not to add assumptions that will later make it unnecessarily hard to expose these APIs to DedicatedWorker.

10.3. New Data Formats

Always define a corresponding MIME type and extend existing APIs to support this type for any new data format.

There are cases when a new capability on the web involves adding a new data format. This can be an image, video, audio, or any other type of data that a browser is expected to ingest. New formats should have a standardized MIME type, which is strictly validated.

While legacy media formats do not always have strict enforcement for MIME types (and sometimes rely on peeking at headers, to workaround this), this is mostly for legacy compatibility reasons and should not be expected or implemented for new formats.

It is expected that spec authors also integrate the new format into existing APIs, so that it is safelisted at both ingress (e.g. decoding from a ReadableStream) and egress (e.g. encoding to a WritableStream) points from a browser’s perspective.

For example, if you are adding an image format to the web platform, first add a new MIME type for the format. After this, you would naturally add a decoder (and presumably an encoder) for said image format to support decoding in HTMLImageElement. On top of this, you are also expected to add support to egress points such as HTMLCanvasElement.toBlob() and HTMLCanvasElement.toDataURL().

For legacy reasons browsers support MIME type sniffing, but we do not recommend extending the pattern matching algorithm, due to security implications, and instead recommend enforcing strict MIME types for newer formats.

New MIME types should have a specification and should be registered with the Internet Assigned Numbers Authority (IANA).

10.4. New HTTP headers

If you are defining a new HTTP header, its syntax must conform to the HTTP specification.

If the new header must convey structured data, such as lists, dictionaries, or typed values like decimals, strings, or booleans, then the header should use the syntax defined in Structured Field Values for HTTP. This avoids consumers of the header having to write and maintain specific micro-parsers, or even worse, something that would break those existing parsers. If the new header requires data that can’t be represented by Structured Field Values, then either engage with IETF about extending the Structured Field Values syntax, or re-consider if an HTTP header is a right place to expose the data before inventing a new syntax. [RFC8941]
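As an illustration of why Structured Fields help, here is a hand-rolled serializer for a very small subset of a Structured Fields dictionary (integers, booleans, and plain strings only, with no escaping). Real producers and consumers should use a tested RFC 8941 library rather than a micro-parser like this:

```javascript
// Minimal sketch: serialize a flat object as an RFC 8941 sf-dictionary.
// Per the RFC, a member whose value is boolean true is written as the
// bare key; strings are quoted; integers appear as-is.
function sfDictionary(entries) {
  return Object.entries(entries)
    .map(([key, value]) => {
      if (value === true) return key;                // bare key = true
      if (typeof value === "number") return `${key}=${value}`;
      return `${key}="${value}"`;                    // sf-string (no escaping)
    })
    .join(", ");
}

console.log(sfDictionary({ a: 1, b: true, c: "hi" }));
// → a=1, b, c="hi"
```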

10.5. Extend existing manifest files rather than creating new ones

If your feature requires a manifest, investigate whether you can extend an existing manifest schema.

New web features should be self-contained and self-describing and ideally should not require an additional manifest file. Some of the existing manifest files include:

We encourage people to extend existing manifest files. Always try to get the changes into the original spec, or at least discuss the extension with the spec editors. Having this discussion is more likely to result in a better design and lead to something that better integrates with the platform.

When designing new keys and values for a manifest, make sure they are needed (that is, they enable well-thought-out use-cases). Also, please check if a similar key exists. If an existing key/value pair does more or less what is needed, work with the existing spec to extend it to your use-case if possible.

There are certain times the original spec authors might not want to integrate changes to their manifest format immediately. This may be due to process (like going to CR), or due to the addition having a different scope, like extensions to Web App Manifest only affecting store or payment use-cases. In that case, it is acceptable to monkey-patch as long as that is agreed with the original spec editors.

Issue: When we write up a principle on monkey patching, be sure to take this nuance into account. [Issue #184]

An example of something that was done as a monkey patch that is scheduled to be integrated into the web app manifest in a future level (post-CR):



You may need to make a new manifest file if the domain of the manifest file is different from the existing manifest files. For example, if the fetch timing is different, or if the complexity of the manifest warrants it. Application metadata should be added to the Web App Manifest or be an extension of it. Manifests designated to be used for specific applications or which require interoperability with non-browsers may need to take a different approach. Payment Method Manifest, Publication Manifest, and Origin Policy are examples of these cases.

For example, if you have a single piece of metadata, even if the fetch timing is different than an existing manifest, it is probably best to use an existing manifest (or ideally design the feature in such a way that a manifest is not required). However, if your feature requires a complex set of metadata specific to a functional domain, the creation of a new manifest may be justified.

Note that in all cases, the naming conventions should be harmonized (see § 12 Naming principles).

Note: As a principle, existing manifests use lowercase, underscore-delimited names. There have been times when it was useful to re-use dictionaries from a manifest in DOM APIs as well, which meant converting the names to a camel-cased version. One such example is the image resource. For this reason, if a key can clearly be expressed as a single word, that is recommended.

10.6. Features should be developer-friendly

Any new feature should be developer-friendly. While it is hard to quantify friendliness, at least consider the following points.

While error text in exceptions should be generic, developer-oriented error messages (such as those from a developer console) must be meaningful. When a developer encounters an error, the message should be specific to that error case, and not overly generic.

Ideally, developer-oriented error messages should have enough information to guide the developer in pinpointing where the problem is.

Declarative features, such as CSS, may require extra implementation work for debuggability. Defining this in the specification not only makes the feature more developer-friendly, it also ensures a consistent development experience for users.

A good example where debuggability was defined as part of the specification is Web Animations.

10.7. Use the best crypto, and expect it to evolve

Use only cryptographic algorithms that have been impartially reviewed by security experts, and make sure your choice of algorithm is proven and up-to-date. Cryptographic protocols and algorithms not only become obsolete or insecure over time; they also evolve quickly.

11. Writing good specifications

This document mostly covers API design for the Web, but those who design APIs are hopefully also writing specifications for the APIs that they design.

11.1. Identify the audience of each requirement in your specification

Document both how authors should write good code using your API, and how implementers of your API should handle poorly-written code.

The web, especially in comparison to other platforms, is designed to be robust in accepting poorly-formed markup. This means that web pages which use older versions of web standards can still be viewed in newer user agents, and also that authors have a shallower learning curve.

To support this, web specification writers need to describe how to interpret poorly-formed markup, as well as well-formed markup.

Implementers need to be able to understand the "supported language", which is more complex than the "conforming language" which authors should be aiming to use.

For example, the Processing model for the <table> element explains how to process the contents of a <table> element, including cases where the contents do not conform to the Content model.

11.2. Specify completely and avoid ambiguity

When specifying how a feature should work, make sure that there is enough information so that authors don’t have to write different code to work with different implementations.

If a specification isn’t specific enough, implementers might make different choices which force authors to write extra code to handle the differences.

Implementers shouldn’t need to check details of other implementations to avoid this situation. Instead, the specification should be complete and clear enough on its own.

Note: This doesn’t mean that implementations can’t render things differently, or show different user interfaces for things like permission prompts.

Note: Implementers should file bugs against specifications which don’t give them clear enough information to write the implementation.

11.2.1. Defining algorithms in specifications

Write algorithms in a way that is clear and concise.

The most common way to write algorithms is to write an explicit sequence of steps. This often looks like pseudo-code.

The showModal() method is described as a numbered sequence of steps which clearly explains when to throw exceptions and when to run algorithms defined in other parts of the HTML spec.

When writing a sequence of steps, imagine that it is a piece of functional code.

  • Clearly specify the inputs and outputs, name the algorithm and the variables it uses well, and explicitly note the points in the algorithm where the algorithm may return a result or error.

  • As much as possible, avoid writing algorithms which have side effects.

Summarize the purpose of the algorithm before going into detail, so that readers can decide whether to read the steps or skip over them. For example take the following steps, which ensure that there is at most one pending X callback per top-level browsing context.

A plain sequence of steps is not always the best way to write an algorithm. For example, it might make sense to define or re-use a formal syntax or grammar to avoid repetition, or define specific states to be used in a state machine. When using extra constructs like these, the earlier advice still applies.

Describe algorithms as closely as possible to how they would be implemented. This may make the specification harder to write, but it means implementers don’t need to work out how to translate what is written into what should actually be implemented. If they do have to translate, different implementations may make different decisions, which can lead to later features being feasible in one implementation but not another.

CSS selectors are read and understood from left to right, but in practice are matched from right to left in implementations. This allows the most specific term to be matched or not matched quickly, avoiding unnecessary work. The CSS selector matching algorithm is written this way, instead of a hypothetical algorithm which would more closely match how CSS selectors are often read by CSS authors.
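A toy sketch can make the right-to-left idea concrete. This is not the real CSS matching algorithm; it models a selector as a simplified list of tag names (ancestor to descendant) and elements as plain objects with `tag` and `parent`:

```javascript
// Match a simplified descendant selector such as ["div", "p", "span"]
// against an element by walking right-to-left: if the rightmost (most
// specific) term fails, the element is rejected immediately, with no
// ancestor traversal at all.
function matchesSelector(element, selectorParts) {
  const parts = [...selectorParts].reverse(); // rightmost term first
  // The rightmost term must match the element itself.
  if (element.tag !== parts[0]) return false;
  // Each remaining term must match some ancestor, nearest first.
  let ancestor = element.parent;
  for (const part of parts.slice(1)) {
    while (ancestor && ancestor.tag !== part) ancestor = ancestor.parent;
    if (!ancestor) return false;
    ancestor = ancestor.parent;
  }
  return true;
}
```

Most elements on a page fail the rightmost term, so most of the work above is skipped in the common case.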

See also:

11.2.2. Use explicit flags for state

Instead of describing state with words, use explicit flags for state when writing algorithms.

Using explicit flags makes it clear whether or not the state changes in different error conditions, and makes it clear when the state described by the flags is reset.
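As a sketch (the context object and callback machinery here are hypothetical), an explicit flag lets the algorithm say exactly when the state is set, checked, and reset, including on error paths:

```javascript
// Hypothetical: a context tracks whether a callback is pending with an
// explicit boolean flag rather than a prose description of its state.
function requestCallback(context, run) {
  // 1. If context's pending-callback flag is set, reject the request.
  if (context.pendingCallback) return false;
  // 2. Set the flag before running, so re-entrant requests are rejected.
  context.pendingCallback = true;
  try {
    run();
  } finally {
    // 3. Unset the flag even if run() threw, so the state never leaks.
    context.pendingCallback = false;
  }
  return true;
}
```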

12. Naming principles

Names take meaning from:

  • signposting (the name itself)

  • use (how people come to understand the name over time)

  • context (the object on the left-hand side, for example)

12.1. Use common words

API naming must be done in easily readable US English. Keep in mind that most web developers aren’t native English speakers. Whenever possible, names should be chosen that use common vocabulary a majority of English speakers are likely to understand when first encountering the name.

For example setSize is a more English-readable name than cardinality.

Value readability over brevity. Keep in mind, however, that sometimes the shorter name is the clearer one. For instance, it may be appropriate to use technical language or well-known terms of art in the specification where the API is defined.

For example, the Fetch API’s Body mixin’s json() method is named for the kind of object it returns. JSON is a well-known term of art among web developers likely to use the Fetch API. It would harm comprehension to give this API a name less directly connected to its return type. [FETCH]

12.2. Use ASCII names

Names must adhere to any local syntactic restrictions (for example, CSS ident rules), and should be in the ASCII range.

12.3. Consultation

Please consult widely on names in your APIs.

You may find good names or inspiration in surprising places.

  • What are similar APIs named on other platforms, or in popular libraries in various programming languages?

  • Ask end users and developers what they call things that your API works with or manipulates.

  • Look at other web platform specifications, and seek advice from others working in related areas of the platform.

  • Also, consult on whether the names you use are inclusive.

Pay particular attention to advice you receive with clearly-stated rationale based on underlying principles.

Tantek Çelik extensively researched how to name the various pieces of a URL. The editors of the URL spec have relied on this research when editing that document. [URL]

Use Web-consistent names

When choosing a name for a feature or API that is also exposed in other technology stacks, prefer the Web ecosystem’s naming convention over those of other communities.

The NFC standard uses the term media to refer to what the Web calls MIME type. In such cases, the naming of features or API for the purposes of Web NFC must prefer naming consistent with MIME type.

Use Inclusive Language

Use inclusive language whenever possible.

For example, you should use blocklist and allowlist instead of blacklist and whitelist, and source and replica instead of master and slave.

If you need to refer to a generic persona, such as an author or user, use the generic pronoun "they", "their", etc. For example, "A user may wish to adjust their preferences".

12.4. Future-proofing

Naming should be generic and future-proof whenever possible.

The name should not be directly associated with a brand or specific revision of the underlying technology whenever possible; technology becomes obsolete, and removing APIs from the web is difficult.

The Remote Playback API was not named after one of the pre-existing, proprietary systems it was inspired by (such as Chromecast or AirPlay). Instead, general terms that describe what the API does were chosen. [REMOTE-PLAYBACK]

The keydown and keyup KeyboardEvents were not named for the specific hardware bus that keyboards used at the time. Instead, generic names were chosen that are as applicable to today’s Bluetooth and USB keyboards as they were to PS/2 and ADB keyboards back then. [UIEVENTS]

12.5. Consistency

Naming schemes should aim for consistency, to avoid confusion.

Sets of related names should agree with each other in:

  • part of speech - noun, verb, etc.

  • negation, for example all of the names in a set should either describe what is allowed or they should all describe what is denied

Boolean properties vs. boolean-returning methods

Boolean properties, options, and API arguments that ask a question should not be prefixed with is; methods that serve the same purpose, provided they have no side effects, should be prefixed with is, to be consistent with the rest of the platform.
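The convention can be sketched with a hypothetical class (the names `Track`, `muted`, and `isCompatibleWith` are illustrative, not from any real API):

```javascript
// Hypothetical API surface: a boolean property asking a question about the
// object is a bare adjective ("muted", not "isMuted"), while a
// side-effect-free method asking a question gets an "is" prefix.
class Track {
  constructor(muted) {
    this.muted = muted;       // property: no "is" prefix
  }
  isCompatibleWith(other) {   // method: "is" prefix, no side effects
    return this.muted === other.muted;
  }
}
```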

Use casing rules consistent with existing APIs

Although they haven’t always been uniformly followed, through the history of web platform API design, the following rules have emerged:

                                      Casing rule                            Examples
  Methods and properties              Camel case                             createAttribute()
  (Web IDL attributes, operations,
  and dictionary keys)
  Classes and mixins                  Pascal case                            NamedNodeMap
  (Web IDL interfaces)
  Initialisms in APIs                 All caps, except when the first        HTMLCollection
                                      word in a method or property
  Repeated initialisms in APIs        Follow the same rule                   HTMLHRElement
  The abbreviation of "identity"      Id, except when the first word         getElementById()
                                      in a method or property
  Enumeration values                  Lowercase, dash-delimited              "no-referrer-when-downgrade"
  Events                              Lowercase, concatenated                canplaythrough
  HTML elements and attributes        Lowercase, concatenated                figcaption
  JSON keys                           Lowercase, underscore-delimited        short_name

Note that in particular, when an HTML attribute is reflected as a property, the attribute’s and property’s casings won’t necessarily match. For example, the HTML attribute ismap on img elements is reflected as the isMap property on HTMLImageElement.
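A minimal sketch of that reflection, using a plain object in place of a DOM element (the helper below is illustrative, not the real HTML reflection machinery):

```javascript
// Reflect a lowercase, concatenated content attribute ("ismap") as a
// camel-case IDL property ("isMap") on an object whose attributes are
// modeled as a Set of names.
function reflectBooleanAttribute(obj, propertyName, attributeName) {
  Object.defineProperty(obj, propertyName, {
    get() { return obj.attributes.has(attributeName); },
    set(value) {
      if (value) obj.attributes.add(attributeName);
      else obj.attributes.delete(attributeName);
    },
  });
}

const img = { attributes: new Set(["ismap"]) };
reflectBooleanAttribute(img, "isMap", "ismap");
```

Reading `img.isMap` consults the `ismap` attribute, and assigning to it updates the attribute, even though the two names are cased differently.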

The rules for JSON keys are meant to apply to specific JSON file formats sent over HTTP or stored on disk, and don’t apply to the general notion of JavaScript object keys.

Repeated initialisms are particularly non-uniform throughout the platform. Infamous historical examples that violate the above rules are XMLHttpRequest and HTMLHtmlElement. Don’t follow their example; instead always capitalize your initialisms, even if they are repeated.

12.6. Warning about dangerous features

Where possible, mark features that weaken the guarantees provided to developers by making their names start with "unsafe", so that the weakened guarantee is more noticeable.

For example, Content Security Policy (CSP) provides protection against certain types of content injection vulnerabilities. CSP also provides features that weaken this guarantee, such as the unsafe-inline keyword, which reduces CSP’s own protections by allowing inline scripts.

12.7. Other resources

Some useful advice on how to write specifications is available elsewhere:


This document consists of principles which have been collected by TAG members past and present during TAG design reviews. We are indebted to everyone who has requested a design review from us.

The TAG would like to thank Adrian Hope-Bailie, Alan Stearns, Aleksandar Totic, Alex Russell, Andreas Stöckel, Andrew Betts, Anne van Kesteren, Benjamin C. Wiley Sittler, Boris Zbarsky, Brian Kardell, Charles McCathieNevile, Chris Wilson, Dan Connolly, Daniel Ehrenberg, Daniel Murphy, Domenic Denicola, Eiji Kitamura, Eric Shepherd, Ethan Resnick, fantasai, François Daoust, Henri Sivonen, HE Shi-Jun, Ian Hickson, Irene Knapp, Jake Archibald, Jeffrey Yasskin, Jeremy Roman, Jirka Kosek, Kevin Marks, Lachlan Hunt, Léonie Watson, L. Le Meur, Lukasz Olejnik, Maciej Stachowiak, Marcos Cáceres, Mark Nottingham, Martin Thomson, Matt Giuca, Matt Wolenetz, Michael[tm] Smith, Mike West, Nick Doty, Nigel Megitt, Nik Thierry, Ojan Vafai, Olli Pettay, Pete Snyder, Philip Jägenstedt, Philip Taylor, Reilly Grant, Richard Ishida, Rick Byers, Ryan Sleevi, Sergey Konstantinov, Stefan Zager, Stephen Stewart, Steven Faulkner, Surma, Tab Atkins-Bittner, Tantek Çelik, Tobie Langel, Travis Leithead, and Yoav Weiss for their contributions to this & the HTML Design Principles document which preceded it.

Special thanks to Anne van Kesteren and Maciej Stachowiak, who edited the HTML Design Principles document.

If you contributed to this document but your name is not listed above, please let the editors know so they can correct this omission.