
How to create warning assertions in Playwright TypeScript tests for non-critical failures?

I'm working on Playwright tests using TypeScript, and I need a way to handle assertions that throw warnings instead of failing the tests outright. Here's the scenario:

  1. Assertion Context: I want to perform assertions on CSS properties, DOM attributes (and, of course, other values that tend to be flaky), and other non-critical details in my tests.

  2. Current Issue: When using traditional hard or soft assertions, failures cause the tests to fail immediately (hard) or after completion (soft).

  3. Desired Behavior: Instead of failing the tests, I'd like these assertions to throw warnings. This means that if the assertion fails, the test should ideally continue, and the test result should be marked as green/yellow or with a warning indicator, not red.

  4. Technology Stack: I'm using Playwright with TypeScript for my test automation.

  5. Example Use Case: An example of what I'm looking for is when checking for a CSS property or a DOM attribute value; if it doesn't match the expected value, I want to log a warning but not fail the test.

I've looked into soft assertions and custom error handling, but I haven't found a straightforward solution yet. Roughly, the sketch below is the kind of thing I'm imagining.
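Something along these lines is what I have in mind (just a rough sketch, not working code I'm happy with; the `softWarn` helper and the `warning` annotation type are names I made up, not an existing API):

```typescript
import { test, expect, type TestInfo } from '@playwright/test';

// Hypothetical helper: run an assertion, but downgrade a failure to a warning
// so the test keeps running and is not marked red.
async function softWarn(testInfo: TestInfo, assertion: () => Promise<void>) {
  try {
    await assertion();
  } catch (error) {
    // Record the failure as an annotation instead of rethrowing.
    testInfo.annotations.push({
      type: 'warning',
      description: error instanceof Error ? error.message : String(error),
    });
    console.warn('Non-critical assertion failed:', error);
  }
}

test('example with a warning-only CSS check', async ({ page }, testInfo) => {
  await page.goto('https://example.com');

  // Critical assertion: still fails the test as usual.
  await expect(page.locator('h1')).toBeVisible();

  // Non-critical assertion: only logged as a warning if it doesn't match.
  await softWarn(testInfo, () =>
    expect(page.locator('h1')).toHaveCSS('color', 'rgb(0, 0, 0)')
  );
});
```

I'm not sure whether annotations are the right way to surface such warnings in the report, which is part of what I'm asking.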

This is crucial because I have 1,500 tests running in a CI/CD pipeline. When a test fails due to flaky CSS or attribute values, it creates chaos. Rather than deleting these tests, I want to keep them with minimal assertions.

Any guidance or code examples using Playwright and TypeScript would be greatly appreciated!

Is it possible?

  • Please provide more concrete info, with specific code and configuration examples of what you are currently using. – Commented Jul 3 at 3:57

1 Answer


Don't do it.

If something is so trivial that its failure should not affect your CI/CD pipeline, then why is it there in the first place?

On the other hand, there are ways to handle flakiness at multiple levels, depending on the individual scenario: one is a high-level general strategy, the other is tactical fixes for specific cases based on the actual scenario. I have personally been running more than 5k tests regularly with consistent results.
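On the tactical side, a minimal sketch of what that usually looks like, assuming the flakiness is timing-related (the URL, selectors, attribute name, and timeouts below are made up): lean on Playwright's auto-retrying assertions, and wrap multi-step checks in `expect(...).toPass()` instead of asserting a one-shot snapshot of the DOM.

```typescript
import { test, expect } from '@playwright/test';

test('tactical fixes for timing-related flakiness', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  // Web-first assertions retry until the timeout, which absorbs most of the
  // rendering delays behind "flaky" CSS or attribute checks.
  await expect(page.locator('.banner')).toHaveCSS('opacity', '1', { timeout: 10_000 });

  // For values that settle only after several async updates, wrap the whole
  // check so Playwright retries it as a unit until it passes or times out.
  await expect(async () => {
    const state = await page.locator('#widget').getAttribute('data-state');
    expect(state).toBe('ready');
  }).toPass({ timeout: 15_000 });
});
```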

General Advice: If something is so trivial that it should not affect the pipeline in any case, remove those tests/steps from the pipeline and run them as a separate regression suite in long nightly regression runs. You can do that using tags or skip options without actually removing them from your suites, as sketched below.
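A sketch of the tag-based approach (the `@nightly` tag, selector, and colour value are only examples): tag the non-critical spec in its title, exclude it from the default pipeline run via `grepInvert` in `playwright.config.ts`, and let a separate nightly job pick it up with `npx playwright test --grep @nightly`.

```typescript
// example.spec.ts – tag the non-critical check in the test title
import { test, expect } from '@playwright/test';

test('hero banner uses the brand colour @nightly', async ({ page }) => {
  await page.goto('https://example.com');
  await expect(page.locator('.hero')).toHaveCSS('background-color', 'rgb(0, 87, 183)');
});
```

```typescript
// playwright.config.ts – exclude @nightly-tagged tests from the default CI run
import { defineConfig } from '@playwright/test';

export default defineConfig({
  grepInvert: /@nightly/,
});
```

Newer Playwright versions also accept tags via a details object on `test()`, but title-based tags are the simplest to filter with plain `--grep` on any version.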
