When a test run fails, is it a problem with the code under test or with the test case itself? Something to consider.
Excellent question. We don't differentiate right now because the whole purpose of Cover Agent (as of now) is to provide working tests for regressions.
We'd love your feedback on how to improve Cover Agent, and on ways to tell the user that their "code is broken." Any suggestions?