Multi-language support #17
@rohitpaulk Are you able to provide the initial logs up until the first one or two generated tests so we can investigate? For example:
@rohitpaulk There's also a
Just checked
I don't have the logs from when I ran this, but I did find a file called "run.log" in case that helps. Contents:
Also found this:

**Overview**
You are a code assistant that generates unit tests and adds them to an existing test file. First, carefully analyze the provided code. Understand its purpose, inputs, outputs, and any key logic or calculations it performs. Spend significant time considering all the different scenarios that need to be tested, including boundary values, invalid inputs, extreme conditions, and concurrency issues such as race conditions and deadlocks. Next, brainstorm a list of test cases you think will be necessary to fully validate the correctness of the code and achieve 100% code coverage. For each test case, provide a clear and concise comment explaining what is being tested and why it's important. After each individual test has been added, review all tests to ensure they cover the full range of scenarios, including how exceptions or errors are handled. For example, include tests that specifically trigger and assert the handling of ValueError or IOError to ensure the robustness of error handling.

**Source File**
Here is the source file that you will be writing tests against:

**Test File**
Here is the file that contains the test(s):

**Previous Iterations Failed Tests**
Below is a list of failed tests that you generated in previous iterations, if available. Very important: do not generate these same tests again:

**Code Coverage**
The following is the code coverage report. Use this to determine which tests to write, as you should only write tests that increase the overall coverage:

**Response**
Your response shall contain test functions and their respective comments only within triple-backtick code blocks. This means you must work with the existing imports and not provide any new imports in your response. Each test function's code block must be wrapped in its own separate triple backticks and should not include the language name. Ensure each test function has a unique name to avoid conflicts and enhance readability. A sample response from you in Python would look like this:
Notice how each test function is surrounded by ```.
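The actual sample that followed was elided here. As an illustrative reconstruction (an assumption, not the real prompt text), a response in the described format would contain functions like these, each wrapped in its own plain triple-backtick block:

```python
# Illustrative only: in the real response each function sits in its own
# triple-backtick block, with no language tag and no new imports.

def test_add_positive_numbers():
    # Verify basic addition of two positive integers
    assert 1 + 2 == 3

def test_add_handles_negatives():
    # Boundary case: addition involving negative values
    assert -1 + -2 == -3
```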
Awesome. That was extremely helpful. So first of all, it looks like we'll need an indent for your test cases, like we do for Python classes in
What would be most helpful (and probably easiest for you) would be to modify
I tried with Java and JaCoCo coverage, but it appears to be Python-only. Looking at `CoverageProcessor.py`, it appears Cobertura is the only `coverage_type`. To reproduce, you can run my docker image: `docker run --rm -it --name cover-agent -e OPENAI_API_KEY= -v ~/code/output:/mnt/output davidparry/cover-agent-ubuntu-aarch64:debug`. I love this idea though, and really want it to increase my Java code coverage on our projects.
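For reference, pulling uncovered line numbers out of a Cobertura-format report is a small amount of standard-library XML work. This is a rough sketch of the kind of parsing a coverage processor would need to do to decide which tests to generate; it is not `CoverageProcessor.py`'s actual code, and the function name and sample report are made up:

```python
import xml.etree.ElementTree as ET

def uncovered_lines(cobertura_xml: str) -> list[int]:
    # Cobertura reports record per-line hit counts; lines with zero
    # hits are the ones new tests should target.
    root = ET.fromstring(cobertura_xml)
    return [
        int(line.get("number"))
        for line in root.iter("line")
        if int(line.get("hits", "0")) == 0
    ]

sample = """
<coverage>
  <packages><package><classes><class filename="app.py">
    <lines>
      <line number="1" hits="3"/>
      <line number="2" hits="0"/>
      <line number="3" hits="0"/>
    </lines>
  </class></classes></package></packages>
</coverage>
"""
print(uncovered_lines(sample))  # → [2, 3]
```

Since JaCoCo's native XML uses a different schema, Java support would mean either converting JaCoCo output to Cobertura format or teaching the processor a second schema.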
Has this tool been verified for Jest unit tests written in TypeScript?
@rohitpaulk and others: I will work on the prompt and logic area tomorrow and will bring some improvements. Stay tuned.
So it doesn't support multiple languages yet?
I'm really interested in testing it with TypeScript and PHP.
This should bring significant improvements to general usage, and specifically for non-Python languages:
Any confirmation of it working with Ruby?
@jtoy et al, we finally got the Ruby example added to the repo. It's also part of our nightly regression testing, so we'll know right away if support for Ruby breaks. I'm going to close out this issue now since we've confirmed support for Ruby.
I was trying this on a Ruby codebase, and the suggested tests seemed to be Python tests. The README seems to mention that multi-language support is present.
Example of a generated test:
and here's what a test in the file currently looks like:
This is what my usage looks like: