
Refactor config and integrate tests #23

Merged
merged 16 commits on Jul 9, 2024

Conversation

frankroeder
Owner

No description provided.

@frankroeder frankroeder self-assigned this Jul 3, 2024
@eterps
Contributor

eterps commented Jul 4, 2024

@frankroeder, would it be okay with you if we moved the helper functions to a separate module file like config_utils.lua?

I think it would make things easier to test.
Currently I am testing changes by restarting nvim after every change and logging things, which is quite cumbersome.

A separate file like config_utils.lua would enable easy automated testing with plenary-test, which could look like this, for instance:

-- assuming config_utils.lua ends up on the runtime path; the exact require path is illustrative
local config_utils = require('config_utils')

describe('config_utils', function()
  describe('merge_providers', function()
    it('should merge default and user providers', function()
      local default_providers = {
        pplx = {
          api_key = '',
          endpoint = 'https://api.perplexity.ai/chat/completions',
          topic_prompt = 'default prompt',
          topic_model = 'llama-3-8b-instruct',
        },
        ollama = {
          endpoint = 'http://localhost:11434/api/chat',
          topic_prompt = 'Summarize the chat above in max 3 words',
          topic_model = 'mistral:latest',
        },
      }
      local user_providers = {
        pplx = { api_key = '123' },
        ollama = { endpoint = 'http://localhost:8000/api/chat' },
      }

      local result = config_utils.merge_providers(default_providers, user_providers)

      assert.are.same({
        pplx = {
          api_key = '123',
          endpoint = 'https://api.perplexity.ai/chat/completions',
          topic_prompt = 'default prompt',
          topic_model = 'llama-3-8b-instruct',
        },
        ollama = {
          endpoint = 'http://localhost:8000/api/chat',
          topic_prompt = 'Summarize the chat above in max 3 words',
          topic_model = 'mistral:latest',
        },
      }, result)
    end)
  end)
end)

This way we could capture the complexity in simple examples and make testing 'by hand' unnecessary.
It would also become clear if existing behaviour breaks at some point in the future.
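For reference, a `merge_providers` that would satisfy the test above could be sketched roughly as follows. Inside Neovim, `vim.tbl_deep_extend('force', ...)` would be the idiomatic building block; the plain-Lua version below runs standalone. This is only a sketch, not the actual implementation, and it assumes that only providers present in the defaults table are kept:

```lua
local config_utils = {}

-- Merge user provider overrides into the defaults. User values win over
-- default values; providers unknown to the defaults are ignored here
-- (an assumption of this sketch, not necessarily the desired behaviour).
function config_utils.merge_providers(default_providers, user_providers)
  local merged = {}
  for name, defaults in pairs(default_providers) do
    local provider = {}
    for key, value in pairs(defaults) do
      provider[key] = value
    end
    for key, value in pairs(user_providers[name] or {}) do
      provider[key] = value
    end
    merged[name] = provider
  end
  return merged
end
-- in config_utils.lua this table would be returned via `return config_utils`
```
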

@frankroeder frankroeder added the enhancement New feature or request label Jul 4, 2024
@frankroeder
Owner Author

@eterps I had already been thinking about testing, and you finally initiated it with this PR.

@frankroeder frankroeder changed the title Refactor config merge Jul 6, 2024
@frankroeder
Owner Author

@eterps , could you please double-check the additions I’ve made and confirm if everything is functioning as it should? I’ve been using this branch daily and haven’t encountered any problems, except for one case. If you transition from the previous version where all providers are enabled by default, this branch will generate a few errors when trying to access certain values stored in the state.json file. One possible solution could be to suggest deleting the state.json file.

For any further changes, it would be best to create a new issue or pull request. It would also be helpful if we could outline the next steps somewhere.

@eterps
Contributor

eterps commented Jul 7, 2024

@frankroeder I have seen your additions; they look good to me. I have also been using this branch daily without problems.

If you transition from the previous version where all providers are enabled by default, this branch will generate a few errors when trying to access certain values stored in the state.json file. One possible solution could be to suggest deleting the state.json file.

While deleting the state.json file might work as a workaround, it could also indicate an actual bug. It's possible that the errors also occur when someone stops using a certain provider in their user config. I can check tomorrow whether that is the case.
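Whatever the fix turns out to be, one defensive option (purely a sketch; the function and field names are hypothetical, not parrot.nvim's API) would be to prune state.json entries whose provider is no longer configured at the time the state is loaded, so stale values are never read:

```lua
-- Hypothetical sketch: drop persisted state for providers that are no
-- longer present in the merged provider config. This would cover both the
-- upgrade case and a provider being removed from the user config.
local function prune_state(state, providers)
  local pruned = {}
  for name, entry in pairs(state) do
    if providers[name] ~= nil then
      pruned[name] = entry
    end
  end
  return pruned
end
```
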

For any further changes, it would be best to create a new issue or pull request.

I agree.
So far the process has been a bit messy for me, but I learned a great deal about Lua and also about nvim plugin development.

It would also be helpful if we could outline the next steps somewhere.

Are you referring to further cleanup or the planning of new features to add?

With regard to new features I think we can get inspiration from:

Additionally, I recently learned about rocks.nvim. If this approach becomes successful, it might be interesting to publish parrot.nvim on LuaRocks because then it would be the first Neovim LLM integration plugin that can be installed from rocks.nvim.

- add an appropriate plenary test
- fix state_spec test state file names
@eterps
Contributor

eterps commented Jul 8, 2024

With respect to my earlier comment:

Are you referring to further cleanup or the planning of new features to add?

I think we still need to fix the system prompt issue with Anthropic (i.e. giving the provider configurations the same structure for end-users).

I just discovered a bug: if you activate a custom command using the Anthropic provider and then switch back to a general chat window, the system prompt keeps the value from the (previously activated) custom command, messing up the general chat system prompt.

@eterps
Contributor

eterps commented Jul 8, 2024

Another useful feature would be the ability to customise the assistant prompt (i.e. the agent prefix/suffix) in the responses of custom commands (for example, my 'proofreader' command could have a different emoji, like 👓 instead of 🦜).
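From the user's side, such a feature might look something like the configuration fragment below. To be clear, this is only an illustration of the idea: the `agent_prefix` key does not exist, and the `hooks` shape shown here is assumed rather than taken from the current docs.

```lua
-- Hypothetical user config sketch for a per-command agent prefix.
require('parrot').setup {
  hooks = {
    Proofreader = {
      agent_prefix = '👓', -- would replace the default 🦜 in responses
      -- ... rest of the custom command definition
    },
  },
}
```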
