Developing a Python Script to Streamline LLM-Assisted Code Editing

I've been working with LLMs for some time, and I've found that pasting complete files from my project (the ones I want the LLM to fix or edit) almost always works wonders. The ideal solution to this problem would be that, when I want to do a refactor or implement a new feature, the LLM extracts the files (or the parts of files) relevant to the problem on its own and then works with that context. I haven't found a way to do that yet (or at least not one that works for me), so I think there is still room for a tool like this.

I've tried several versions over the last couple of months (it's always a work in progress), but I'm finally happy with this one. I built it mainly by tinkering with o1-mini; I just manually added the patterns I wanted to ignore.

The script is here; it's a simple Python script that uses fzf to interactively select files and then pbcopy to copy the content to the clipboard.
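
For a sense of how the pieces fit together, here is a minimal sketch of that idea. This is not the actual script: the IGNORE_PATTERNS set, the file labels, and the overall structure are illustrative assumptions.

    #!/usr/bin/env python3
    """Minimal sketch: pick files with fzf, copy their contents with pbcopy."""
    import os
    import subprocess
    import sys

    # Hypothetical ignore list; the real script's patterns may differ.
    IGNORE_PATTERNS = {".git", "node_modules", "__pycache__", ".venv"}

    def list_files(root="."):
        for dirpath, dirnames, filenames in os.walk(root):
            # Prune ignored directories in place so os.walk skips them.
            dirnames[:] = [d for d in dirnames if d not in IGNORE_PATTERNS]
            for name in filenames:
                yield os.path.relpath(os.path.join(dirpath, name), root)

    def main():
        # fzf reads candidates from stdin; --multi allows Tab-selecting several.
        fzf = subprocess.run(["fzf", "--multi"],
                             input="\n".join(list_files()),
                             capture_output=True, text=True)
        if fzf.returncode != 0:
            sys.exit(fzf.returncode)  # selection was cancelled
        chunks = []
        for path in fzf.stdout.splitlines():
            with open(path, encoding="utf-8", errors="replace") as f:
                # Label each file so the LLM knows where the content came from.
                chunks.append(f"--- {path} ---\n{f.read()}")
        # pbcopy is macOS-only; on Linux, xclip or wl-copy would work instead.
        subprocess.run(["pbcopy"], input="\n\n".join(chunks), text=True)

    if __name__ == "__main__":
        main()

In fzf's multi-select mode, Tab marks files and Enter confirms; the selected contents end up on the clipboard, ready to paste into a chat.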

Here is the demo:

Note about the demo:

  • I created a function in my .zshrc:
    gen_context() {
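        # "$@" forwards any arguments on to the script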
        python3 ~/.scripts/gen_context.py "$@"
    }
    

How I Use This

In practice, what I do with a large codebase is first ask Copilot to implement something or give me ideas for solving a problem, using "send to workspace" (Command + Enter). Then, when I want to iterate on the solution or on a specific part of the code, I use this script to gather the context I want to work with.

Why not use Copilot for everything?
I found that I get better results using my own prompts and having more control over the conversation with the LLM.

For example:

  • I run the same prompts several times to get different solutions.
  • Sometimes I ask something and notice that the LLM isn't understanding the problem correctly, so I remove or edit some previous messages.
  • Sometimes I want to work with only a specific part of the code: if you give everything to the LLM, the answer will be biased toward what you provided and won't always be the best solution. Limiting what the LLM can "see" is beneficial in some cases.

Final Thoughts

This is the way I've found to work with LLMs more efficiently. There is still a lot of room for improvement, and I'm always looking for new ideas to refine my workflow. In the future, I expect more powerful tools will make this kind of thing easier: instead of spending so much time on the code itself, we'll spend more time on how to connect things and on developing the product as a whole.