Inspired by SWE-grep, I've started a repo to educate myself on it: https://github.com/aperoc/op-grep

I've drafted an architecture with the following main steps:
1. Collect action policies (sequences of grep/glob/read calls), either from usage logs or from open datasets (see the sketch after this list)
2. Optimize the policies by removing redundant actions and parallelizing independent ones
3. Train a model on the optimized action policies
4. Release the model as a single file, exposed as an MCP tool (sketch below)
(Refer to the repo for a visual diagram of the architecture.)
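To make steps 1 and 2 concrete, here's a rough Python sketch of what a collected policy and a redundancy-pruning pass could look like. The `Action` schema and field names are hypothetical, not the repo's actual format:

```python
from dataclasses import dataclass

@dataclass
class Action:
    tool: str   # "grep", "glob", or "read"
    args: dict  # tool arguments, e.g. {"pattern": "def main", "path": "src/"}

def dedupe(actions: list[Action]) -> list[Action]:
    """Step 2 (sketch): drop exact-duplicate actions while preserving order."""
    seen: set[tuple] = set()
    kept = []
    for a in actions:
        key = (a.tool, tuple(sorted(a.args.items())))
        if key not in seen:
            seen.add(key)
            kept.append(a)
    return kept

policy = [
    Action("grep", {"pattern": "def main", "path": "src/"}),
    Action("read", {"path": "src/app.py"}),
    Action("grep", {"pattern": "def main", "path": "src/"}),  # redundant repeat
]
print(dedupe(policy))  # keeps the first grep and the read
```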
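For step 4, the single-file release could be wrapped with the official MCP Python SDK roughly like this. The tool name `code_search` and its signature are made up for illustration; a real server would call the trained model:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("op-grep")

@mcp.tool()
def code_search(query: str, root: str = ".") -> str:
    """Run the trained retrieval policy over a repo and return matches."""
    # Placeholder: invoke the trained model here.
    return f"results for {query!r} under {root}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```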
I've just released the base model and added `openai_forwarder.py` to start collecting action policies.
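For context, a logging forwarder along these lines could sit in front of an OpenAI-compatible chat endpoint and append each grep/glob/read tool call to a JSONL file. This is just a sketch of the idea (the endpoint, log path, and function are assumptions), not the actual `openai_forwarder.py`:

```python
import json
import time

import requests  # third-party; pip install requests

UPSTREAM = "https://api.openai.com/v1/chat/completions"  # assumed endpoint
LOG_PATH = "action_policies.jsonl"                       # hypothetical log file

def forward(payload: dict, api_key: str) -> dict:
    """Proxy one chat-completion request and log any tool calls it produces."""
    resp = requests.post(
        UPSTREAM,
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    body = resp.json()
    # Record each emitted tool call (grep/glob/read) as one JSON line.
    for choice in body.get("choices", []):
        for call in choice.get("message", {}).get("tool_calls", []) or []:
            with open(LOG_PATH, "a") as f:
                f.write(json.dumps({
                    "ts": time.time(),
                    "tool": call["function"]["name"],
                    "args": json.loads(call["function"]["arguments"]),
                }) + "\n")
    return body
```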
Looking for more eyes and contributors to make this a reality, thanks!