I said I would be using OpenCode again and today I really gave it a spin. I am impressed!
Check out the video below, where I try out the official Chrome DevTools MCP and get the AI to click around on the website to make sure the filters all work (OK, I know they work – it’s just a demo!)
The DeepSeek credits used in the making of this video (and most of the codebase) were paid for by Patreon subscribers. Thanks – especially to Flavio, my biggest supporter. Check out his amazing LED hula hoops and more here: https://www.instagram.com/hoop_roots_lovers/
For website updates, it is helpful to have the AI automate some things, to check that the updates work IRL. Recently Google launched their official Chrome DevTools MCP. I heard it works great with the Claude CLI. Turns out it also works great with the Claude CLI hacked to use the DeepSeek API!
Just run:
claude mcp add chrome-devtools -- npx chrome-devtools-mcp@latest
Now you can run the claude-code-router in your coding project directory:
ccr code
and then type /mcp to check – the DevTools server should be listed.
Running
You can do this in several ways, but I like to run the browser myself and get the MCP to connect to the already running session. On my Arch Linux laptop I do this:
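Something along these lines – the flags are standard Chrome remote-debugging options, but the binary name and the temp directory are just my setup, so adjust to taste:
# expose the DevTools protocol on the conventional port, using a throwaway profile
google-chrome-stable --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-mcp-profile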
You need to specify the default debug port and a directory for your session (if you don’t use a temporary directory, the AI might have access to your real Chrome session, logins, etc. Maybe you want that?). Chrome will think you are starting it for the first time – at least the first time you do this – so click through the prompts.
After Chrome is running, you can ask the Claude CLI (with the DeepSeek API back-end) to do stuff in your browser. Just ask it to use chrome-devtools-mcp and it will.
I am hoping to use this a lot: make changes, get the AI to test that everything works, iterate…
Don’t do this
ccr code --dangerously-skip-permissions
This gives the AI full control of your computer, with no need to ask your permission before doing whatever it wants – including finishing your project while you watch anime! (Or deleting everything on your hard drive…)
OK, I didn’t do it all by myself – I just ran the reverse-engineered proxy called claude-code-router, which is already set up for this.
On Arch Linux it’s as easy as:
yay claude-code
yay claude-code-router
# and then finally:
ccr code
“ccr” runs the Claude CLI behind a proxy and routes all requests to your LLM service of choice (even a local LLM, if you have a powerful enough computer). When it runs for the first time, you get to add your API details (in my case, the DeepSeek API).
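Those details end up in a config file (in my case under ~/.claude-code-router/). Mine looks roughly like this – I’m writing the field names from memory, so check the claude-code-router README for the exact schema, which changes between versions:
{
  "Providers": [
    {
      "name": "deepseek",
      "api_base_url": "https://api.deepseek.com/chat/completions",
      "api_key": "sk-...",
      "models": ["deepseek-chat", "deepseek-reasoner"]
    }
  ],
  "Router": {
    "default": "deepseek,deepseek-chat"
  }
}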
You can also install it in other ways – like with npm/npx.
Many people have been telling me how good Claude Code CLI is. Now I can try it out without paying $$$ – and it works! Even paying Claude Code users are doing this, because they run out of credits (Claude is expensive) – and DeepSeek is just as good?
I don’t even have an Anthropic account!
I am still going to be using Aider as my main coding assistant, because I prefer the control it offers. But for some things, having options like MCP integration and a proper vibe-coding flow is important (like my new take on “Space Invaders” done with Pygame, coming soon!)
PS: Claude Code and Aider share many similarities when it comes to syntax. Commands start with “/”, files are addressed with “@”, you can “/clear” to remove chat history from context… I wonder which one came first?
For the money, of course! The brief is to “Use AI to do the challenge”, so it’s definitely up my alley. Also, they will have some industry experts on hand to learn from, so I hope to learn something too.
“Vibe Coding” is the new thing – I consider myself a proponent of it. Coming from my “Copy Paste from StackOverflow” style it was a natural fit.
Having said that, I still (mostly) try to read the code. Sometimes you have to – especially in PlatformIO. AI simply doesn’t work as well for embedded code – probably because a lot of the good stuff is proprietary and not available for LLM scraping. Less code means less effective models. That ends up being a bit of a hassle when trying to do the code – test – fix loop. For a start, it’s actually code – upload – monitor – test – fix if you are working with microcontrollers (never mind ones running web servers, or interacting with web servers and apps!).
Simply put, I waste a lot of time waiting for the AI to finish coding so I can run a command or press a button to upload and check something.
Introducing Cline
Cline is an open source VSCode AI extension that works with open source models (you can also pay for a better experience). It can edit multiple files at once, run commands and interface with MCP servers! That’s all I needed for my test.
In the video below I show how you can easily get AI to do the coding loop for you – automating the boring stuff, as well as writing the code. For this test I actually went with the FREE version of DeepSeek 8B, which apparently can run on your own GPU (I don’t have one, so I’m using OpenRouter here).
This is a pattern that I plan on building on in the future. I show how to add a file with commands for Cline to run (upload, monitor) and how I bring it all together with one big prompt so that Cline actually runs everything on its own!
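For PlatformIO the commands in question are just the standard CLI ones – a sketch of what such a file might contain (the baud rate is an example; use whatever your firmware prints at):
# build and flash the firmware to the connected board
pio run --target upload
# open the serial monitor to watch the output (Ctrl+C to exit)
pio device monitor --baud 115200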
Future plans
I have also done something similar with the Puppeteer MCP to automate browser testing: I run an embedded server on an ESP32, with automated embedded coding and monitoring, and Puppeteer does the button clicking after code changes to check that everything works. I should make a video about that.
I still have to make a big instruction file for Cline – with all of the ideas here inside. Then hopefully I can just tell it to make me a project in full and it will do it (beware, Cline seems to suck up tokens really fast!)
I do want to add Aider as an MCP for the coding part, though. Aider still has a lot of advantages, especially its Git integration and configurability (maybe I’m just used to it). Hopefully the MCP option can integrate seamlessly with normal use (same config files and history?) – I haven’t looked yet.
Also, I’m not very good at this part, but debugging is important – sometimes it’s essential. I would love to automate the debugging part, using AI. Maybe a PlatformIO debugging MCP server? We don’t have one yet…
MCP is the new AI buzzword. Even though I am fairly involved in AI-enhanced programming for my own projects, it escaped my attention until about a week ago. It’s time to have a look.
What I am using
Since I don’t want to pay any money (MCP can drain your tokens pretty quickly!) I tried setting this up using local models first – but they are very slow on my laptop, so I went with DeepSeek Chat, which is cheap enough for this test.
DeepSeek – get your API key (or sign up for OpenRouter and use the free, rate-limited one!)
Example using ChatMCP
I will be using this simple calculator MCP as an example: https://github.com/githejie/mcp-server-calculator. I just happened to have qwen2.5-coder:1.5b already installed in Ollama, so that’s the one I am using (it supports tools) – actually, I ended up using DeepSeek Chat, since Ollama is a bit slow on my laptop (it does work, though).
In ChatMCP we add the tool like so:
After configuring my DeepSeek API key in the settings (bottom right), I choose it from the menu.
DeepSeek Chat works fine (and it’s cheaper). I also got qwen2.5-coder to call tools; it’s a bit slow on my laptop, however (it requires Ollama to be running in the background, and I don’t have a GPU).
You need to enable the tool:
Then just make the request:
As you can see, the AI used the calculator tool (spanner icon) to answer the request! There are so many tools available, from web scraping to controlling your Android phone! I even made my own MCP tool to turn on an LED.
I just took a photo with my Android phone by telling the AI to do it for me (using phone-mcp)! What will your MCP enabled AI assistant be able to do?
NOTES
You can add MCP tools to your coding assistant now (e.g. Cursor). I am using Cline, which is a VSCode extension and allows for DeepSeek API use (I already pay for this). The configuration looks like this (same format for “MCP Client for Ollama”):
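Roughly like this, anyway – I am reconstructing the snippet from memory, so treat it as a sketch of the standard mcpServers format rather than the exact file:
{
  "mcpServers": {
    "calculator": {
      "command": "uvx",
      "args": ["mcp-server-calculator"]
    }
  }
}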
As you can see, uvx cuts a long configuration story short – otherwise you have to specify the path to your virtual environment.
The most common MCP servers are Node or Python based. I am using Python, as it’s my preferred language. Node is pretty similar – just use npx instead of uvx.
Next Steps
Next up: converting all of my code to work with MCP. Seriously – if you aren’t MCP compatible, you need to work on it; I think this will be very important in the future. Check out FastMCP for a Python implementation.
Tech used: FastMCP (Python), Arduino Uno (Serial) and Cline for VSCode (MCP client), with DeepSeek Reasoner as the LLM – massive overkill just to turn on an LED!
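My actual server does a bit more, but a minimal sketch of the idea looks something like this – the serial port, baud rate and one-byte protocol are assumptions for illustration, and the Arduino sketch just has to switch the LED on when it reads “1”:
# led_server.py – minimal MCP server exposing one tool that talks to the Arduino over serial
# (assumes: pip install fastmcp pyserial)
import serial
from fastmcp import FastMCP

mcp = FastMCP("led-control")

@mcp.tool()
def turn_on_led() -> str:
    """Turn on the LED connected to the Arduino Uno."""
    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:  # port name is an example
        port.write(b"1")
    return "LED on"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which is what MCP clients like Cline expect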
This project was built using Aider – pair programming in your terminal. https://aider.chat/
Check out the demo video where I tell the AI to turn on my LED – all done from Cline agent inside VSCode!
I have been saying for some time now that I wanted to train an image generation model on a ton of POV Poi images to make one that I can use for Magic Poi image generation.
“I want a Celtic knot image – blue with a black background” should ideally generate something usable on the poi. Up until recently, tools like DALL-E, Stable Diffusion and Midjourney simply weren’t up to the task. The type of image that works on my poi (dark background, primary colours, pixel perfect, small size) just wasn’t an option – most pics end up blurry and just look worse than making it yourself in Photoshop*. Don’t even get me started on how annoying the ChatGPT image generator is! “Image needs to have no white, only primary colours” (white image generated) “Zero White PIXELS!!!!” (Still blinds me with glaring white background)
I recently went looking again and actually found some! There is a service aptly named “thereisanaiforthat.com” with a ton of free services listed. I have linked the best ones below.
I am not sure whether you need to sign in for these to actually work, but all of the ones I linked are free!
I still have that trained image generator for poi pics on my list of things to do, but it seems I may have a better starting point to work from soon, if these projects are anything to go by. The field is advancing rapidly!
*I actually use Krita, open source image editing software with Pixel brush. It’s really good.
I use the open source and very capable Aider for AI code generation. Anyone who has tried AI code generation has heard of “context” – the information you send over to the LLM so that it knows what the code you want to modify looks like.
Anyone using these tools eventually comes up against the dreaded token limit at some point – some sooner rather than later! The LLM can only keep so much context in memory at one time. This is annoying if you are trying to update a legacy codebase with multiple files and countless lines of code – who wants to try to pare this down to only the relevant parts, copy and paste into the window?
In Aider, adding files to context is as simple as /add file.txt, and to remove files you just do /drop file.txt. If file.txt is 100 000 lines long, you have to copy and paste just the parts you want (up to 60 000 tokens) and check how many tokens you have left by running /tokens. But that takes too long! It defeats the purpose of getting AI to do the reading and modifying for you (remember to review the diffs after!)
Meet LLMAP
llmap is “context extraction at scale”. The tool can search and summarise vast amounts of code and output only the relevant parts, which you can then add to your AI coding tool’s context – leaving out all the irrelevant parts (usually most of it!). An example from my own use case: I had an issue with a looping API request on an ESP32 in my Magic Poi project – I had been concentrating on the battery monitor feature, and the new feature broke some other functionality. So I did a diff with the last known working branch:
git diff origin/Battery_Monitor_Main_Merge > context_git_diff.txt
That branch is from way back, though, so the diff was too large for the context window (if you include all of the code I wanted to update), which meant I had to run it through llmap with a query:
echo context_git_diff.txt | llmap "list the changes made that affect the control flow of the application." > llmap_diff_context.txt
Now my new file llmap_diff_context.txt, containing only the relevant information, could be added to the context – using /read-only, since it isn’t tracked in Git and I don’t want to edit it. I used /architect mode* to find out which change had caused the loop. Turns out it was a simple misplaced line of code, and everything worked again!
*For me, architect mode is configured to use DeepSeek R1 for thinking and V3 for editing – cheap and effective.
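The same pattern scales to a whole source tree – llmap takes file paths on standard input and your question as an argument (the directory, file pattern and question here are just examples):
# gather candidate files and let llmap extract only the relevant context
find src -name '*.cpp' | llmap "which functions are involved in the WiFi reconnect logic?" > wifi_context.txt
# then add wifi_context.txt to the Aider chat with /read-only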
LLMAP is easy to use – just install it via pip. By the way, if you don’t have a DeepSeek API key: the update I recently submitted to add OpenRouter support has been merged but not yet published, so you will have to download and install llmap manually to use OpenRouter – but it works.
Full credit to Jonathan Ellis for creating and sharing this great tool. You gotta love Open Source!