Agentic Embedded Development

“Vibe Coding” is the new thing – I consider myself a proponent of it. Coming from my “Copy Paste from StackOverflow” style, it was a natural fit.

Having said that, I still (mostly) try to read the code. Sometimes you have to – especially in PlatformIO. AI simply doesn’t work as well for embedded code – probably because a lot of the good stuff is proprietary and not available for LLM scraping, and less training data means less effective models. That ends up being a bit of a hassle in the code – test – fix loop. For a start, with microcontrollers it’s actually code – upload – monitor – test – fix (never mind boards running web servers, or interacting with web servers and apps!).

Simply put, I waste a lot of time waiting for the AI to finish coding so I can run a command or press a button to upload and check something.

Introducing Cline

Cline is an open source VSCode AI extension that works with open source models (you can also pay for a better experience). It can edit multiple files at once, run commands and interface with MCP servers! That’s all I needed for my test.

In the video below I show how you can easily get AI to do the coding loop for you – automating the boring stuff, as well as writing the code. For this test I actually went with the FREE version of DeepSeek 8B, which apparently can run on your own GPU (I don’t have one, so I’m using OpenRouter here).

This is a pattern that I plan on building on in the future. I show how to add a file with commands for Cline to run (upload, monitor) and how I bring it all together with one big prompt so that Cline actually runs everything on its own!
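For reference, the commands file can be as simple as a few lines of plain text that the big prompt tells Cline to read – something like this (the filename and wording here are mine; the pio commands are just the standard PlatformIO CLI):

commands.md:
- To upload the firmware, run: pio run --target upload
- To check the result, run the serial monitor for ~15 seconds and read the log: pio device monitor --baud 115200
- If the monitor shows errors, fix the code and repeat the upload/monitor cycle.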

Future plans

I have also done something similar with the puppeteer MCP to automate browser testing. I run an embedded server on an ESP32 with automated embedded coding and monitoring – and puppeteer does the button clicking after code changes, to check that everything works. I should make a video about that.

I still have to make a big instruction file for Cline – with all of the ideas here inside. Then hopefully I can just tell it to make me a full project and it will do it (beware, Cline seems to suck up tokens really fast!).

I do want to add Aider as an MCP for the coding part, though. Aider still has a lot of advantages, especially its git integration and configurability (maybe I’m just used to it). Hopefully the MCP option can integrate seamlessly with normal use (same config files and history?) – I haven’t looked yet.

Also, I’m not very good at this part, but debugging is important – sometimes it’s essential. I would love to automate debugging using AI. Maybe a PlatformIO debugging MCP server? We don’t have one yet…
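To make the idea concrete, here is a hypothetical sketch of what such a server could start out as – a FastMCP tool that captures serial output so the LLM can read crash logs. To be clear, nothing like this exists yet; the tool name and behaviour are my own invention:

# hypothetical sketch – a PlatformIO debugging MCP server does not exist yet
import subprocess
from fastmcp import FastMCP

mcp = FastMCP("platformio-debug")

@mcp.tool()
def capture_monitor(seconds: int = 10, baud: int = 115200) -> str:
    """Capture serial monitor output for a few seconds so the LLM can inspect crash logs."""
    try:
        result = subprocess.run(
            ["pio", "device", "monitor", "--baud", str(baud)],
            capture_output=True, text=True, timeout=seconds,
        )
        return result.stdout
    except subprocess.TimeoutExpired as e:
        # the monitor never exits on its own, so hitting the timeout is the normal path
        out = e.stdout or b""
        return out.decode(errors="replace") if isinstance(out, bytes) else out

if __name__ == "__main__":
    mcp.run()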

Playing with MCP enabled Chatbots

MCP is the new AI buzzword. Even though I’m fairly involved in AI-enhanced programming for my own projects, it escaped my attention until about a week ago. It’s time to have a look.

What I am using

Since I don’t want to pay any money (MCP can drain your tokens pretty quickly!) I tried setting this up using local models first – but they are very slow on my laptop, so for this test I went with DeepSeek Chat, which is cheap.

Essential Programs

  1. Ollama – run LLMs on your own computer
  2. MCP Client for Ollama – lets your local models connect to MCP servers, with everything configured and controlled from the command line, OR:
  3. ChatMCP – a cross-platform GUI program for chatting with MCP-enhanced LLMs. Configure any LLM, from API (DeepSeek, Claude, OpenAI) to Ollama.
  4. MCP servers – there are literally thousands of these already! Some lists I found:
    https://github.com/modelcontextprotocol/servers
    https://glama.ai/mcp/servers
  5. DeepSeek – get your API key (or sign up for OpenRouter and use the free rate-limited one!)


Example using ChatMCP

I will be using this simple calculator MCP as an example:
https://github.com/githejie/mcp-server-calculator
I just happened to have qwen2.5-coder:1.5b already installed in Ollama, and it supports tools, so I tried that first. In the end I actually used DeepSeek Chat – Ollama is a bit slow on my laptop (it does work, though).

In ChatMCP we add the tool like so:
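(In JSON terms the entry is the same standard mcpServers block you will see in the full config in the NOTES section below:)

"calculator": {
  "command": "uvx",
  "args": ["mcp-server-calculator"]
}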

After configuring my DeepSeek API key in the settings (bottom right) I choose it from the menu.

DeepSeek Chat works fine (and it’s cheaper). I also got qwen2.5-coder to call tools, though it’s a bit slow on my laptop (it requires Ollama running in the background, and I don’t have a GPU).

You need to enable the tool:

Then just make the request:

As you can see, the AI used the calculator tool (spanner icon) to answer the request! There are so many tools available, from web scraping to controlling your Android phone! I even made my own MCP tool to turn on an LED.

I just took a photo with my Android phone by telling the AI to do it for me (using phone-mcp)! What will your MCP enabled AI assistant be able to do?

NOTES

You can add MCP tools to your coding assistant now (e.g. Cursor). I am using Cline, a VSCode extension which allows for DeepSeek API use (I already pay for this). The configuration looks like this (same format for “MCP Client for Ollama”):

{
  "mcpServers": {
    "hello-world-server": {
      "disabled": false,
      "timeout": 60,
      "command": "/run/media/tom/9109f38b-6b5f-4e3d-a26f-dd920ac0edb6/Manjaro-Home-Backup/3717d0b5-ba54-4c0a-8e8d-407af5c801bd/@home/tom/Documents/PROGRAMMING/Python/mcp_servers/hello_world/.venv/bin/python",
      "args": [
        "-u",
        "/run/media/tom/9109f38b-6b5f-4e3d-a26f-dd920ac0edb6/Manjaro-Home-Backup/3717d0b5-ba54-4c0a-8e8d-407af5c801bd/@home/tom/Documents/PROGRAMMING/Python/mcp_servers/hello_world/server_mcp.py"
      ],
      "env": {
        "PYTHONUNBUFFERED": "1"
      },
      "transportType": "stdio"
    },
    "blink-led-server": {
      "disabled": false,
      "timeout": 60,
      "command": "/run/media/tom/9109f38b-6b5f-4e3d-a26f-dd920ac0edb6/Manjaro-Home-Backup/3717d0b5-ba54-4c0a-8e8d-407af5c801bd/@home/tom/Documents/PROGRAMMING/Python/mcp_servers/mcp_duino/.venv/bin/python",
      "args": [
        "/run/media/tom/9109f38b-6b5f-4e3d-a26f-dd920ac0edb6/Manjaro-Home-Backup/3717d0b5-ba54-4c0a-8e8d-407af5c801bd/@home/tom/Documents/PROGRAMMING/Python/mcp_servers/mcp_duino/server_mcp.py"
      ],
      "env": {},
      "transportType": "stdio"
    },
    "github.com/modelcontextprotocol/servers/tree/main/src/puppeteer": {
      "disabled": false,
      "timeout": 60,
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--init",
        "-e", "DOCKER_CONTAINER=true",
        "-e", "DISPLAY=$DISPLAY",
        "-v", "/tmp/.X11-unix:/tmp/.X11-unix:rw",
        "--security-opt", "seccomp=unconfined",
        "mcp/puppeteer",
        "--disable-web-security",
        "--no-sandbox",
        "--disable-dev-shm-usage"
      ],
      "env": {},
      "transportType": "stdio"
    },
    "phone-mcp": {
      "command": "uvx",
      "args": [
        "phone-mcp"
      ]
    },
    "calculator": {
      "command": "uvx",
      "args": [
        "mcp-server-calculator"
      ]
    }
  }
}

As you can see, uvx saves a lot of configuration here – otherwise you have to specify the full path to your virtual environment’s Python and to the server script.

The most common MCP servers are Node or Python based. I am using Python as it’s my preferred language. Node is pretty similar – just use npx instead of uvx.
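For example, the puppeteer server that I run via Docker in the config above can also be launched with npx (the package name comes from the official modelcontextprotocol/servers repo):

"puppeteer": {
  "command": "npx",
  "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
}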

Next Steps

Next up: converting all of my code to work with MCP. Seriously – if you aren’t MCP compatible, you need to work on it; I think this will be very important in the future. Check out FastMCP for a Python implementation.
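To give you an idea of how little code it takes, a minimal FastMCP server looks roughly like this (a sketch following the quick-start pattern from the FastMCP docs):

from fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default – exactly what Cline and ChatMCP expect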

FastMCP LED Control Server

I made a Python program that uses the MCP protocol to interface between an AI and an LED.

Source Code: https://github.com/tomjuggler/BlinkMCPServer – includes Arduino sketch and MCP python server with example Cline json config.

Tech used: FastMCP (Python), an Arduino Uno (Serial) and Cline for VSCode (MCP client), with DeepSeek Reasoner as the LLM – massive overkill just to turn on an LED!
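The heart of the server is just a FastMCP tool writing to the serial port. A rough sketch (not the exact BlinkMCPServer code – the port, baud rate and one-byte protocol here are assumptions; see the repo for the real thing):

import serial  # pyserial
from fastmcp import FastMCP

mcp = FastMCP("blink-led")
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port and baud rate

@mcp.tool()
def set_led(on: bool) -> str:
    """Turn the LED on or off via the Arduino over serial."""
    arduino.write(b"1" if on else b"0")  # assumed one-byte protocol
    return "LED is now " + ("on" if on else "off")

if __name__ == "__main__":
    mcp.run()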


This project was built using Aider – pair programming in your terminal.
https://aider.chat/

Check out the demo video where I tell the AI to turn on my LED – all done from Cline agent inside VSCode!

AI Generated Images for POV Poi

I have been saying for some time now that I wanted to train an image generation model on a ton of POV Poi images to make one that I can use for Magic Poi image generation.

“I want a Celtic knot image – blue with black background” should ideally generate something usable on the poi. Until recently, tools like DALL-E, Stable Diffusion and Midjourney simply weren’t up to the task. The type of image that works on my poi (dark background, primary colours, pixel perfect, small size) just wasn’t an option – most pics end up blurry and look worse than making it yourself in Photoshop*. Don’t even get me started on how annoying the ChatGPT image generator is! “Image needs to have no white, only primary colours” (white image generated) “Zero white PIXELS!!!!” (still blinds me with a glaring white background)

I recently went looking again and actually found some! There is an aptly named service, theresanaiforthat.com, with a ton of free services listed. I have linked the best ones below.

Some Example Images

Services:

https://free.theresanaiforthat.com/@kadi_228/pattern-generator-srcset

https://free.theresanaiforthat.com/@zurcher/decorative-patterns

More to view here:
https://free.theresanaiforthat.com/pattern-images/

I’m not sure whether you need to sign in for these to actually work, but all of the ones I linked are free!

I still have that trained image generator for poi pics on my list of things to do, but it seems I may have a better starting point to work from soon, if these projects are anything to go by. The field is advancing rapidly!

*I actually use Krita, open source image editing software with Pixel brush. It’s really good.

LLMAP advanced context generation for AI coding

I use the open source and very capable Aider for AI code generation. Anyone who has tried AI code generation has heard of “context” – the information you send over to the LLM so that it knows what the code you want to modify looks like.

Anyone using these tools eventually comes up against the dreaded token limit – some sooner rather than later! The LLM can only keep so much context in memory at one time. This is annoying if you are trying to update a legacy codebase with multiple files and countless lines of code – who wants to pare all that down to only the relevant parts and copy-paste it into the window?

In Aider, adding files to context is as simple as /add file.txt
To remove files, just do /drop file.txt
If file.txt is 100 000 lines long you have to copy and paste only the parts you want (up to 60 000 tokens) – you can check how many tokens you have left by running /tokens
But that takes too long! It defeats the purpose of getting AI to do the reading and modifying for you (remember to review the diffs after!)

Meet LLMAP

llmap is “context extraction at scale”. The tool can search and summarise vast amounts of code and output only the relevant parts, which you can then add to your AI coding tool’s context – leaving out everything irrelevant (usually most of it!).
An example from my own use case: I had an issue with a looping API request on the ESP32 in my Magic Poi project – I had been concentrating on the battery monitor feature, and the new feature broke some other functionality. So I did a diff with the last known working branch:
git diff origin/Battery_Monitor_Main_Merge > context_git_diff.txt

This diff goes quite far back, though, so it was too large for the context window (once you include all of the code I wanted to update). I had to run it through llmap with a query:
echo context_git_diff.txt | llmap "list the changes made that affect the control flow of the application." > llmap_diff_context.txt

Now my new file llmap_diff_context.txt, containing only the relevant information, could be added to the context – using /read-only since it’s not included in git, nor do I want to edit it. I used /architect mode* to work out what change had caused the loop. Turns out it was a simple misplaced line of code, and everything worked again!
*for me, architect mode is configured to use DeepSeek R1 for thinking and V3 for editing – cheap and effective
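In aider terms that configuration is just two model flags (the flags and model names are from aider’s documentation – double-check them against your version):

aider --architect --model deepseek/deepseek-reasoner --editor-model deepseek/deepseek-chat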

LLMAP is easy to use – just install via pip. By the way, if you don’t have a DeepSeek API key: the update I recently submitted to add OpenRouter support has been merged but not yet published, so you will have to download and install llmap manually to use OpenRouter – but it works.

Full credit to Jonathan Ellis for creating and sharing this great tool. You gotta love Open Source!

Building 10 websites at once using AI

Using the Aider AI coding assistant and the DeepSeek API, I got 10 usable websites made in about 30 minutes. This is just a test of what is possible with AI coding – the only thing I did to the websites in the video was add a few images to make them work.

To have your website created and hosted with the help of AI visit https://devsoft.co.za – obviously if you pay me I will do it properly!

I think above all this video demonstrates that AI is a tool – it needs proper guidance to get things right.

The Details

I made a script which used Aider:

*I’ve now found the problem I had in the video – no mention of the image file format!

#!/bin/bash

set -eo pipefail
shopt -s nullglob

# Configure Deepseek API
export AIDER_MODEL="deepseek/deepseek-reasoner"
export DEEPSEEK_API_KEY="$DEEPSEEK_API_KEY"

# Process each website folder
for dir in */ ; do
    (
        echo "Building website in: $dir"
        images=("${dir}images"/*)
        # Use aider to build the website from brief.txt (paths relative to git root)
        aider --yes-always --no-auto-commits --no-stream --message "$(printf "Make a website (or improve the existing site) using the following instructions. Make sure to use only the images in images folder (they are always named img img1 img2 etc) when building the site. If any other images are referenced, remove them. Do not leave the website unfinished, add placeholder content where an unfinished element would be always. \n\n%s" "$(cat "${dir}brief.txt")")" "${dir}index.html" "${dir}script.js" "${dir}styles.css" "${images[@]}"
    )
done

echo "All 10 websites built successfully!"

The script relies in particular on the brief.txt in each website folder. Here is an example from the video:

Custom Candle Maker: Wants a moody, scent-driven design with a product carousel, an ingredient transparency section, and a "Build Your Candle" form (wax type, fragrance, color). Mentioned competitors using "slow-motion pour videos" but prefers static images to save costs.

The 10 prompts were made using the DeepSeek chatbot, which I asked to create customer requirements for the 10 websites – with variety.

Conclusion

The AI does a good job but needs a lot more information than the vague, short prompts I actually gave it. In a real-world scenario with an actual customer I would most likely be much more specific – with better results, no doubt.

It was fun to watch, however!

I cloned ChatGPT Operator using DeepSeek R1

Operator is in the news – one of my friends recently shared a video of someone using it to buy stuff online. It runs on ChatGPT’s o1 or o3 reasoning models. DeepSeek R1 is just as good, right? Let’s clone it!

How I did it

  1. Found a tutorial online
    – of course I’m not the first person to try this
  2. Added the tutorial webpage to the Aider context (to send to DeepSeek)
    – I did skim it first; it looked OK, using cool libraries and stuff
  3. Added some specifications (prompt) and told DeepSeek to make it for me
    – using Python with Gradio and Browser Use, a way to control Chrome easily with AI (see the sketch after this list)
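The glue code ends up surprisingly small. A sketch of the core (not my exact clone – the Agent API follows the Browser Use README, and the DeepSeek endpoint details are assumptions to verify against their docs):

import asyncio
import os

import gradio as gr
from browser_use import Agent
from langchain_openai import ChatOpenAI

# DeepSeek exposes an OpenAI-compatible API, so ChatOpenAI with a custom
# base_url works (endpoint and model name are assumptions – check DeepSeek’s docs)
llm = ChatOpenAI(
    model="deepseek-reasoner",
    base_url="https://api.deepseek.com/v1",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)

def run_task(task: str) -> str:
    # Browser Use drives a real Chrome session until the task is complete
    agent = Agent(task=task, llm=llm)
    history = asyncio.run(agent.run())
    return history.final_result() or "(no result)"

demo = gr.Interface(fn=run_task, inputs="text", outputs="text", title="DeepSeek Operator")

if __name__ == "__main__":
    demo.launch()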

The result

DeepSeek with Aider’s “Architect mode” did an amazing job. It required some guidance about what exactly I wanted – the tutorial was more of an inspiration than something I wanted to copy exactly. As usual there were errors, and I had to spend a few hours asking the AI to debug code – as well as doing it manually on occasion. AI programming is not foolproof and will not replace us in its current state – it’s a tool only. A great tool – but only a tool.


I love good tools.

DeepSeek decided that it needed to record its progress as a GIF made up of screenshots. I didn’t ask it to do this, and it wasn’t mentioned in the tutorial either.

Here is the slideshow for “Find me the cheapest Android phone on gumtree”. It also returned the results in text format.

Conclusion

It was super fun asking my “DeepSeek Operator” to do something and then watching it click around in the browser, even working out that it had to accept cookies before continuing!

As you can see, the tools for this type of operation exist in the Open Source world already. I think that Agentic AI is almost ready to take over the browser and do a lot of your work for you – if you are willing to trust it. It could be great for research for example.

Having said that, the DeepSeek R1 model was pretty slow via the API. I think that may be a problem with reasoning models in general, though. Also, it didn’t find the actual cheapest phone – I did that myself in a few seconds afterwards to double-check the result.

What do you think, should I publish my Python powered OpenAI Operator clone to GitHub?

Update:

I published it! You need a DeepSeek API key for this to work*

DeepSeek Operator on GitHub

*currently the service is down again, hopefully they will find a way to scale it soon