We are really getting there this time. Let’s review what we have done so far with Magic Poi and then I will try to make some guesses about when the actual launch will happen!
What is done?
test board 1 (some issues)
test board 2 (some issues, better than #1)
battery monitor upgrade
LED strip upgrade
parts design and 3D modeling for the outer shell
battery charger prototyping and testing
test board 3 design (ready for order)
Still to do:
order test board #3 (this week!)
1-2 weeks until delivery
put everything together
put the charger together
test everything (spin actual poi!)
build 3 complete pairs of poi for the 3 founders (Tom, Brett and Dylan)
build the code out from “development” version to “full spec”
test code
finalise design
offer “Alpha” boards and full “Magic Poi Alpha” poi for sale to early adopters (targeting Circus Scientist followers; Patreon supporters get a discount!)
refine firmware and server code with feedback from early adopters
set up Indiegogo campaign
marketing and advertising
mass production
ETA?
As you can see, we still have a lot to do! A lot hinges on this latest circuit: if it is sound, the rest will not take long. The code doesn’t even need to be finished for us to sell Magic Poi. The old SmartPoi firmware is stable and works on the new hardware just fine, and you can update to the full Magic Poi firmware at any time (OTA updates are built in).
Dylan from EnterAction is a qualified plastics engineer, so I have complete confidence that the end result is going to be very pleasing to look at and to use.
Keep following for more updates! A few more months to go?! Now is a good time to sign up on my Patreon to support my AI coding and hosting costs as we ride the downhill to product launch. All paid supporters will get a hefty discount on the prototype poi when it comes out, plus a direct line to give feedback on the features we build in as we grow Magic Poi.
“Vibe Coding” is the new thing, and I consider myself a proponent of it. Coming from my “Copy Paste from StackOverflow” style, it was a natural fit.
Having said that, I still (mostly) try to read the code. Sometimes you have to, especially in PlatformIO. AI simply doesn’t work as well for embedded code, probably because a lot of the good stuff is proprietary and was never available for LLM scraping; less code in the training data means less effective models. That makes the code–test–fix loop a bit of a hassle. And with microcontrollers it’s really a code–upload–monitor–test–fix loop (never mind boards running web servers, or interacting with web servers and apps!).
Simply put, I waste a lot of time waiting for the AI to finish coding so I can run a command or press a button to upload and check something.
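To make the bottleneck concrete, here is a minimal sketch of that loop scripted in Python. The pio commands are the standard PlatformIO CLI; the environment name, baud rate, and timeout are my own placeholder assumptions, not anything from my actual project.

```python
import subprocess

# Build and flash the firmware ("esp32dev" is a placeholder environment name)
subprocess.run(["pio", "run", "-e", "esp32dev", "--target", "upload"], check=True)

# Watch the serial output for a bit to see whether the board boots cleanly.
# "pio device monitor" runs until interrupted, so we cut it off with a timeout.
try:
    subprocess.run(["pio", "device", "monitor", "--baud", "115200"], timeout=15)
except subprocess.TimeoutExpired:
    pass  # we only wanted a snapshot of the boot log
```

Every pass through that loop is time I would rather spend on something else.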
Introducing Cline
Cline is an open source VSCode AI extension that works with open source models (you can also pay for a better experience). It can edit multiple files at once, run commands and interface with MCP servers! That’s all I needed for my test.
In the video below I show how you can easily get AI to do the coding loop for you: automating the boring stuff as well as writing the code. For this test I actually went with the free version of DeepSeek 8B, which apparently can run on your own GPU (I don’t have one, so I’m using OpenRouter here).
This is a pattern that I plan on building on in the future. I show how to add a file with commands for Cline to run (upload, monitor) and how I bring it all together with one big prompt so that Cline actually runs everything on its own!
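The commands file can be something along these lines (a hypothetical cline-commands.md; the pio invocations are real PlatformIO CLI commands, everything else is just how I would phrase it):

```
## Commands for Cline
- To build and upload the firmware: pio run --target upload
- To watch the serial output: pio device monitor --baud 115200
- After every code change: upload, then monitor, and report any errors you see.
```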
Future plans
I have also done something similar with the Puppeteer MCP server to automate browser testing: I run an embedded web server on an ESP32, with automated coding and monitoring, and Puppeteer clicks through the page after each code change to check that everything still works. I should make a video about that.
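If you want to try that setup, the Puppeteer MCP server gets wired in with the usual JSON config, something like this (a sketch; @modelcontextprotocol/server-puppeteer is the reference implementation, and the server name is arbitrary):

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```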
I still have to write a big instruction file for Cline containing all of the ideas described here. Then hopefully I can just tell it to build a project in full and it will do it (beware: Cline seems to suck up tokens really fast!).
I do want to add Aider as an MCP server for the coding part, though. Aider still has a lot of advantages, especially its Git integration and configurability (or maybe I’m just used to it). Hopefully the MCP option can integrate seamlessly with normal use (same config files and history?); I haven’t looked yet.
Also, I’m not very good at this part, but debugging is important, and sometimes it’s essential. I would love to automate the debugging part using AI. Maybe a PlatformIO debugging MCP server? We don’t have one yet…
MCP is the new AI buzzword. Even though I’m fairly involved in AI-enhanced programming on my own projects, it escaped my attention until about a week ago. It’s time to have a look.
What I am using
Since I don’t want to pay any money (MCP can drain your tokens pretty quickly!) I tried setting this up using local models first, but they are very slow on my laptop, so I went with DeepSeek Chat, which is cheap enough for this test.
DeepSeek: get your API key (or sign up for OpenRouter and use the free, rate-limited one!)
Example using ChatMCP
I will be using this simple calculator MCP server as an example: https://github.com/githejie/mcp-server-calculator. I happened to have qwen2.5-coder:1.5b already installed in Ollama, and it supports tool calling, but in the end I used DeepSeek Chat, because Ollama is a bit slow on my laptop (it does work, though).
In ChatMCP we add the tool like so:
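Under the hood this amounts to a server entry like the one in the calculator’s README (the name “calculator” is arbitrary):

```json
"calculator": {
  "command": "uvx",
  "args": ["mcp-server-calculator"]
}
```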
After configuring my DeepSeek API key in the settings (bottom right), I choose it from the menu.
DeepSeek Chat works fine (and it’s cheaper). I also got qwen2.5-coder to call tools, but it’s a bit slow on my laptop (it requires Ollama running in the background, and I don’t have a GPU).
You need to enable the tool:
Then just make the request:
As you can see, the AI used the calculator tool (spanner icon) to answer the request! There are so many tools available, from web scraping to controlling your Android phone. I even made my own MCP tool to turn on an LED.
I just took a photo with my Android phone by telling the AI to do it for me (using phone-mcp)! What will your MCP-enabled AI assistant be able to do?
NOTES
You can add MCP tools to your coding assistant now (e.g. Cursor). I am using Cline, which has a VSCode extension and supports the DeepSeek API (I already pay for this). The configuration looks like this (same format for “MCP Client for Ollama”):
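Roughly like this, anyway (a sketch of the standard MCP JSON config, reusing the calculator server from earlier):

```json
{
  "mcpServers": {
    "calculator": {
      "command": "uvx",
      "args": ["mcp-server-calculator"]
    }
  }
}
```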
As you can see, uvx saves a lot of configuration here (long story); otherwise you have to specify the path of your virtual environment yourself.
The most common MCP servers are Node- or Python-based. I am using Python, as it’s my preferred language. Node is pretty similar: just use npx instead of uvx.
Next Steps
Next up: converting all of my code to work with MCP. Seriously, if your projects aren’t MCP compatible, you need to work on it; I think this will be very important in the future. Check out FastMCP for a Python implementation.
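As a taste, here is roughly what my LED tool from earlier could look like as a FastMCP server. This is a sketch: the board’s address and the /led endpoint are stand-ins for whatever your own hardware serves, not a published API.

```python
from fastmcp import FastMCP
import urllib.request

mcp = FastMCP("led-control")

@mcp.tool()
def led_on() -> str:
    """Turn on the LED attached to my ESP32 (assumes a simple HTTP endpoint)."""
    # Placeholder address and path; swap in whatever your board actually serves.
    with urllib.request.urlopen("http://192.168.1.50/led?state=on", timeout=5) as resp:
        return f"LED request returned HTTP {resp.status}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which is what MCP clients expect
```

Point your MCP client at that script with the same kind of config shown above, and “turn on the LED” becomes a tool call.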