Smart Poi

Updating a Minecraft Bedrock Server

Recently I started hosting a Minecraft Bedrock server for my son – because Bedrock, it turns out, is waaay cheaper on resources than the old Java version. I’m talking 250MB of RAM here, vs 4GB for the Java version.
Anyway, I followed the excellent tutorial here: https://harrk.dev/dedicated-bedrock-minecraft-server-ubuntu-setup/ , and recommend you do too – there are loads of other setup tutorials online as well.

Update time

The Minecraft Bedrock client (an Android app in our case) is constantly being updated. Because of how version matching works, this means the server needs to be updated too.

Here is the best way I found so far:

  1. First find the latest version download url and copy it (https://www.minecraft.net/en-us/download/server/bedrock)
  2. Run a bash script to back everything up, download the latest version and restore your settings. I wrote one; it’s here: https://gist.github.com/tomjuggler/2f039a5d0160a4526943f556e8c60f66 (just replace the version and adjust the paths and ownership for your own setup)

That’s it – except the first script I wrote forgot to back up and restore allowlist.json, so I had to recreate that by hand. Luckily, if you know the XUID of a player (present in permissions.json) you can look up their name easily here: https://cxkes.me/xbox/xuid – or vice versa.
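The steps above can be sketched as a small bash function. This is only a minimal sketch – paths, file names and the service name are examples, and the gist linked above is the version I actually use (here `python3 -m zipfile` stands in for `unzip` so the sketch has no extra dependencies):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a Bedrock update helper — adjust paths,
# ownership and service name for your own setup.

backup_and_update() {
    local server_dir="$1" backup_dir="$2" zip_file="$3"
    # 1. Back up the whole install (worlds + config) before touching anything
    rm -rf "$backup_dir"
    cp -r "$server_dir" "$backup_dir"
    # 2. Unpack the new server build over the old install
    #    (python3 -m zipfile stands in for unzip here)
    python3 -m zipfile -e "$zip_file" "$server_dir"
    # 3. Restore the settings files the new build just overwrote —
    #    including allowlist.json, the one I forgot the first time!
    local f
    for f in server.properties permissions.json allowlist.json; do
        if [ -f "$backup_dir/$f" ]; then
            cp "$backup_dir/$f" "$server_dir/$f"
        fi
    done
}

# Typical use (stop the server first, start it again after):
#   sudo systemctl stop bedrock
#   curl -Lo /tmp/bedrock.zip "<url copied from minecraft.net>"
#   backup_and_update /opt/bedrock /opt/bedrock-backup /tmp/bedrock.zip
#   sudo systemctl start bedrock
```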

DIY AI Browser with Chrome-Devtools MCP

Who needs an AI Browser when you can just use Aider-CE (Aider Community Experimentation) with Chrome Devtools to automate your browser with exactly the same result?

In the video I automate downloading some images for Halloween – for my SmartPoi project. It is pretty slow but I don’t think the AI browsers are much faster? Anyway the point is you can tell it to do something and then go away and do something important, like watching Anime.

Tech used:

  1. Aider-ce in Navigator mode
    – this fork of Aider is coming along brilliantly. I have started contributing to it myself – the best version of the best AI coding assistant, imho.
    – you can use any other MCP capable AI assistant of course. (I previously had success with OpenCode)
  2. Chrome-Devtools official MCP
  3. Chrome Browser (I think it also works with Chromium?)

Method

Tested on Arch Linux.

  • Run browser with remote debugging: google-chrome-stable --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-test
  • Add Chrome Devtools to .aider.conf.yml file (at the bottom):
mcp-servers: |
  {
    "mcpServers": {
      "chrome-devtools": {
        "type": "stdio",
        "command": "npx",
        "args": ["-y", "chrome-devtools-mcp@latest", "--browser-url=http://127.0.0.1:9222"]
      }
    }
  }
  • Run Aider-ce and use navigator mode. For me that looks like this:
aider --model deepseek/deepseek-reasoner --editor-model deepseek/deepseek-chat --disable-playwright --yes-always
/navigator

Example Prompt:

I have visualpoi.zone already open in connected Chrome Browser. Please search for and download the top 5 Halloween images

Video:

(no sound)

Failure is part of Success

I just want to talk about failure for a bit. One of the best parts of working with Embedded systems (Arduino and PlatformIO for me) is when you get things right. Somehow all of the fails make it that much more satisfying when things do work.

Example: AI programming

I have been using AI programming heavily for more than two years now. Coding assistants such as GitHub Copilot, Cursor IDE, Cline, Roo Code, gemini-cli, claude-code, opencode and aider (and many more!). All of these have their own way of doing things and take time to set up correctly. When things fail to work you always have to check: is this a failure in my understanding, or in the tool itself?

The most recent issue I have been addressing with various tools is “context overrun” – basically the AI coding assistant has a limited amount of tokens, or memory, which it can use to do its text prediction (writing code). Once you fill the context up, the coding assistant either needs to drop some of the text (code) from the “context window”, or the response from the associated LLM will be an error.

Some of the assistants have tools to deal with this – which can mean simply dropping files from context, compressing (summarising) the full text, or just informing the user (me) that we have an issue. There is no perfect way to deal with it: we can lose important information either way.


SmartPoi-JS-Utilities

In its current form, SmartPoi-js-utilities is mostly one very large html/css/js file which takes up 30 000 tokens of an LLM context window. To put that into perspective, DeepSeek has a total context window of 65 000 tokens, so just one file was using up half.
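You can get a rough feel for how much of the window a file will eat before handing it to an assistant. A quick sketch, assuming the common (and very approximate) four-characters-per-token rule of thumb:

```shell
# Very rough token estimate: ~4 characters per token on average.
# Real tokenizers vary by model, so treat this as a ballpark only.
estimate_tokens() {
    echo $(( $(wc -c < "$1") / 4 ))
}

# e.g. estimate_tokens path/to/big-file.js
```

A 120 000-character file comes out at around 30 000 tokens – half of DeepSeek’s 65 000 token window gone before the conversation even starts.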

I had to re-factor (split the file up) in order to be able to make changes with AI without running out of context. I tried claude, gemini and opencode but none of these worked. They all ran out of context while trying to keep all old and new files in memory simultaneously.

Aider-ce is the best?

Aider is my favourite AI coding assistant. Recently, developments in AI coding such as MCP and LSP meant that Aider was falling behind. So the community came together and added those to a forked version (which is currently still compatible with the original aider): aider-ce (Aider Community Experimentation). This version has a very sophisticated method of tool calls which enables very fine-grained edits, something not seen in other AI editing packages – as well as MCP integration and access to built-in language packs (rules about how computer languages are supposed to work).

In addition to the upgrades mentioned above, the new version incorporates a “todo” system which keeps track of progress (adding the agentic behaviour previously lacking in Aider) – sometimes you want the AI to just “get on with it” and complete a bunch of steps, like re-factoring a 30 000 token file!

It’s not perfect – a work in progress, actually – but with the new /navigator mode’s fine-grained editing capabilities, in combination with Aider’s already amazing multi-file editing and git integration, I think I have the tools to do almost any job. Certainly we are getting much closer to the ultimate tool for AI assisted coding – for people who care about:

  1. Open Source
  2. User Control (you can check every step, /undo at any point)
  3. Choice (choose the MCP service, whether to edit the whole file or just tiny parts of it, choose the LLM)

Thanks to the Devs at Aider and Aider-CE

I can’t name everyone (Dustin Washington is the maintainer of aider-ce; the original Aider was created and maintained by Paul Gauthier), but their handles are in the commit messages attached to each painstaking line of code – making my code almost effortless. It’s not about the code, after all; it’s about getting a job done. Teamwork and never giving up.

Conclusion:

I should have called this blog post “A peek under the hood”, because that is what we are doing here. At the end of the day the users of SmartPoi don’t care HOW the code is produced, just that it works. If I did it right, the fact that I totally re-factored the entire codebase of SmartPoi-js-utilities should be completely invisible from the front end. There is effectively no difference; all it means is that it is now easier for me to work on the code.

The point is that I spent a week doing this so we can move forward with better things – and by the way I did the same for the Magic Poi website. I guess you could say that I just suck at JavaScript.. after all, the Python back-end is just perfect, as-is :-p

Notes:

Even with Aider a full re-factor is not smooth sailing. I used a clever trick to fix the problems after completing the upgrade (problems like missing functions, incorrect addressing..)

git diff main -- Combined_APP/main.js > gitdiff.txt

Then just get the AI to reference the diff and search for any missing functions – or if any errors come up at least we have a working code reference point.

I really must look at the possibility of adding this as a tool to aider-ce. Git is already integrated, so it might just be a simple job to include diff checking for re-factor, which after all is a very common thing for developers to have to do.
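In the meantime, the same check can be done by hand with grep and comm: compare the function names in the old file against everything in the new split-up files. A sketch (file paths are examples, and it only catches classic `function foo(...)` declarations – a real tool would want to handle methods and arrow functions too):

```shell
# List function names that exist in the old file but in none of the
# new files. Crude on purpose: matches `function name` declarations only.
missing_functions() {
    local old="$1"; shift
    comm -23 \
        <(grep -hoE 'function [A-Za-z_][A-Za-z0-9_]*' "$old" | sort -u) \
        <(grep -hoE 'function [A-Za-z_][A-Za-z0-9_]*' "$@" | sort -u)
}

# e.g. missing_functions old_main.js Combined_APP/js/*.js
```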

OpenCode with Chrome Devtools automatically testing Magic Poi website

I said I would be using OpenCode again and today I really gave it a spin. I am impressed!

Check out the video below, where I try out the official Chrome DevTools MCP and get the AI to click around on the website to make sure the filters all work (OK, I know they work – it’s just a demo!)

Links:

  1. OpenCode project: https://opencode.ai/
  2. Chrome Devtools: https://developer.chrome.com/blog/chrome-devtools-mcp

If you don’t already, follow along with the progress of Magic Poi on Patreon: https://www.patreon.com/c/CircusScientist

The DeepSeek credits used in the making of this video (and most of the codebase) were paid for by Patreon subscribers. Thanks – especially to Flavio, my biggest supporter. Check out his amazing LED hula hoops and more here: https://www.instagram.com/hoop_roots_lovers/

Re-factoring a large Flask template to accommodate Jinja and AI coding

The problem:

For the Magic Poi project, the online website and API for the IoT devices (poi) to connect to is all done with Flask. Flask handles the web front- and back-end using Jinja, a templating library. In Flask you have templates with placeholder variables like {{ image1 }}, which will put “image1” into the web page where you need it, for example.

In traditional JavaScript we would have separate .js, .css and .html files – and the JavaScript can be split into pieces, with each part handling a particular section. For example, in Magic Poi we have Images, Timelines, Parties, Friends and more. Using Jinja, we can do all of the heavy lifting (and syncing with poi) in Python on the server, and the placeholders make the html simple to tie in to the rest of the code (we just use the same names and logic). But that comes with a limitation: we cannot use these placeholders in a separate .js or .css file, since only templates get rendered – an unfortunate limitation of Jinja templating.

So that left me with the main “profile” page ending up with thousands of lines of html, css, and js after adding all the features up until now (and we are not nearly done).

Time to re-factor

Re-factoring is the process of moving code around to make it either easier to read or easier to extend and update – or in this case, both. Having more focused sections would make it easier for me to follow the code, but how would having a bunch of smaller files make it easier to extend? Well, it turns out that Aider, my preferred AI assistant until recently, has a problem. In order to edit files it needs to send them through to the LLM (in my case, mostly DeepSeek). It sends the whole file – and the profile page was up to 30 000 tokens (a token being roughly a syllable or part of a word). DeepSeek has a maximum of 65 000 tokens, including the response, so I would often run out of tokens just trying to ask about some change I wanted made to the functionality.

Luckily I found a solution, but it wasn’t easy!

First of all, since Aider could not make the edit within its context limit and I really don’t have the time to look at thousands of lines of code (multiple JS classes, with tens of functions each), I turned to another tool – gemini. Gemini is not as capable as DeepSeek, however the official CLI tool is free and, most importantly, it has a context limit of 1 million tokens. Since the re-factoring was a basic operation I used gemini to do it – after many hours it was done, with a web page that didn’t quite work. Like most AI coding, it was just close enough though…


I guess it might help to describe what I was trying to do. Simply put, instead of having one “profile.html”, I needed multiple .html files. So the back-end renders profile.html, and it has sections like this:

<div id="tab-content-wrapper">
    {% include 'profile_timelines_section.html' %}
    {% include 'profile_my_images_section.html' %}
    {% include 'profile_shared_images_section.html' %}
    {% include 'profile_friends_section.html' %}
    {% include 'profile_parties_section.html' %}
    {% include 'profile_my_poi_section.html' %}
</div>

We also have .html files for the associated JavaScript (and CSS). That way the timelines section, for example, consists of 3 files (it’s a little more complicated – there are some shared parts too – but essentially 3 files for each functional piece). The main thing is that these are a lot shorter than the original profile.html.

Aider wasn’t cutting it

I had recently been looking at the “Community Edition” of Aider. This differentiates itself with the ability to incorporate “MCP” servers (add-ons) which for example can search the web, control the browser for testing, check code documentation and more. A new option in the Aider-CE is the “Navigator” mode which uses search and replace within files instead of the default “send the whole file” functionality.


I used aider-ce to fix up the issues introduced by gemini. Mostly missing or duplicate code. Easy to fix if you just compare the original with the new version.

Conclusion

The website, https://magicpoi.com has the new re-factored code up and running – and after a week of working on re-factoring it now looks and acts exactly the same as before.

I learned that Aider has some serious limitations, but that is being looked at by the community of programmers who use it (and tested by people like me).

And I learned about a new pattern for my Flask/Jinja templates, which really works.

Going forward

I am now using aider-ce instead of main aider branch.

But – I found another open source coding assistant which seems to be built from the ground up with the limitations of LLMs like DeepSeek in mind: OpenCode. This project has many bells and whistles, like MCP extensibility, a kind of “Architect” mode (similar to Aider’s, but it keeps a list of “todos” and executes them in order) and more.

Stay tuned for more about that!

DeepSeek browser testing with claude-cli and chrome-devtools-mcp

Recently I installed claude code – without an Anthropic account. I am using DeepSeek api because it’s good, and cheap.

For website updates, it is helpful to have the AI automate some things, to check that the updates work irl. Recently Google launched their official Chrome DevTools MCP. I heard it works great with claude cli. Turns out it also works great with claude cli hacked to use the DeepSeek api!

Installation

(After installing and setting up claude-code-router and claude cli)

Just run: claude mcp add chrome-devtools -- npx chrome-devtools-mcp@latest

Now you can run the claude-code-router in your coding project directory:

ccr code

and then type /mcp to check – the devtools should be listed.

Running

You can do this in several ways, but I like to run the browser and get the mcp to connect to already running session. On my Arch linux laptop I do this:

google-chrome-stable --remote-debugging-port=9222 --user-data-dir=/home/tom/Downloads/temp_chrome/

You need to specify the default debug port and a directory for your session (if you don’t add a temp directory, the AI might have access to your Chrome session, logins etc. Maybe you want that?). Chrome will think you are starting it for the first time – at least the first time you do this – so click through the prompts.

After Chrome is running, you can ask claude cli (with DeepSeek api back-end) to do stuff in your browser. Just ask it to use chrome-devtools-mcp and it will.

I am hoping to use this a lot, make changes and then get the AI to test everything works, iterate..

Don’t do this

ccr code --dangerously-skip-permissions

– gives the AI full control of your computer, with no need to ask your permission to do whatever it wants. Including finishing your project while you watch anime! (Or deleting everything on your hard drive…)

I hacked Claude Code to work with DeepSeek API

OK I didn’t do it all by myself – I just ran the reverse engineered proxy called claude-code-router which is already set up.

On Arch linux it’s as easy as doing:

yay claude-code 
yay claude-code-router
# and then finally:
ccr code

“ccr” runs claude cli behind a proxy – and routes all requests to your LLM service of choice (even local LLM if you have a powerful enough computer). When it runs for the first time you get to add your api details (in my case DeepSeek api).

You can also install it in other ways – like with npm/npx.

Many people were telling me about how good Claude Code CLI is. Now I can try out Claude Code CLI without paying $$$ – and it works! Even paying Claude Code users are doing this, because they run out of credits (Claude is expensive) and DeepSeek is just as good?

I don’t even have an Anthropic account!

I am still going to be using Aider for my main coding assistant, because I still prefer the control it offers. But for some things having options like MCP integration and proper vibe coding flow is important (like my new take on “Space Invaders” done with Pygame, coming soon!)

Check out the claude-code-router project and see what you think!

PS: Claude Code and Aider share many similarities when it comes to syntax. commands start with “/”, files are addressed with “@”, you can “/clear” to remove chat history from context… I wonder which one came first?

Connect your poi to your phone – instead of connecting the phone to the poi

SmartPoi default setting

SmartPoi by default has the main poi with an access point enabled. When you want to control the poi, you have to connect to it the same way you would connect to your WiFi router.

Downsides:

  • Main poi have low signal power (they have to compete with other WiFi signals, and this actually fails in environments with a lot of WiFi routers around)
  • Main poi are doing a lot of work now – apart from displaying images they have to route signals to the auxiliary poi too.
  • No internet – while you are connected to the poi from your phone/computer, you don’t have access to the internet on that device.

Upsides:

  • Simple to implement (static IP addresses for both poi, router ip same as main poi..)
  • Full control (no problems with Router blocking WiFi traffic – this does happen!)

Alternatives

Router Mode is fully stable on the D1 mini version of SmartPoi now. You input your router SSID and Password, check the “Router Mode” box and the next time your poi re-start, you are connected.

Difficulties:

  • We need to find the poi IP addresses to upload images and send signals.
  • For discovery we need the Router IP address – usually this is easy to find in the WiFi settings of a PC or mobile, though.
  • For the JavaScript control interface there are (very technical) browser security settings that make it more difficult to send signals to IP addresses on your own network (to stop hackers). The poi already have code to work around this limitation, though (CORS).
  • You need TWO external devices – a phone for the web controls and a Router (and the router usually needs to be plugged in..)

Advantages:

  • Better signal (Routers are built for this)
  • Internet!

A middle ground (the point of the article)

The third way is to CONNECT YOUR POI TO YOUR PHONE HOTSPOT.

This DOES WORK. However there are some issues I was not aware of until I tried it.

  1. The biggest problem: for some reason it is VERY DIFFICULT on Android to get the HotSpot Gateway IP address (like we got the Router IP address before).
    How do we do this then? Well on the Android command line we can type in:
ip route | grep default

That’s it. Simple, right? Only Android does not come with a terminal by default, so it requires the poi spinner to install another app, or set up developer mode.
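If you do have a terminal on the phone (or on a connected PC), the one-liner above can be trimmed down so it prints only the gateway address. A sketch – it just parses `ip route` output, assuming the usual `default via <ip> dev <iface>` line format:

```shell
# Print only the gateway IP from `ip route` output (read from stdin).
# Assumes the usual "default via <ip> dev <iface>" format.
gateway_from_route() {
    awk '/^default/ {print $3; exit}'
}

# usage: ip route | gateway_from_route
```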

The other way is to connect to your HotSpot with a PC and look at the WiFi settings. Then we get the Gateway IP address (mine is 10.211.201.45 – so random!).
So far that’s all I’ve got – not ideal. The good news is your HotSpot Gateway IP does not change (I have read that on some devices it might, though). So once you have it you can:

CONNECT TO HOTSPOT

  • Router Mode on poi – type in the hotspot SSID and Password
  • put in the IP address of the Gateway
  • click on Discover Poi
  • Done!



Now you have a much better connection which is stronger than AP mode, and you still have internet on the poi!

Online Hackathon was a bust

I did learn about Kafka though – apparently the company putting on the hackathon is very interested in this distributed event streaming platform, which is so massive it needs 8GB of RAM just to load up. Not something I can just run on a VPS then, unfortunately. I also learned about Redpanda, the Kafka replacement written in C++ instead of Java, which I might want to look at in future.
For now, Mosquitto MQTT, which I already use, has a persistence option which is far less heavy on resources than either of these and might actually cover the same use case for smaller projects (in combination with a database).
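For reference, Mosquitto’s persistence is just a couple of lines in mosquitto.conf. A minimal sketch – the option names are standard Mosquitto settings, but the path and interval are examples:

```conf
# /etc/mosquitto/mosquitto.conf — minimal persistence sketch
persistence true
persistence_location /var/lib/mosquitto/
# write the in-memory store to disk every 5 minutes
autosave_interval 300
```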

Anyway what turned me off of the hackathon was:

  1. The requirement to not use AI assisted coding – this is the future of coding and I don’t do programming without my Aider coding assistant. I wouldn’t do coding without code completion, linter, documentation, testing, and versioning either.
  2. The problems were basic
    – Sort letters in a string (!). If I wanted to do that I would go onto leetcode or somewhere and get my fill of algorithm fiddling, or write one prompt in Aider and get the answer instantly (or just search StackOverflow). What a waste of time, is this really going to find out how good you are at actual programming?*
    – Basic Kafka event pipeline, type the code into a bash prompt – I kid you not, type the Python code into a bash prompt. What about indenting? How does this even..**
    – another algorithm challenge, tldr (I had already left at this point).

I feel sorry for any corporate wannabe programmers who have to deal with this type of thing to get considered. I am also sad that I didn’t have a chance to show my actual skills (not memorising syntax/algorithms) and compete for the prize.

Moving on..

*OK I looked up the actual use of “k unique characters” algorithm and it’s legit useful, but I am definitely not looking for a research position, just use the current state-of-the-art algo and move onto dealing with the data?
**I’m guessing this is to try and stop AI input?