I went to work extending that using Claude and Copilot. Code like this is way above my level, but that's the power of LLMs, and of course I'm in control!
This works. I can get some really great data analysis using the MCP now.
Flow:
Run and record on the Apple Fitness app. HealthKit automatically imports that workout, which I can then import into Strava (that last part is manual). Finally, use any MCP-enabled app to query the data; I'm using Claude.
Not the smoothest, but it works.
https://github.com/JackRegan/strava-mcp-v2
I have some half marathons planned later in the year, so I'll put the 'coach' side of the project to the test for those.
Covid left my wife and I bereft of the gym. Locked in. Allowed out for one hour a day. Bouncing off the walls, we decided to start running laps of Victoria Park.
Let’s be clear. At this point I’m watching half a century on planet Earth approach and, aside from representing my school at county level in the 100/200m, I have avoided running like … well, like Covid.
Fast forward six years and it’s happened. I have developed a shoe problem. “These ones are for speed work.” “These are for racing.”
Special shorts. Calf compression socks. A little head torch for dark winter mornings. I understand what a tempo run is. I find myself trying to steer every conversation towards my last 10K, my placing, or my PB.
Christ, I’m boring.
The one saving grace is that my wife kept at it too. She has fewer shoes, but at least we can share our dull chat.
What’s interested me for a while, though, is AI/LLM use for training and programme creation and adaptation. Essentially: building a silicon coach.
The goal is simple enough. I want something that can look at what I actually did — pace, heart rate, biometrics — and then help plan and adjust training blocks. Not vibes. Data.
The hard part is not the AI. It’s the plumbing.
I’m fairly locked into the Apple ecosystem. My primary run device is an Apple Watch Ultra 3 (big, which helps my half-century eyes). It’s mostly great. Setting up custom workouts is a bit of a PITA, but Workout Builder https://apps.apple.com/us/app/workout-builder-send-to-watch/id6450721774 solved that for me.
Alerts are hit and miss. I’d really like pace and %HR max alerts — which it can’t do. But once I switched from Spotify to Apple Music, things mostly behaved.
Then there’s the real problem: data.
Getting Apple Health data into anything usable is… grim. The native export is horrible. One enormous ugly file, zero filtering, zero joy.
My first experiment was https://github.com/krumjahn/applehealth - a local LLM setup using Ollama. Credit where it's due: well put together and genuinely fun to set up and play with.
The problem was always the export. Working repeatedly with that massive Apple Health dump was painful.
I tweaked the code. By “I tweaked the code” I mean GitHub Copilot tweaked the code based on my prompts (humans are still in charge). I modified it to consume exports directly from HealthFit (https://apps.apple.com/us/app/healthfit/id1202650514), an iOS app that can sync HealthKit workouts out to other platforms while preserving far more detail. At this point it felt like I was collecting data plumbing tools rather than actually running, but still — progress.
Still bumpy. Export files to iCloud. Sync them down to a Mac. Rinse and repeat.
And after all that, I discovered the killer: Apple’s export is missing most of the interesting workout data. Pace per km, detailed splits — gone. It’s basically just summaries.
Then I found Fulcra, a platform designed to consolidate health data across devices. I also own a few other wearables, so this was immediately interesting.
Fulcra works as a phone app with direct access to Apple HealthKit — no exporting required. They have a well-documented API and an MCP server.
Oooooh.
I wired Claude up as a desktop client. It was my first MCP config, so there were a few learning bumps, but once authenticated (which you need to do a lot — damn session timeouts), it worked well.
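For anyone trying the same, Claude Desktop picks MCP servers up from its claude_desktop_config.json. A minimal sketch of the shape; the server name, command and package below are illustrative placeholders, not Fulcra's actual launcher:

```json
{
  "mcpServers": {
    "fulcra": {
      "command": "npx",
      "args": ["-y", "example-fulcra-mcp-server"]
    }
  }
}
```

Restart Claude Desktop after editing and the server's tools show up in the chat UI.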
“Analyse my run on 29.12.2025. Pace, HR, biometrics.”
That felt like the future.
I also used this setup to generate a couple of 10K training blocks. Once I realised Claude is absolutely terrible at knowing what day of the week it is, it actually worked well. Varied sessions, sensible progression, goals achieved.
It did, however, try to get me to kill myself on race day until I pushed back.
So, still very much a coach, not a manager.
The problem with this is API timeouts and throttling: trying to crunch a lot of data would just fail. Claude conversation lengths are also problematic. Equally, Fulcra is £15 per month. And despite a wonderful UI that looks like it should surface all sorts of insights, it’s really just a very pretty gateway to the data I want.
Which led to a rethink.
I already use Strava, mainly for the “look at me and what I did” social side. So: should I switch up and lean on the Strava API instead?
After about half a day of fighting, I got it working. (strava-mcp/book/src/setup-developer-credentials.md is your friend.)
Claude is now plugged into my Strava data.
The remaining issue is familiar: data completeness.
While Apple Workouts do sync into Strava, they’re missing key metrics — cadence, stride length, and other biometrics. So I’m back at the same fork in the road.
At this point the options look like:
Record runs directly in the Strava app (better data, worse watch UX), losing all my custom workouts
Dual record (effective, annoying)
Use HealthFit as a bridge to get richer data into Strava
Given my eyes have finally adapted to the Apple Watch workout screens, I’m going to try the HealthFit bridge route next.
Well, it's been a while since I wrote anything here!
Given the huge increase in phone thefts in the UK/London, I started reading up on the precautions you can take ahead of any horrible situation.
Phones are stolen for two reasons:
To sell the device on the second-hand market, probably shipped to another country with less strict rules and broken down for parts.
If the phone is unlocked when stolen, they will likely try to use your finance apps to steal money, or trick friends and relatives by impersonating you: "Hey [NAME], I'm in a pickle and need some money sent."
This post attempts to prepare you, and your phone in case the worst happens.
Enabling or disabling the right settings can slow the thief down, buy you some time to enable theft mode, and prevent the thief from enabling airplane mode, which would stop you tracking the phone, wiping it, etc.
Stolen Phone Preparation
Disable the Control Center on an iPhone when it's locked
Disabling Control Center on the lock screen prevents others from quickly accessing your phone's features, such as disabling Wi-Fi or Bluetooth, while it's locked. The flip side, however, is that you'll lose quick access to these features yourself.
Open Settings
Go to Face ID & Passcode or Touch ID & Passcode
Enter your passcode
Scroll to Allow Access When Locked
Find the Control Center toggle and turn it off
Disable control panel access from within an app
To disable access to Control Center from within apps on an iOS device, go to Settings > Control Center and toggle "Access Within Apps" off. This prevents Control Center from appearing when you swipe down from the top right corner of the screen while in an app.
Key points:
Access settings: Open the Settings app on your iPhone.
Navigate to Control Center: Select "Control Center" from the settings menu.
Toggle Access Within Apps: Find the "Access Within Apps" option and turn it off.
How does this help? If your phone is unlocked when stolen this buys you some time to enable your theft mode.
Screen Time privacy
To prevent unauthorised changes to settings such as Face ID and iCloud accounts:
Screen Time / Restrictions / Content & Privacy Restrictions
Disable accounts
Disable passcode & Face ID
Enforce PIN for changes
Go to Settings > Screen Time, then tap "Change Screen Time Passcode". Enter a numerical PIN and your iCloud user and password.
Stolen Device Protection
Go to Settings > Face ID & Passcode, enter your passcode, then scroll down and toggle "Stolen Device Protection". This feature enables Apple's security measures to protect your data if your phone is lost or stolen, requiring additional authentication even if someone knows your passcode.
Key points about Stolen Device Protection:
Access: You can find this setting by going to "Settings > Face ID & Passcode".
Activation: Toggle the "Stolen Device Protection" switch to turn it on.
Important features:
Require security delay: This setting can be set to "Always" to enforce additional security measures even in familiar locations.
Biometric authentication: Stolen Device Protection often requires Face ID or Touch ID to access sensitive information.
Secure Applications
If your phone is stolen and unlocked then the thief can use your apps to move money, buy stuff, contact friends and family in order to conduct fraud.
All financial, messaging and shopping apps must have Face ID or a second form of authentication which is checked when they are opened.
If they don’t have it natively then use iOS's built-in Require Face ID: long press on the application icon to enable it.
Password Manager
You should have one! And assuming you do then make sure you have an alternative way to access it without your phone. And check this works.
MFA Manager
As with the password manager I hope you are using multi factor (MFA) on your logins where possible. The app you use to manage those is probably on your phone. Make sure you have alternative ways to access it and backups.
Stolen Phone Automation
It is possible to use the iOS shortcuts application to create a combination of shortcuts, automation and focus to automatically run on events.
I have my phone setup to run a shortcut called Stolen Phone.
This shortcut takes front and back photos, saves them to the camera roll, and then switches the phone's focus to one I created called Lock Screen - Emergency. The Lock Screen pushes a custom wallpaper with STOLEN PHONE - CONTACT XXXXXX.
There is an automation that is triggered when it detects the focus switch to Emergency. That automation:
Locks the phone
Enables wi-fi
Sets airplane mode to OFF
Sets low power mode ON << I may change this as it won’t upload photos to iCloud
Shows a notification [STOLEN PHONE CONTACT XXXXXX]
Takes a front and back photo and saves to the roll (a second set of pictures as belt and braces) << This action doesn’t actually work, as the phone currently needs to be unlocked to take a picture from the automation function. It does still work from the shortcut.
Running shortcuts between the Apple Watch and the phone is rubbish. You can't target a device: running from the watch runs the tasks on the watch, not on the phone. The only way I can see to navigate this is to trigger the focus (Lock Screen - Emergency) from the watch, which mirrors onto the phone.
I also have another WIP automation that looks for keywords in SMS messages and uses that to trigger the automation. That lets me send an SMS to my phone and the lock screen profile will activate.
It's been rather quiet here for the past few months, for which there's a pretty good reason. In November the organisation I worked for and I parted company, by mutual agreement, and with a redundancy package.
Being so close to Christmas, and with the certainty of wanting to take some time off (I'd given myself until Feb/March '16 before starting a job hunt), the decision of what to do loomed. There was no way I'd be able, or be allowed, to sit and vegetate for that long. I toyed with a couple of London options, charity work, or perhaps giving my Sports Therapy alter ego some space. Or Madagascar!
A few years ago I saw a wildlife program which had a 2-3 min piece on a marine conservation charity called Blue Ventures (BV) who were operating in a remote Southwest corner of Madagascar. I'd filed it under 'things to do but will probably never get the opportunity'. This was the opportunity. Sun, sea, diving, community work. Light bulb moment! 7.5 weeks away from home, light bulb went off. We talked it through and the next day I'm sending deposits, booking flights & making lists, big lists.
I already had my PADI Advanced diving, which is the minimum requirement for being part of the science program, and a ton of dive equipment, but the BV list was huge, and as I found out later very much about diver safety. The inventory also included a seriously scary first aid kit, malaria prevention, nets, drugs and advice that we would be isolated for 6 weeks and need to arrive with 'everything' we could possibly want/need while onsite. I was going to need bigger bags!
LONDON / PARIS / ANTANANARIVO (TANA)
In Tana I'd meet the other volunteers and begin the 4 day drive to Toliara.
Simon & Liwia - Bravely quit their jobs in July '15 and have been working their way through African wildlife projects ever since.
Lucy - A wee bonnie lass taking a gap year from studies who arrived with more medical supplies and sun cream than the rest of us together. Which was actually very useful.
Anninja - A student from Switzerland who could only stay for the first 3 weeks before starting her medical studies.
We'd later meet Pierre, starting his 3rd expedition, and Adam, who was travelling from Zimbabwe and would join us in week 3.
The road from Tana took us over some of (what I thought were) the worst roads I'd ever travelled on, little did I know! On the first day we spent 9 hours at the side of the road waiting for a bridge to be re-built. Which they did manage, but it was a stark reminder that this is Africa, shit happens.
On the way we celebrated New Year, hiked in Ranomafana National Park, saw our first lemur (which neither sang nor danced; I think the film may have been factually incorrect) and hiked
Isalo National Park where we swam in the natural pools.
TOLIARA
Our last stop and last chance to buy anything we'd forgotten or anything we might want onsite, jams, sauces, snacks etc
After a few days of R&R we began the last leg of the journey to the village of Andavadoaka (Andava) and the BV site.
Google suggests this is a 3 hour journey. I strongly suggest Google send one of their mapping cars on that journey, with 4x4 rescue vehicles. It's a 9-10 hour off-road track that follows the coast; rocky roads, sand dunes and a stomach that would have much rather stayed in the hotel made for a lively journey! It made the Tana/Toliara journey seem like a breeze, and I was beginning to have reservations about what I'd got myself into.
ANDAVADOAKA
First day onsite and the remoteness hits. Andava is the largest village in the Velondriake Locally Managed Marine Area (LMMA) with a population of ~1700 people known as Vezo.
Velondriake - "To live with the sea"
The homes are built with wood and there is no running water. Wells provide semi clean water but the Vezo also use the boiled water from cooking rice as a clean source of drinking water. Slightly nutty/popcorn flavour which I developed a taste for, whilst others in the group shunned me. A power line has recently been run through the village, so if you can afford it there is limited electricity. But it's expensive and not the norm. Charcoal stoves and fires are the main method of cooking and the sea the main source of food.
Credit Simon Webber
In comparison our accommodation was palatial! Aligned to the Coco Beach 'hotel', we stayed in 5 huts sleeping 4 each, with a bathroom, flushing toilets and, most of the time, running salt water.
Credit Simon Webber
This was to be home for the next 6 weeks.
DAILY LIFE
The first couple of weeks were very much a settling in period. Getting used to a slower pace of life, 30-35 degrees heat, adapting to a very simple diet of rice, beans and fish; and fighting the inevitable illnesses volunteers tend to get.
We were also assigned duties on a weekly rotation. Handling the water filtration, cleaning up the Bat Cave/Dive hut and weather recording. Water and weather were little and often all day but cleaning tasks were 30 mins in the afternoon and then time to chill.
Each evening before dinner we'd find out what the following day's activities were going to be, but generally:
Breakfast 08:00
Dive 09:00 / 11:00 (luxury of a small group meant no 06:00 scheduled dives)
Lunch 13:00
Duties 14:00
Lectures 15:00 - 17:00
Vao Vao & Dinner 19:30. The call to dinner was a chant of Iraika, Roa, Telo, Aleha! 1, 2, 3 Go! ..
Saturday was an enforced no-dive day and Sunday was a day to do what we wanted. Which mostly meant nursing hangovers from the previous night's exploits with the local rum.
DIVING & SCIENCE
The diving was my main reason for choosing this adventure, the lure of the sea and the weightlessness underwater is just such a cool experience.
After a pretty comprehensive safety lecture, and after some early bacterial infections had been doused in antibiotics we had our refresher dives. Went through every single PADI skill, not something I've done since training, so a pain, but also a nice refresher.
Once cleared to dive we started benthic or fish id tests. Diving as a small group with one of the field scientists who would point and you take a crack at identifying what they were pointing at. Which needless to say started badly for most of us.
We were split into two groups, fish and benthic. Benthic was for the less experienced divers as, by the nature of the animal, they don't tend to bugger off, but you do need decent buoyancy, and Bic the BV instructor taught that exceedingly well. I've seen seasoned 100+ dive veterans with fewer skills than Lucy and Anninja demonstrated. I and the other already-qualified divers got fish: 150 of them to learn, for a 50-question computer test and a 30-consecutive-correct-answers underwater test. Until this expedition the first-time pass count was 2. Unfortunately for me I was teamed with Simon and Liwia, AKA the dream team, who doubled the first-time pass count. No pressure on me then.
Over the next few weeks we dived 1 - 2 times a day, some doing science & others training. I eventually passed the tests so joined the science team carrying out fish belts on reefs, a small audit counting fish, identifying species and collecting data which is collated and over time gives an impression of reef health.
COMMUNITY
Part of the BV ethos is that for conservation to succeed it must involve the community. During my stay we spent time on a number of community projects and also stayed with two local families as homestays. We visited the village of Vatoavo, which put Andava to shame with its remoteness and poverty, where I got to teach an English class while the other volunteers had arranged an English language treasure hunt for the kids. The village later put on a talent show, during which we discovered twerking is apparently a BIG part of Malagasy dance culture. It's all about the arse.
Credit Simon Webber
The homestays were something I wasn't looking forward to, but part of this trip for me was doing things outside my comfort zone. That and dealing with wet sand on feet and fish bones, but they would be handled over time, and under my control.
Lucy and I had dinner with a family in Andava, and the following day we would spend 'a day in the life' with the same family, helping them (or in my case hindering them) with whatever they would normally be doing. A few of the volunteers wanted to fish, so they did, but Lucy and I opted to stay on land, which meant I spent the morning playing cards with the children while Lucy got to help with the chores. Although I did attempt to crush some corn, which didn't go down well.
OTHER PROJECTS
BV run a number of other projects in Madagascar, three of which we had an opportunity to get involved with.
AQUACULTURE
In an effort to provide communities with an alternative income to fishing there are two farming projects: sea cucumbers and seaweed. Both are backed by companies who provide seed services and a route to market. They provide juvenile sea cucumbers and seaweed plants; the Vezo then nurture them until harvest, and the product is sold back to the companies, who then distribute it. Cucumbers are used as a filler/bulking agent in Asian markets, and seaweed in pretty much anything that's viscous: beauty products, apparently even some ice cream.
We got to take part in one of the periodic sea cucumber harvests. During low tide we helped the farmers collect and weigh cucumbers, anything >300g was catalogued and stored ready for the next morning. Their fate was sealed, evisceration. Cutting a hole in the anus and squeezing the guts out!
I think Lucy might be enjoying that, just a little too much! My single attempt resulted in the poor lady next to me getting covered in, well, sea cucumber bits.
SPIDER TORTOISES
Endemic to Madagascar the Spider Tortoise is critically endangered and in serious decline due to smuggling in the pet and food trade. We spent a day helping the rangers to monitor the population and catalogue the size, weight and age.
Credit Simon Webber
My time in Madagascar has come to an end and I've begun to try and reflect on the experience. It might be too early to really come to any conclusions, but it wasn't a total breeze, and there were times I'd gladly have taken a teleporter out. But I did some very cool things, met some great new friends and have new life experiences which I don't think many people get the opportunity to do.
Now I guess I need to find a job .. or maybe I'll find somewhere for my hammock.
OS X used to contain the binaries to configure 'dummynet' from FreeBSD, which has the capability to do WAN simulation.
Mavericks no longer ships the dummynet tooling but still has the code in the backend. Find and copy the ipfw binary from an older machine into /sbin and you're good to go.
Example:
Inject 250ms latency and 10% packet loss on connections between workstation and web server (10.0.0.1) and restrict bandwidth to 1 Mbit/s.
# Create 2 pipes and assign traffic to/from the web server:
$ sudo ipfw add pipe 1 ip from any to 10.0.0.1
$ sudo ipfw add pipe 2 ip from 10.0.0.1 to any
# Configure the pipes we just created with latency & packet loss:
$ sudo ipfw pipe 1 config delay 250ms bw 1Mbit/s plr 0.1
$ sudo ipfw pipe 2 config delay 250ms bw 1Mbit/s plr 0.1
Test:
$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: icmp_seq=0 ttl=63 time=515.939 ms
64 bytes from 10.0.0.1: icmp_seq=1 ttl=63 time=519.864 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=63 time=521.785 ms
Request timeout for icmp_seq 3
64 bytes from 10.0.0.1: icmp_seq=4 ttl=63 time=524.461 ms
Disable:
$ sudo ipfw list | grep pipe
01900 pipe 1 ip from any to 10.0.0.1 out
02000 pipe 2 ip from 10.0.0.1 to any in
$ sudo ipfw delete 01900
$ sudo ipfw delete 02000
# or, flush all ipfw rules, not just our pipes
$ sudo ipfw -q flush
Round-trip is ~500ms because we applied 250ms of latency to both pipes, incoming and outgoing.
Packet loss is configured with the “plr” parameter. Valid values are 0 – 1; in our example above we used 0.1, which equals 10% packet loss.
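To sanity-check what you've configured, dummynet can dump the pipe settings and counters back at you; a quick look before and after testing saves confusion (exact output columns vary by version):

```
# Inspect the current pipe configuration and traffic counters
$ sudo ipfw pipe show
```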
In my spare time I've been building a small Docker lab. I wanted to see what all the fuss is about and also to bring some reality to the kool-aid drinkers in the office.
I've been around long enough to know that there's no magic pill; variations of really good ones have appeared over time, but they all need to be mixed with something else.
Docker expands on Linux containers (LXC, built on kernel features available since 2.6) which allow a process to exist within its own space within the system. Similar to virtualisation, but without the hypervisor and the overhead a hypervisor brings by needing to be all things to all people.
Docker allows you to create and package a container. Let's say we have a simple Java SMTP service. All the components needed to run that service (Tomcat, the code) run within the container, which can be moved or copied somewhere else and will function in exactly the same manner.
Docker also comes with a registry, either public or private, which acts as a repository for Docker images. Now you can easily distribute containers or pass them along the dev pipeline to QA and Ops.
DevOps nirvana! The excitement is palpable!
And yes, if your service is 100% self contained then thats a valid statement.
It's when you start to try and build a bigger solution (and this is probably where my inexperience comes in) that you start to think, and find some of the downsides.
Docker deals with networking within the Docker binary. It serves local DHCP addresses to containers, which are then port-mapped to the host's IP. If a container is moved to a new host, its endpoint changes.
Intra container communication is via tunnels built between them, not via the network.
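To make the port-mapping point concrete, here's roughly what it looks like in practice (nginx is just a stand-in image; the ports are arbitrary):

```
# Run a container; Docker gives it a private bridge address,
# and -p publishes container port 80 on host port 8080
$ docker run -d --name web -p 8080:80 nginx

# Ask Docker for the mapping
$ docker port web
80/tcp -> 0.0.0.0:8080

# Consumers reach the service via the HOST's IP and the mapped port.
# Move the container to another host and that endpoint changes.
$ curl http://<host-ip>:8080/
```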
How do you find a service?
The answer to that question has already been dealt with by others doing true SOA or web scale: write a service registry, use queuing, load balancers/APIs, ZooKeeper. It's a problem that's been solved by anyone doing dynamic scale, but a tweet/blog post pointed me at Consul.
At its core Consul is a really clever service registry, layered with health checks, clustering and multi-datacentre support, which can be queried using an API or via name lookups (DNS) to the Consul service port. It can also integrate with something like dnsmasq to redirect queries, allowing seamless integration into an existing environment where DNS is already used to locate services.
Consul is a small binary which, in my case, lives within the container (it could just as easily sit on the host OS) and uses a config file to determine what to register with the Consul servers. In my lab it's a static config, but in reality you would use a CM tool (Puppet, Chef, Salt, Ansible) or automatically generate the config using a handy add-on, consul-template.
The local Consul agent deals with health checks. Nice: immediately a distributed system. The Consul servers (minimum of 3) run in a clustered mode which the local agent is aware of, so there's registry HA built in.
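As a sketch of that static config, a Consul service definition is a small JSON file, and once registered the service is resolvable over Consul's DNS interface (port 8600 by default). The service name, port and health-check URL below are made up for illustration:

```
# /etc/consul.d/web.json - illustrative service definition with an HTTP health check
$ cat /etc/consul.d/web.json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}

# Query the registry via DNS on the local Consul agent
$ dig @127.0.0.1 -p 8600 web.service.consul +short
```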
In summary, I'm pretty impressed with Consul. It's early days, but something to keep an eye on.
But back to Docker.
Docker in itself is not yet a one-stop shop. Maybe it's not supposed to be, but other players are entering the game to add to the package, and I think they will continue to. Is Docker a death knell for virtualisation? If you're a web-scale company and all you do is web services, then yes, it probably is. For the enterprise, or shops that are not developing apps, probably not. And you can of course run Docker on hypervisors.
It also requires the dev teams to shift their model. I know lots of places are SOA and micro services, but lots aren't. Docker to them is not that magic pill.
Something that hadn't occurred to me until I watched the talk AppSec is eating security is the security benefit containerisation brings. The host can be a massively cut-down OS, and each container contains only the bare minimum to run its services. The service also has no state; its IP is dynamic; it has no fixed abode. The attack surface is not only reduced, it becomes all slippery. Patching is also (in theory, and if you code correctly) a breeze.
But on the flip side:
Docker is potentially a game changer, but not without work and consideration.
CentOS 7.1 behind a Squid proxy. Docker installed using yum.
$ docker info
FATA[0000] Get http:///var/run/docker.sock/v1.18/images/search?term=apache: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
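For what it's worth, that particular error usually means the Docker client can't find the daemon's socket because the daemon isn't running. On CentOS 7 the daemon also needs to be told about the proxy explicitly for image pulls. A sketch of the usual systemd approach; the proxy host and port are placeholders for your Squid instance:

```
# The socket is missing because the daemon isn't up; start and enable it
$ sudo systemctl start docker
$ sudo systemctl enable docker

# Image pulls happen in the daemon, which won't see your shell's proxy vars;
# give the service its own environment via a systemd drop-in
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://squid.example.local:3128"
Environment="HTTPS_PROXY=http://squid.example.local:3128"
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```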
I don't particularly need a 2Gb trunk from the NAS, but a recent switch upgrade to a NetGear GS716T managed switch (which for £120 is bloody good value) gave me the option.
Set the NAS to bonded 802.3ad and created the LAG group on the switch. Easy.
Two days later I noticed my Windows machines had lost their SMB mounts, Linux boxes all fine. Disabling one of the NAS bonded ports brought it all back.
I suspected some kind of ARP timeout. Switching the LAG port from STATIC to LACP brought everything back, and it has remained stable since. No idea why Windows was FUBAR and not Linux; without cracking open Wireshark I can only guess.
Documentation is decent and I had no issues until I started the deploy on the destination: the decompression failed with a PHP error when I ran installer.php. I followed the FAQ and the manual extraction process, which worked fine: upload the decompressed files and the archive, run installer.php and follow the prompts.
Site 'duplicated' and working! ... almost. The primary network site worked fine; the second site was down. I changed the URL via the WP Network Site Admin and created the equivalent subdomain entry, which allowed me to browse to the second site. Progress!
Main site : site.domain.uk
Second site : site2dev.domain.uk
Followed a ton of links and tried all the suggestions with no success. Went through the db with a forensic microscope in case a URL rename had been missed, all with no joy. Eventually I decided to create a new site and see what the results were. If it failed, I'd know it was more a WPMU problem than a second-site setup problem.
Setting up site3, it was created as site3.domain.domain.uk .. oh, a sub-subdomain. The light bulb went on, but I carried on. Site3 worked fine.
Changed my second site and its DNS entry to the sub-subdomain form (site2dev.domain.domain.uk) .. Golden.