Implement token bucket duty cycle enforcement#1297

Merged
ripplebiz merged 3 commits into meshcore-dev:dev from ViezeVingertjes:feature/duty-cycle-token-bucket
Mar 8, 2026

Conversation

@ViezeVingertjes
Contributor

@ViezeVingertjes ViezeVingertjes commented Dec 31, 2025

Replaces per-transmission delay with a rolling window token bucket for better regulatory compliance.

Resolves #817.

@ViezeVingertjes
Contributor Author

We had a discussion in the Dutch community today. Some people didn't care much, others wanted a proper implementation. I'm neutral either way (not even sure if I enabled it myself). Perhaps this could start a broader discussion to decide it once and for all.

@ViezeVingertjes ViezeVingertjes marked this pull request as ready for review December 31, 2025 17:46
@LitBomb
Contributor

LitBomb commented Dec 31, 2025

How does this work for the USA, where there is no duty cycle limit?

@ViezeVingertjes
Contributor Author

ViezeVingertjes commented Dec 31, 2025

> How does this work for the USA, where there is no duty cycle limit?

The same as it works currently: the airtime factor already limits it to the same amount, just less reliably. So both now and with this method, they would need to disable it (set af 0).

@LitBomb
Contributor

LitBomb commented Dec 31, 2025

Whatever the change is, it should not limit the duty cycle at any percentage for regions that don't have a duty cycle limit.

@ViezeVingertjes
Contributor Author

ViezeVingertjes commented Dec 31, 2025

> Whatever the change is, it should not limit the duty cycle at any percentage for regions that don't have a duty cycle limit.

It's probably higher than the current cap you have. But that's a valid point for another issue: whether there should be a limit by default, and what it should be. Perhaps it should be set along with the region or something.

@recrof
Collaborator

recrof commented Jan 4, 2026

@ViezeVingertjes I welcome a proper implementation of an hourly duty cycle for Europe, but this needs to be set to 0 by default. We don't want the US, AU, NZ, and other countries to be limited to 10% by default.

@ViezeVingertjes
Contributor Author

ViezeVingertjes commented Jan 4, 2026

> @ViezeVingertjes I welcome a proper implementation of an hourly duty cycle for Europe, but this needs to be set to 0 by default. We don't want the US, AU, NZ, and other countries to be limited to 10% by default.

Hmm, but the default AF was 1.0, right? So I feel 10% would reflect the original 'duty cycle' more than 0% would.
But disabled would be fine with me too: eec5ef0

In practice it should already transmit more than with the original AF, so if anything, it shouldn't be less after this change.

@recrof
Collaborator

recrof commented Jan 4, 2026

> Hmm, but the default AF was 1.0, right? So I feel 10% would reflect the original 'duty cycle' more than 0% would. But disabled would be fine with me too: eec5ef0

Yes, you are right. AF=0 can be dangerous... AF=1 is optimal, sorry for that.

@ViezeVingertjes
Contributor Author

> Hmm, but the default AF was 1.0, right? So I feel 10% would reflect the original 'duty cycle' more than 0% would. But disabled would be fine with me too: eec5ef0
>
> Yes, you are right. AF=0 can be dangerous... AF=1 is optimal, sorry for that.

Haha, check! I thought maybe my train of thought was wrong; reverted it -> 1b959a9

@recrof
Collaborator

recrof commented Jan 4, 2026

> Haha, check! I thought maybe my train of thought was wrong; reverted it -> 1b959a9

You reverted it to 9, though? We need it at 1.

@ViezeVingertjes ViezeVingertjes force-pushed the feature/duty-cycle-token-bucket branch from 00de1a2 to a43b0e5 Compare January 4, 2026 19:07
@ViezeVingertjes
Contributor Author

> Haha, check! I thought maybe my train of thought was wrong; reverted it -> 1b959a9
>
> You reverted it to 9, though? We need it at 1.

Yes; I went off the numbers I saw on my repeaters, which maxed out at 9%, but there are more things at play on those. Updated it to 1.0 and cleaned up the commits. a43b0e5

@fschrempf
Contributor

@ViezeVingertjes You can simply drop the last two commits, right? First you change it from 1 to 9 and then back from 9 to 1. Except for simple_secure_chat, which was at 2 for some reason and might want to be aligned with the others.

@ViezeVingertjes ViezeVingertjes force-pushed the feature/duty-cycle-token-bucket branch from a43b0e5 to eb4fa03 Compare January 4, 2026 20:33
@ViezeVingertjes
Contributor Author

> @ViezeVingertjes You can simply drop the last two commits, right? First you change it from 1 to 9 and then back from 9 to 1. Except for simple_secure_chat, which was at 2 for some reason and might want to be aligned with the others.

Done.

@LitBomb
Contributor

LitBomb commented Jan 5, 2026

How does a user know when they exceed the airtime factor limit? Just a failed send on the radio? Is there any error code returned to the clients, so the clients can put up error messages to let users know?

@ViezeVingertjes
Contributor Author

> How does a user know when they exceed the airtime factor limit? Just a failed send on the radio? Is there any error code returned to the clients, so the clients can put up error messages to let users know?

Nothing changed in that behavior; the new calculation allows bursts when there is budget, and otherwise it acts like the 'old' AF waits, since it's a sliding window. So it's much more responsive while still utilizing the queue etc. as it did before. I'm not aware of any error code or feedback being returned.

Obviously you won't notice a difference when no duty cycle applies in your country (or you have it disabled anyway, like most people), but with the default it feels like it responds almost instantly most of the time, until the budget is depleted; then it feels like the current release does.

@marcelverdult
Contributor

Thanks so much for implementing my request!

@marcelverdult
Contributor

@liamcottle @andymux this would be quite important for countries with strict duty cycle limits. The current implementation with the waits throws away valuable TX time.

mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Jan 10, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Jan 13, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Jan 14, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Jan 26, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Jan 26, 2026
@marcelverdult
Contributor

@liamcottle this one would be very important for Europe as well.

@recrof
Collaborator

recrof commented Jan 27, 2026

@ripplebiz are there any roadblocks preventing merging this request?

mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Jan 28, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Feb 21, 2026
@weebl2000
Contributor

Running this for a while now, I feel it's a no-brainer. The token bucket is better in practice and adheres to the duty cycle more accurately.

mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Feb 28, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 3, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 4, 2026
@weebl2000
Contributor

weebl2000 commented Mar 5, 2026

@liamcottle Let's merge this? Some arguments below:

  1. The current duty cycle implementation is fundamentally broken for EU compliance.
    The per-packet delay approach doesn't actually enforce a duty cycle; it enforces a spacing. If a node is idle for 50 minutes and then sends a burst, the delay model has no memory of that idle time. Conversely, after a single short packet it imposes an unnecessarily long silence, even though the node is nowhere near the hourly limit. The token bucket correctly models the regulatory requirement: X% of airtime within a rolling window.

  2. Huge throughput improvement for bursty workloads, i.e. room servers/repeaters.
    The current model forces AF × airtime of silence after every packet, even if the node has been idle for an hour. The token bucket lets nodes use their accumulated budget for bursts, then naturally throttles them once it is depleted. User @JvM-nl confirmed: "With this I got almost all the time a heard in flood. Before I needed a few re-sends."

  3. The code is well reviewed and field tested. I've been running this for 2 months, as have many others, and it's good to go.

  4. For anyone who doesn't care about duty cycle, nothing really changes: set af 0 still disables enforcement altogether, and the existing CLI is unchanged.

  5. The token bucket is the textbook algorithm for rate limiting. The current per-packet delay is a bad approximation that both over-restricts (blocking bursts) and under-enforces (no actual rolling window).

  6. The changes are contained: literally all the logic changes are handled in Dispatcher. There are no new dependencies, no wire format changes, and no client protocol changes; it's fully backwards compatible. The only new thing is getRemainingTxBudget(), and it's additive to the rest.

  7. This PR has been open for 2+ months without any blocking objections; all review commentary has been addressed.

tl;dr: this replaces a fundamentally incorrect duty cycle model with the standard token bucket algorithm, which actually enforces the regulatory requirement and actually improves throughput for bursty workloads, while being more compliant, not less. The diff is tiny at +71, -14 LOC.

Please let's merge this, it's time.
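The rolling-window behavior described in point 1 can be sketched roughly as follows. This is an illustrative Python sketch, not the PR's actual C++ Dispatcher code; the class and method names are hypothetical:

```python
import time

class DutyCycleBucket:
    """Illustrative rolling-window duty cycle limiter (not the PR's code).

    Bursts are allowed as long as total airtime within the trailing
    window stays under duty_cycle_pct percent of the window length.
    """
    def __init__(self, duty_cycle_pct: float, window_secs: float = 3600.0):
        self.budget_secs = window_secs * duty_cycle_pct / 100.0
        self.window_secs = window_secs
        self.history = []  # (timestamp, airtime_secs) of past transmissions

    def _prune(self, now: float) -> None:
        # Drop transmissions that have aged out of the rolling window
        cutoff = now - self.window_secs
        self.history = [(t, a) for (t, a) in self.history if t >= cutoff]

    def can_send(self, airtime_secs: float, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        self._prune(now)
        used = sum(a for _, a in self.history)
        return used + airtime_secs <= self.budget_secs

    def record_send(self, airtime_secs: float, now: float = None) -> None:
        now = time.monotonic() if now is None else now
        self.history.append((now, airtime_secs))

# 10% of a 1-hour window gives a 360 s airtime budget
bucket = DutyCycleBucket(10, window_secs=3600)
assert bucket.can_send(300, now=0)        # burst fits within the budget
bucket.record_send(300, now=0)
assert not bucket.can_send(100, now=10)   # 300 + 100 exceeds 360, throttled
assert bucket.can_send(100, now=3601)     # earlier airtime aged out of window
```

Unlike a per-packet delay, an idle node accumulates budget it can spend in a burst, and a busy node is throttled against the actual trailing window rather than by fixed spacing.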

@liamcottle
Member

> @liamcottle Let's merge this? Some arguments below:

This will also need review from @ripplebiz, as he's the core maintainer of the Dispatcher system.

We are looking to do a v1.14.0 release later tonight, and I don't want to merge any new PRs with a large footprint right now. The dev branch seems pretty stable at the moment.

One thing I'd like to see is a simpler command, such as set dutycycle 10 for 10% and set dutycycle 50 for 50%.

The airtime factor calculations have always confused me, and when I initially added support for them in the app, I removed it because it didn't make sense in the user interface. Having a cleaner command would make the user experience nicer.

We can probably look at this properly after v1.14.0 is released.

@weebl2000
Contributor

My 2 cents would be to merge it into v1.14.0 😋. I'm really confident it's only going to improve the mesh as a whole.

@4np

4np commented Mar 5, 2026

Let me chip in another 2 cents: I agree with @liamcottle that a dedicated set dutycycle x would make sense, as set af is confusing, but that can be implemented later. It would be good to have this merged before v1.14.0 is tagged, as it would, as @weebl2000 explained, solve a number of existing issues that are probably bringing our (± 3212-node) Dutch mesh (and other large meshes?) to its knees.

Right now you often need to resend messages. Because people think they are not heard, they use #bot and/or #test to check, causing massive unnecessary chatter from several bots and a lot of mesh overhead. If users see their messages get repeated, they are less likely to 'test', which in turn relieves the mesh.

mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 5, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 5, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 5, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 5, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 5, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 5, 2026
@fschrempf
Contributor

Now that 1.14 is out, I would suggest merging this sooner rather than later. That way we can get additional test coverage from people running dev until the next release is tagged.

@recrof
Collaborator

recrof commented Mar 6, 2026

> Now that 1.14 is out, I would suggest merging this sooner rather than later. That way we can get additional test coverage from people running dev until the next release is tagged.

Agreed. @liamcottle @ripplebiz, can we make it happen, please?

mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 6, 2026
@liamcottle
Member

liamcottle commented Mar 7, 2026

If this one can't be resolved before merging:

> One thing I'd like to see is a simpler command, such as set dutycycle 10 for 10% and set dutycycle 50 for 50%.

Can someone please provide a map/list of airtime factor values and how they map to duty cycle values?

Unless the user experience around this can be improved, it's unlikely I would add UI for it to the app. Users should be working with duty cycle numbers, since that's what's described in radio regulations etc.

Regarding merging, I'd like for @ripplebiz to review the code first. I'll link this to him in a DM as well.

@weebl2000
Contributor

> If this one can't be resolved before merging:
>
> One thing I'd like to see is a simpler command, such as set dutycycle 10 for 10% and set dutycycle 50 for 50%.
>
> Can someone please provide a map/list of airtime factor values and how they map to duty cycle values?
>
> Unless the user experience around this can be improved, it's unlikely I would add UI for it to the app. Users should be working with duty cycle numbers, since that's what's described in radio regulations etc.
>
> Regarding merging, I'd like for @ripplebiz to review the code first. I'll link this to him in a DM as well.

I think it's good to replace af with dutycycle, but let's do that in a separate PR. This one has been open for 2 months and is good to go as it is; if we start changing things we need to test it all over again.


@recrof
Collaborator

recrof commented Mar 7, 2026

> If this one can't be resolved before merging:
>
> One thing I'd like to see is a simpler command, such as set dutycycle 10 for 10% and set dutycycle 50 for 50%.
>
> Can someone please provide a map/list of airtime factor values and how they map to duty cycle values?

af vs duty cycle examples:

af 1: dc 50%
af 2: dc 33%
af 3: dc 25%
...
af 9: dc 10%

dc = 100 / (af + 1)
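The mapping above and its inverse (e.g. for a future set dutycycle command) can be captured in a pair of small helpers. This is a sketch with hypothetical function names, not code from the PR:

```python
def af_to_duty_cycle(af: float) -> float:
    """Duty cycle percentage implied by an airtime factor:
    1 unit of TX followed by `af` units of silence."""
    return 100.0 / (af + 1.0)

def duty_cycle_to_af(dc_pct: float) -> float:
    """Inverse mapping: airtime factor for a given duty cycle percentage."""
    return 100.0 / dc_pct - 1.0

# Matches the table above
assert af_to_duty_cycle(1) == 50.0   # af 1 -> dc 50%
assert af_to_duty_cycle(9) == 10.0   # af 9 -> dc 10%
assert duty_cycle_to_af(10) == 9.0   # dc 10% -> af 9
assert round(af_to_duty_cycle(2)) == 33
```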

@weebl2000
Contributor

I can make a PR to add a dutycycle command and deprecate af.

@jbrazio
Contributor

jbrazio commented Mar 7, 2026

I would merge this as it is.
In another PR we can then handle the different behavior of the new command, because it's not only the logic that will be reversed (i.e. af 9 vs dc 0.1 or dc 10); it would also be great for the app to support it under the radio settings and radio profiles.

@recrof
Collaborator

recrof commented Mar 7, 2026

I'm also for merging this one, as it silently fixes all repeaters that have af=9 and take a performance hit because of how it's implemented.

@weebl2000
Contributor

See #1961, which builds on this PR and adds a get/set dutycycle command.

@dreirund
Contributor

dreirund commented Mar 7, 2026

> One thing I'd like to see is a simpler command, such as set dutycycle 10 for 10% and set dutycycle 50 for 50%.
>
> The airtime factor calculations have always confused me

Thinking of the "Airtime Factor" as the number of time units to stay "silent", I can grasp it intuitively:

E.g. when I am only allowed to send 20% of the time, i.e. only 1/5th, I may send for 1 time unit and must stay silent for the remaining 4. So, af=4.

Regards!

mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 8, 2026
mattzzw added a commit to mattzzw/MeshCore-Evo that referenced this pull request Mar 8, 2026
@ripplebiz ripplebiz merged commit cf0cc85 into meshcore-dev:dev Mar 8, 2026