Coming soon in #Fedify 1.5.0: Smart fan-out for efficient activity delivery!
After getting feedback about our queue design, we're excited to introduce a significant improvement for accounts with large follower counts.
As we discussed in our previous post, Fedify currently creates separate queue messages for each recipient. While this approach offers excellent reliability and individual retry capabilities, it causes performance issues when sending activities to thousands of followers.
Our solution? A new two-stage “fan-out” approach:
- When you call Context.sendActivity(), we'll now enqueue just one consolidated message containing your activity payload and recipient list
- A background worker then processes this message and re-enqueues individual delivery tasks (sketched below)
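Here's a rough sketch of the idea in TypeScript (the queue abstraction, message shapes, and function names are illustrative only, not Fedify's internal API):

// Minimal queue abstraction, just for the sketch.
interface Queue {
  enqueue(message: unknown): Promise<void>;
}

// Stage 1: one consolidated message; the activity payload is stored once,
// no matter how many inboxes it targets.
interface FanoutMessage {
  activity: unknown;
  inboxes: string[];
}

// Stage 2 output: one message per inbox, the unit that is retried
// independently on failure.
interface DeliveryMessage {
  activity: unknown;
  inbox: string;
}

// Called from the web request: enqueue a single message and return right away.
async function enqueueFanout(
  queue: Queue,
  activity: unknown,
  inboxes: string[],
): Promise<void> {
  const message: FanoutMessage = { activity, inboxes };
  await queue.enqueue(message);
}

// Run by a background worker: expand the consolidated message into
// individual delivery tasks, preserving per-inbox retries.
async function expandFanout(queue: Queue, message: FanoutMessage): Promise<void> {
  for (const inbox of message.inboxes) {
    const delivery: DeliveryMessage = { activity: message.activity, inbox };
    await queue.enqueue(delivery);
  }
}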
The benefits are substantial:
- Context.sendActivity() returns almost instantly, even for massive follower counts
- Memory usage is dramatically reduced by avoiding payload duplication
- UI responsiveness improves since web requests complete quickly
- The same reliability for individual deliveries is maintained
For developers with specific needs, we're adding a fanout option with three settings:
- "auto" (default): Uses fan-out for large recipient lists, direct delivery for small ones
- "skip": Bypasses fan-out when you need a different payload per recipient
- "force": Always uses fan-out, even with few recipients
// Example with custom fanout setting
await ctx.sendActivity(
  { identifier: "alice" },
  recipients,
  activity,
  { fanout: "skip" }  // Directly enqueues individual messages
);
This change represents months of performance testing and should make Fedify work beautifully even for extremely popular accounts!
For more details, check out our docs.
What other #performance optimizations would you like to see in future Fedify releases?
#ActivityPub #fedidev

@fedify@hollo.social
Got an interesting question today about #Fedify's outgoing #queue design!
Some users noticed we create separate queue messages for each recipient inbox rather than queuing a single message and handling the splitting later. There's a good reason for this approach.
In the #fediverse, server response times vary dramatically—some respond quickly, others slowly, and some might be temporarily down. If we processed deliveries in a single task, the entire batch would be held up by the slowest server in the group.
By creating individual queue items for each recipient (sketched below):
- Fast servers get messages delivered promptly
- Slow servers don't delay delivery to others
- Failed deliveries can be retried independently
- Your UI remains responsive while deliveries happen in the background
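Conceptually, the two designs look something like this (an illustrative TypeScript sketch; the queue interface and message shapes are hypothetical, not Fedify's actual code):

// Minimal queue abstraction, just for the sketch.
interface Queue {
  enqueue(message: unknown): Promise<void>;
}

// Fedify's approach: one queue item per recipient inbox. Each delivery
// succeeds, fails, and retries independently, so a slow or unreachable
// server only holds up its own item.
async function enqueuePerRecipient(
  queue: Queue,
  activity: unknown,
  inboxes: string[],
): Promise<void> {
  for (const inbox of inboxes) {
    await queue.enqueue({ activity, inbox });
  }
}

// The alternative: a single queue item for all recipients. The worker that
// picks it up delivers sequentially, so every inbox waits on the slowest
// server, and one failure forces the whole batch to be retried.
async function enqueueBatch(
  queue: Queue,
  activity: unknown,
  inboxes: string[],
): Promise<void> {
  await queue.enqueue({ activity, inboxes });
}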
It's a classic trade-off: we generate more queue messages, but gain better resilience and user experience in return.
This is particularly important in federated networks where server behavior is unpredictable and outside our control. We'd rather optimize for making sure your posts reach their destinations as quickly as possible!
What other aspects of Fedify's design would you like to hear about? Let us know!
#ActivityPub #fedidev

Image description (ALT text):
A flowchart comparing two approaches to message queue design. The top half shows “Fedify's Current Approach” where a single sendActivity call creates separate messages for each recipient, which are individually queued and processed independently. This results in fast delivery to working recipients while slow servers only affect their own delivery. The bottom half shows an “Alternative Approach” where sendActivity creates a single message with multiple recipients, queued as one item, and processed sequentially. This results in all recipients waiting for each delivery to complete, with slow servers blocking everyone in the queue.