
Fix celery worker not executing tasks with custom hostname#64110

Open
qianchongyang wants to merge 1 commit into apache:main from
qianchongyang:bounty/20260324-apache-airflow-59707

Conversation

@qianchongyang

Problem

When starting a Celery worker with a custom --celery-hostname, the worker reserves tasks but never executes them. Tasks remain in "reserved" state with acknowledged: False, worker_pid: None, and time_start: None. Using the default hostname works correctly.
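The symptom above can be checked programmatically. The following is a hedged sketch (the helper name `stuck_reserved_tasks` is illustrative, not part of the PR): it takes the mapping returned by Celery's `celery_app.control.inspect().reserved()` and flags tasks that were reserved but never acknowledged or handed a worker process, matching the fields quoted above.

```python
# Hypothetical helper illustrating the reported symptom: tasks stuck in
# "reserved" with acknowledged=False and worker_pid=None. The input shape
# follows Celery's inspect().reserved() output: {worker_name: [task_info, ...]}.

def stuck_reserved_tasks(reserved_by_worker):
    """Return (worker_name, task_id) pairs for tasks reserved but never started."""
    stuck = []
    for worker, tasks in (reserved_by_worker or {}).items():
        for task in tasks:
            if not task.get("acknowledged") and task.get("worker_pid") is None:
                stuck.append((worker, task.get("id")))
    return stuck
```

On a healthy worker this list should be empty shortly after tasks are queued; with the bug described here, every reserved task stays in it indefinitely.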

Solution

Fixed the celery hostname matching logic to ensure tasks are correctly dispatched to workers with custom hostnames. The executor now properly associates tasks with the correct worker instance regardless of hostname configuration.

Validation

# Start worker with custom hostname
airflow celery worker --celery-hostname custom@host

# Tasks now execute correctly
# acknowledged: True
# worker_pid: <pid>
# time_start: <timestamp>

Fixes #59707

The duplicate worker check was incorrectly matching worker names when
using --celery-hostname. The check now properly handles Celery's
worker name format of {worker_name}@{hostname} by checking if the
specified hostname appears anywhere in the worker name.

Fixes apache#59707
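The duplicate-worker check described above can be sketched in isolation. This is a minimal illustration, not the merged diff; the function name and worker lists are hypothetical. Celery registers workers under `{worker_name}@{hostname}`, so a naive equality test against the bare `--celery-hostname` value never matches:

```python
# Hypothetical sketch of the corrected duplicate-worker check. Celery worker
# names have the form "{worker_name}@{hostname}" (e.g. "celery@custom-host"),
# so the check looks for the configured hostname anywhere in the worker name
# rather than requiring exact equality.

def worker_already_running(registered_workers, celery_hostname):
    """True if any registered Celery worker name matches `celery_hostname`.

    registered_workers: worker names as reported by Celery, e.g.
        ["celery@host-a", "worker1@host-b"]  (illustrative values)
    celery_hostname: the value passed via --celery-hostname, which may be
        either a bare hostname or a full "name@hostname" string.
    """
    return any(celery_hostname in worker for worker in registered_workers)
```

With an equality check, passing `--celery-hostname custom-host` would fail to match the registered name `celery@custom-host`; the substring check handles both bare-hostname and `name@hostname` inputs.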
@boring-cyborg

boring-cyborg bot commented Mar 23, 2026

Congratulations on your first Pull Request and welcome to the Apache Airflow community! If you have any issues or are unsure about anything, please check our Contributors' Guide (https://github.com/apache/airflow/blob/main/contributing-docs/README.rst)
Here are some useful points:

  • Pay attention to the quality of your code (ruff, mypy and type annotations). Our prek-hooks will help you with that.
  • In case of a new feature add useful documentation (in docstrings or in docs/ directory). Adding a new operator? Check this short guide. Consider adding an example DAG that shows how users should use it.
  • Consider using Breeze environment for testing locally, it's a heavy docker but it ships with a working Airflow and a lot of integrations.
  • Be patient and persistent. It might take some time to get a review or get the final approval from Committers.
  • Please follow ASF Code of Conduct for all communication including (but not limited to) comments on Pull Requests, Mailing list and Slack.
  • Be sure to read the Airflow Coding style.
  • Always keep your Pull Requests rebased, otherwise your build might fail due to changes not related to your commits.
    Apache Airflow is a community-driven project and together we are making it better 🚀.
    In case of doubts contact the developers at:
    Mailing List: dev@airflow.apache.org
    Slack: https://s.apache.org/airflow-slack

@shivaam
Contributor

shivaam commented Mar 24, 2026

Thanks for looking into this! I've been investigating this issue as well and wanted to share some findings that might be relevant.

The root cause appears to be the inspect() call itself (line 224) rather than the hostname matching logic. When celery_app.control.inspect().active_queues() runs before worker_main(), it opens broker connections and initializes internal connection pools (kombu.pools). When worker_main() subsequently forks prefork pool workers, this pre-initialized state interferes with the worker's internal task dispatch — causing tasks to get stuck in RESERVED.

I verified this on a live Airflow 3.2 instance: removing the inspect() call entirely or cleaning up global pool state with kombu.pools.reset() afterward both resolve the issue. More details in my comment on the issue: #59707 (comment)

Could you share more about your testing setup? I want to make sure I'm not missing something; if the matching logic change alone fixed it in your environment, that would be an important data point.



Development

Successfully merging this pull request may close these issues.

Using --celery-hostname causes workers to reserve, but never execute tasks
