Summary of common Celery configuration [number of worker processes and the maximum number of tasks executed by a single worker process]

The configuration I use:

#!/usr/bin/env python
import random
from kombu import serialization
from kombu import Exchange, Queue
import ansibleService
 
# Remove the pickle decoder so that pickle-serialized messages are rejected
serialization.registry._decoders.pop("application/x-python-serialize")

# Broker address and queue name are read from the Ansible RabbitMQ config file
broker_url = ansibleService.getConfig('/etc/ansible/rabbitmq.cfg', 'rabbit', 'broker_url')
celeryMq = ansibleService.getConfig('/etc/ansible/rabbitmq.cfg', 'celerymq', 'celerymq')

SECRET_KEY = 'top-secrity'
CELERY_BROKER_URL = broker_url
CELERY_RESULT_BACKEND = broker_url
CELERY_TASK_RESULT_EXPIRES = 1200        # task results expire after 20 minutes
CELERYD_PREFETCH_MULTIPLIER = 4          # each worker process prefetches up to 4 messages
CELERYD_CONCURRENCY = 1                  # number of concurrent worker processes
CELERYD_MAX_TASKS_PER_CHILD = 1          # recycle a worker process after every single task
CELERY_TIMEZONE = 'CST'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_QUEUES = (
    Queue(celeryMq, Exchange(celeryMq), routing_key=celeryMq),
)
CELERY_IGNORE_RESULT = True              # do not store task results
CELERY_SEND_EVENTS = False               # do not send task events to monitors
CELERY_EVENT_QUEUE_EXPIRES = 60
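
For context, here is a minimal sketch of how a settings module like the one above might be wired into a Celery application. The module name celeryconfig, the app name, and the task are hypothetical illustrations, not part of my project:

# app.py - hypothetical wiring; assumes the settings above are saved as celeryconfig.py
from celery import Celery

app = Celery('ansible_async')
app.config_from_object('celeryconfig')   # load the old-style CELERY_* settings shown above

@app.task
def run_playbook(playbook_path):
    # placeholder task body; real tasks would call into ansibleService here
    pass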

RabbitMQ is used as the message queue.

Number of concurrent worker processes: 25.

Each worker process executes at most one task before it is destroyed and recreated (the task runs to completion, then the process is torn down and rebuilt, freeing its memory).
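
A worker using that settings module might be started like this (the log path and app module name are illustrative; -c on the command line overrides CELERYD_CONCURRENCY from the settings module):

nohup celery -A app worker -c 25 --logfile=/var/log/ansible_celery.log -l INFO &

The second configuration I use, with Redis as the broker: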

# -*- coding:utf-8 -*-                                                                                                                                                  
from datetime import timedelta
from settings import REDIS_HOST, REDIS_PORT, REDIS_PASSWORD, REDIS_DB_NUM
 
 
# If a queue referenced by the program does not exist in the broker, create it automatically
CELERY_CREATE_MISSING_QUEUES = True
 
CELERY_IMPORTS = ("async_task.tasks", "async_task.notify")
 
# Use Redis as the task broker
BROKER_URL = 'redis://:' + REDIS_PASSWORD + '@' + REDIS_HOST + ':' + str(REDIS_PORT) + '/' + str(REDIS_DB_NUM)
 
#CELERY_RESULT_BACKEND = 'redis://:' + REDIS_PASSWORD + '@' + REDIS_HOST + ':' + str(REDIS_PORT) + '/10'
 
CELERYD_CONCURRENCY = 20  # Number of concurrent worker processes
 
CELERY_TIMEZONE = 'Asia/Shanghai'
 
CELERYD_FORCE_EXECV = True    # Very important: forces exec after fork, which prevents deadlocks in some cases
 
CELERYD_PREFETCH_MULTIPLIER = 1
 
CELERYD_MAX_TASKS_PER_CHILD = 100    # Recycle a worker process after it has executed 100 tasks, to prevent memory leaks
# CELERYD_TASK_TIME_LIMIT = 60    # A single task may not run longer than this many seconds, otherwise the worker process running it is killed with SIGKILL
# BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 90}
# If a dispatched task is not acknowledged within visibility_timeout seconds, it is handed to another worker for execution
CELERY_DISABLE_RATE_LIMITS = True
 
# Periodic (scheduled) tasks
CELERYBEAT_SCHEDULE = {
    'msg_notify': {
        'task': 'async_task.notify.msg_notify',
        'schedule': timedelta(seconds=10),
        #'args': (redis_db),
        'options' : {'queue':'my_period_task'}
    },
    'report_result': {
        'task': 'async_task.tasks.report_result',
        'schedule': timedelta(seconds=10),
        #'args': (redis_db),
        'options' : {'queue':'my_period_task'}
    },
    #'report_retry': {
    #    'task': 'async_task.tasks.report_retry',
    #    'schedule': timedelta(seconds=60),
    #    'options' : {'queue':'my_period_task'}
    #},
 
}
################################################
# Commands to start the scheduler and the worker
# *** beat (scheduler) ***
# nohup celery beat -s /var/log/boas/celerybeat-schedule  --logfile=/var/log/boas/celerybeat.log  -l info &
# *** worker ***
# nohup celery worker -f /var/log/boas/boas_celery.log -l INFO &
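
For reference, a minimal sketch of what the task modules listed in CELERY_IMPORTS might look like. Apart from the module and task names, which come from the schedule above, everything here is hypothetical:

# async_task/tasks.py - hypothetical sketch
from async_task.app import app   # assumed: a Celery() instance configured with the settings above

@app.task
def report_result():
    # placeholder body; beat sends this task to the my_period_task queue every 10 seconds
    pass

# async_task/notify.py would define msg_notify in the same way, so that
# 'async_task.notify.msg_notify' in CELERYBEAT_SCHEDULE resolves to it.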

The above is a summary of my work.

At the same time, a few other settings need to be explained:

CELERYD_TASK_TIME_LIMIT

BROKER_TRANSPORT_OPTIONS

Both must be used with great care. If CELERYD_TASK_TIME_LIMIT is set too low, the worker process will be killed before the task finishes. If the visibility_timeout in BROKER_TRANSPORT_OPTIONS is set too low, an unacknowledged task will be re-delivered and may be executed more than once.
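
A hedged illustration of how the two settings might be written; the numbers are placeholders only and must be sized to the longest task you actually run:

CELERYD_TASK_SOFT_TIME_LIMIT = 300   # example value: raises SoftTimeLimitExceeded inside the task first
CELERYD_TASK_TIME_LIMIT = 360        # example value: hard limit, after which the worker process is killed with SIGKILL
# visibility_timeout should exceed the longest expected task duration,
# otherwise the broker re-delivers the unacknowledged task to another worker
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}   # example value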

Tags: Programming JSON Redis Python ansible

Posted on Fri, 08 Nov 2019 12:31:05 -0500 by Paddy