Problem description
I am very new to Celery and here is the question I have:
Suppose I have a script that is constantly supposed to fetch new data from DB and send it to workers using Celery.
tasks.py
# Celery Task
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def process_data(x):
    # Do something with x
    pass
fetch_db.py
# Fetch new data from DB and dispatch to workers.
from time import sleep

from tasks import process_data

while True:
    # Run DB query here to fetch new data from DB into fetched_data
    process_data.delay(fetched_data)
    sleep(30)
Here is my concern: the data is fetched every 30 seconds, but the process_data() function could take much longer, and depending on the number of workers (especially if there are too few) the queue might get throttled as I understand.
- I can't increase the number of workers.
- I could modify the code so it stops feeding the queue when it is full.
The question is how do I set queue size and how do I know it is full? In general, how to deal with this situation?
Recommended answer
It's possible to set the RabbitMQ x-max-length argument when predeclaring the queue using kombu.
Example:
import time

from celery import Celery
from kombu import Queue, Exchange


class Config(object):
    BROKER_URL = "amqp://guest@localhost//"

    CELERY_QUEUES = (
        Queue(
            'important',
            exchange=Exchange('important'),
            routing_key="important",
            queue_arguments={'x-max-length': 10}
        ),
    )


app = Celery('tasks')
app.config_from_object(Config)


@app.task(queue='important')
def process_data(x):
    pass
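Note that when x-max-length is reached, RabbitMQ drops messages from the head of the queue (the oldest ones) by default. If your RabbitMQ is 3.7 or newer, you can also pass an 'x-overflow' argument to refuse new publishes instead, e.g. change only the arguments dict in the declaration above:

    queue_arguments={'x-max-length': 10, 'x-overflow': 'reject-publish'}

With 'reject-publish' the newest messages are dropped (and nacked if publisher confirms are enabled) rather than pushing the oldest ones out.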
Or using policies:
rabbitmqctl set_policy Ten "^one-meg$" '{"max-length-bytes":1000000}' --apply-to queues
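As for knowing when the queue is full so the feeder can stop dispatching: the producer can ask the broker for the current queue depth with a passive declare before calling delay(). Below is a minimal sketch, assuming the RabbitMQ broker and the configuration above live in tasks.py (exposing app and process_data); queue_length and MAX_QUEUE_LENGTH are illustrative names, not Celery API.

    from tasks import app, process_data

    MAX_QUEUE_LENGTH = 10  # keep in sync with the x-max-length argument above

    def queue_length(name='important'):
        # A passive declare does not create the queue; it only asks the broker
        # for its current state and returns (queue, message_count, consumer_count).
        with app.connection_or_acquire() as conn:
            _, message_count, _ = conn.default_channel.queue_declare(
                queue=name, passive=True)
        return message_count

    # In the fetch loop, only dispatch when there is room:
    if queue_length() < MAX_QUEUE_LENGTH:
        process_data.delay(fetched_data)

From the shell you can check the same thing with: rabbitmqctl list_queues name messages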