multiprocessing.pool.map and a function with two arguments


Question

I am using multiprocessing.Pool().

Here is the function I want to run in the pool:

def insert_and_process(file_to_process,db):
    db = DAL("path_to_mysql" + db)
    # Table definitions
    db.table.insert(**parse_file(file_to_process))
    return True

if __name__=="__main__":
    file_list=os.listdir(".")
    P = Pool(processes=4)
    P.map(insert_and_process,file_list,db) # here having problem: map() does not take an extra argument like this

I want to pass 2 arguments. What I want to do is to initialize only 4 DB connections (the code above would try to create a connection on every function call, so possibly millions of them, freezing the IO to death). If I can create 4 DB connections, one for each process, it will be OK.

Is there any solution with Pool, or should I abandon it?
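(A commonly used answer to exactly this wish, not shown in the thread below, is Pool's initializer/initargs arguments, which run a setup function once per worker process. A minimal self-contained sketch, with a stand-in DAL class replacing the real one from the question:)

```python
from multiprocessing import Pool

class DAL:
    """Stand-in for the question's DAL class (assumption)."""
    def __init__(self, uri):
        self.uri = uri

def init_worker(db_uri):
    # Runs once in each worker process; the connection lives in a
    # module-level global, so every task in that worker reuses it.
    global db
    db = DAL(db_uri)

def insert_and_process(file_to_process):
    # `db` was created by init_worker: one connection per process.
    return (file_to_process, db.uri)

if __name__ == "__main__":
    files = ["f1", "f2", "f3", "f4"]
    with Pool(processes=2, initializer=init_worker,
              initargs=("path_to_mysql/database",)) as P:
        print(P.map(insert_and_process, files))
```

With 4 processes, this opens exactly 4 connections no matter how many files are mapped.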

With the help of both of you, I got this:

args=zip(f,cycle(dbs))
Out[-]: 
[('f1', 'db1'),
 ('f2', 'db2'),
 ('f3', 'db3'),
 ('f4', 'db4'),
 ('f5', 'db1'),
 ('f6', 'db2'),
 ('f7', 'db3'),
 ('f8', 'db4'),
 ('f9', 'db1'),
 ('f10', 'db2'),
 ('f11', 'db3'),
 ('f12', 'db4')]
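The pairing shown above can be reproduced with itertools.cycle (the import the snippet above leaves implicit); a minimal sketch with shorter lists:

```python
from itertools import cycle

files = ["f1", "f2", "f3", "f4", "f5"]
dbs = ["db1", "db2", "db3", "db4"]

# zip stops at the shorter input; cycle(dbs) repeats db1..db4 forever,
# so every file is paired with a connection, round-robin.
args = list(zip(files, cycle(dbs)))
print(args)
# → [('f1', 'db1'), ('f2', 'db2'), ('f3', 'db3'), ('f4', 'db4'), ('f5', 'db1')]
```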

So here is how it is going to work: I am going to move the DB connection code out to the main level and do this:

def process_and_insert(args):
    # Table definitions
    args[1].table.insert(**parse_file(args[0]))
    return True

if __name__=="__main__":
    file_list = os.listdir(".")
    P = Pool(processes=4)

    dbs = [DAL("path_to_mysql/database") for i in range(4)]
    args = zip(file_list, cycle(dbs))
    P.map(process_and_insert, args)  # was calling insert_and_process, which no longer exists

Yeah, I am going to test it out and let you guys know.

Accepted Answer

The Pool documentation does not describe a way of passing more than one parameter to the target function - I've tried just passing a sequence, but it does not get unfolded (one item of the sequence for each parameter).

However, you can write your target function to expect its first (and only) parameter to be a tuple, in which each element is one of the parameters you are expecting:

from itertools import repeat

def insert_and_process((file_to_process,db)):
    db = DAL("path_to_mysql" + db)
    # Table definitions
    db.table.insert(**parse_file(file_to_process))
    return True

if __name__=="__main__":
    file_list=os.listdir(".")
    P = Pool(processes=4)
    P.map(insert_and_process,zip(file_list,repeat(db))) 

(Note the extra parentheses in the definition of insert_and_process - Python treats that single parameter as a 2-item sequence. The first element of the sequence is bound to the first variable, and the other to the second.)
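Note that this tuple-parameter syntax only works in Python 2; it was removed in Python 3 (PEP 3113). There you either unpack inside the function body, or use Pool.starmap, which splats each tuple into separate positional arguments. A minimal Python 3 sketch, with the DB work replaced by a stand-in return value:

```python
from multiprocessing import Pool

def insert_and_process(file_to_process, db):
    # Pool.starmap passes each tuple as separate positional arguments,
    # so no manual unpacking is needed.
    return (file_to_process, db)

if __name__ == "__main__":
    pairs = [("f1", "db1"), ("f2", "db2")]
    with Pool(processes=2) as P:
        print(P.starmap(insert_and_process, pairs))
        # → [('f1', 'db1'), ('f2', 'db2')]
```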

