Question
I am running several cat | zgrep commands on a remote server and gathering their output individually for further processing:
import subprocess
import multiprocessing as mp

class MainProcessor(mp.Process):
    def __init__(self, peaks_array):
        super(MainProcessor, self).__init__()
        self.peaks_array = peaks_array

    def run(self):
        for peak_arr in self.peaks_array:
            peak_processor = PeakProcessor(peak_arr)
            peak_processor.start()

class PeakProcessor(mp.Process):
    def __init__(self, peak_arr):
        super(PeakProcessor, self).__init__()
        self.peak_arr = peak_arr

    def run(self):
        command = 'ssh remote_host cat files_to_process | zgrep --mmap "regex" '
        log_lines = subprocess.check_output(command, shell=True).split('\n')
        process_data(log_lines)
This, however, results in sequential execution of the subprocess('ssh ... cat ...') commands: the second peak waits for the first to finish, and so on.
How can I modify this code so that the subprocess calls run in parallel, while still being able to collect the output for each individually?
Answer
Another approach (rather than the other suggestion of putting shell processes in the background) is to use multithreading.
The run method that you have would then do something like this:
threading.Thread(target=myFuncThatDoesZGrep).start()
要收集结果,您可以执行以下操作:
To collect results, you can do something like this:
import threading

class MyThread(threading.Thread):
    def run(self):
        self.finished = False
        self.results = []
        # Your code to run the command here.
        blahBlah()
        # When finished....
        self.finished = True
Start the thread as shown above. When your thread object has myThread.finished == True, you can collect the results via myThread.results.
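Putting the pieces together, here is a minimal runnable sketch of this thread-based approach. The ssh/zgrep pipelines from the question are replaced with local echo commands so the sketch runs anywhere; the ZGrepThread class name and the command list are illustrative assumptions, not part of the original answer.

```python
import subprocess
import threading

class ZGrepThread(threading.Thread):
    """Runs one shell pipeline and stores its output lines."""

    def __init__(self, command):
        super(ZGrepThread, self).__init__()
        self.command = command
        self.finished = False
        self.results = []

    def run(self):
        # check_output blocks only this thread; the other threads keep running.
        output = subprocess.check_output(self.command, shell=True)
        self.results = output.decode().splitlines()
        self.finished = True

# Illustrative placeholder commands; in the question these would be
# 'ssh remote_host cat files_to_process | zgrep --mmap "regex"' pipelines.
commands = ['echo alpha', 'echo beta', 'echo gamma']

threads = [ZGrepThread(cmd) for cmd in commands]
for t in threads:
    t.start()      # all pipelines now run in parallel
for t in threads:
    t.join()       # wait for each pipeline to finish

for t in threads:
    print(t.results)   # each thread's output, collected individually
```

Because each thread keeps its own results list, the outputs stay separate even though the commands run concurrently; join() replaces the busy-polling on finished suggested above.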