Python multiprocessing synchronization

I have a function, "function", that I want to call 10 times using multiprocessing: 2 batches of 5 CPUs each.

So I need a way to synchronize the processes as described in the code below.

Is this possible without using a multiprocessing Pool? When I use one, I get strange errors (for example "UnboundLocalError: local variable 'fd' referenced before assignment", even though I have no such variable), and the processes seem to terminate randomly.

If possible, I would like to do this without a Pool. Thanks!

number_of_cpus = 5
number_of_iterations = 2

# An array for the processes.
processing_jobs = []

# Start 5 processes 2 times.
for iteration in range(0, number_of_iterations):

    # TODO SYNCHRONIZE HERE

    # Start 5 processes at a time.
    for cpu_number in range(0, number_of_cpus):

        # Calculate an offset for the current function call.
        file_offset = iteration * cpu_number * number_of_files_per_process

        p = multiprocessing.Process(target=function, args=(file_offset,))
        processing_jobs.append(p)
        p.start()

    # TODO SYNCHRONIZE HERE

Here is an (anonymized) traceback of the errors I get when I run the code in a Pool:

Process Process-5:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "python_code_3.py", line 88, in function_x
    xyz = python_code_1.function_y(args)
  File "/python_code_1.py", line 254, in __init__
    self.WK =  file.WK(filename)
  File "/python_code_2.py", line 1754, in __init__
    self.__parse__(name, data, fast_load)
  File "/python_code_2.py", line 1810, in __parse__
    fd.close()
UnboundLocalError: local variable 'fd' referenced before assignment

Most of the processes crash like this, but not all of them. More of them seem to crash as I increase the number of processes. I also thought it might be due to some memory limitation...

Answer

Here is how you can do the synchronization you are looking for without using a Pool:

import multiprocessing

def function(arg):
    print ("got arg %s" % arg)

if __name__ == "__main__":
    number_of_cpus = 5
    number_of_iterations = 2
    number_of_files_per_process = 10  # Example value; set this to match your data.

    # An array for the processes.
    processing_jobs = []

    # Start 5 processes 2 times.
    for iteration in range(number_of_iterations):

        # Start 5 processes at a time.
        for cpu_number in range(number_of_cpus):

            # Give each call a distinct offset: batches 0-4 on the first
            # iteration, batches 5-9 on the second.
            file_offset = (iteration * number_of_cpus + cpu_number) * number_of_files_per_process

            p = multiprocessing.Process(target=function, args=(file_offset,))
            processing_jobs.append(p)
            p.start()

        # Wait for all processes to finish.
        for proc in processing_jobs:
            proc.join()

        # Empty active job list.
        del processing_jobs[:]

        # Write file here
        print("Writing")

And here is how to do it with a Pool:

import multiprocessing

def function(arg):
    print ("got arg %s" % arg)

if __name__ == "__main__":
    number_of_cpus = 5
    number_of_iterations = 2
    number_of_files_per_process = 10  # Example value; set this to match your data.

    pool = multiprocessing.Pool(number_of_cpus)
    for i in range(number_of_iterations):
        # Give each of the 5 calls in this batch a distinct offset
        # (batches 0-4 on the first pass, 5-9 on the second).
        file_offsets = [(i * number_of_cpus + cpu_num) * number_of_files_per_process
                        for cpu_num in range(number_of_cpus)]
        pool.map(function, file_offsets)
        print("Writing")
        # Write file here

    # Shut the worker processes down cleanly once all iterations are done.
    pool.close()
    pool.join()
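
Note that pool.map blocks until every call in the current batch has returned, so each iteration gets the same synchronization point that the explicit join loop provides in the first version.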

As you can see, the Pool solution is nicer.

However, this will not fix your traceback problem. It is hard to say how to fix that without knowing what is actually causing it. You may need a multiprocessing.Lock to synchronize access to the shared resource.
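
For what it is worth, an UnboundLocalError on fd.close() like the one in your traceback usually means the earlier open() call inside __parse__ raised an exception (for example because too many files were open at once), so fd was never assigned by the time the cleanup code ran. If the real cause turns out to be several processes touching the same file at once, guarding the file access with a shared lock would look roughly like the sketch below. This is only an illustration under that assumption: the process_file helper and the shared.dat file are made up.

import multiprocessing

def process_file(lock, file_offset):
    # Only one process at a time may touch the shared file.
    with lock:
        with open("shared.dat", "rb") as fd:
            fd.seek(file_offset)
            data = fd.read(1024)
    # Do the heavy work outside the lock so the processes still run in parallel.
    print("offset %d read %d bytes" % (file_offset, len(data)))

if __name__ == "__main__":
    # Create a dummy shared file so the sketch is runnable on its own.
    with open("shared.dat", "wb") as f:
        f.write(b"\0" * 4096)

    lock = multiprocessing.Lock()
    jobs = []
    for offset in (0, 1024, 2048):
        p = multiprocessing.Process(target=process_file, args=(lock, offset))
        jobs.append(p)
        p.start()
    for p in jobs:
        p.join()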
