Symptom
When a certain business Python process is restarted or stopped, it occasionally hangs. ps then shows that the process's parent is PID 1, i.e. it has been reparented to init.
Investigation
1. Confirm the process is stuck
Judging by its state, the process is not a zombie either, so the only remaining possibility is that it is stuck. strace confirms this: the process is blocked in a futex system call.
> strace -f -p 1623757
strace: Process 1623757 attached
futex(0x560f8a250fb0, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY)
2. Locate the cause with gdb
gdb can debug a Python process directly; install the following packages via apt first:
- python2.7-dbg
- python2.7-dev
Attach to the running process with:
gdb python 1623757
- Use py-bt to print the Python call stack. The process has only one thread, and it is stuck acquiring a lock in the logging module:
(gdb) info thread
Id Target Id Frame
* 1 Thread 0x7fe3b34ee740 (LWP 1623757) "[program_name]" futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x560f8a250fb0) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
(gdb) thread 1
[Switching to thread 1 (Thread 0x7fe3b34ee740 (LWP 1623757))]
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x560f8a250fb0) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
205 in ../sysdeps/unix/sysv/linux/futex-internal.h
(gdb) py-bt
Traceback (most recent call first):
Waiting for the GIL
(frame information optimized out)
File "/usr/lib/python2.7/logging/__init__.py", line 715, in acquire
self.lock.acquire()
File "/usr/lib/python2.7/logging/__init__.py", line 764, in handle
self.acquire()
File "/usr/lib/python2.7/logging/__init__.py", line 1343, in callHandlers
hdlr.handle(record)
File "/usr/lib/python2.7/logging/__init__.py", line 1303, in handle
self.callHandlers(record)
(frame information optimized out)
(frame information optimized out)
...
Use bt to print the C call stack. The output is long and has been truncated here; only the topmost frames matter. Frame #4 shows that lock_PyThread_acquire_lock calls PyThread_acquire_lock (frame #3), which ends up blocked in futex_abstimed_wait_cancelable,
i.e. the futex system call. This matches what strace showed: the process is blocked trying to acquire a lock, so this is a synchronization problem.
(gdb) bt
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x560f8a250fb0) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1 do_futex_wait (sem=sem@entry=0x560f8a250fb0, abstime=0x0) at sem_waitcommon.c:111
#2 0x00007fe3b3a6f988 in __new_sem_wait_slow (sem=0x560f8a250fb0, abstime=0x0) at sem_waitcommon.c:181
#3 0x0000560f8846a806 in PyThread_acquire_lock (waitflag=1, lock=0x560f8a250fb0) at ../Python/thread_pthread.h:356
#4 lock_PyThread_acquire_lock (self=0x7fe3b1927390, args=<optimized out>) at ../Modules/threadmodule.c:52
#5 0x0000560f883ccdaa in call_function (oparg=<optimized out>, pp_stack=0x7ffc9fabec88) at ../Python/ceval.c:4376
#6 PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3013
#7 0x0000560f883ca866 in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kws=<optimized out>, kwcount=<optimized out>, defs=<optimized out>,
defcount=<optimized out>, closure=<optimized out>) at ../Python/ceval.c:3608
#8 0x0000560f883d264e in fast_function (nk=0, na=1, n=<optimized out>, pp_stack=0x7ffc9fabee58, func=<optimized out>) at ../Python/ceval.c:4471
#9 call_function (oparg=<optimized out>, pp_stack=0x7ffc9fabee58) at ../Python/ceval.c:4396
#10 PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3013
#11 0x0000560f883d23ba in fast_function (nk=<optimized out>, na=<optimized out>, n=1, pp_stack=0x7ffc9fabef68, func=<optimized out>) at ../Python/ceval.c:4461
#12 call_function (oparg=<optimized out>, pp_stack=0x7ffc9fabef68) at ../Python/ceval.c:4396
#13 PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3013
#14 0x0000560f883d23ba in fast_function (nk=<optimized out>, na=<optimized out>, n=2, pp_stack=0x7ffc9fabf078, func=<optimized out>) at ../Python/ceval.c:4461
#15 call_function (oparg=<optimized out>, pp_stack=0x7ffc9fabf078) at ../Python/ceval.c:4396
#16 PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3013
#17 0x0000560f883d23ba in fast_function (nk=<optimized out>, na=<optimized out>, n=2, pp_stack=0x7ffc9fabf188, func=<optimized out>) at ../Python/ceval.c:4461
This raises two questions: why does the logging module need to acquire a lock here, and who is holding that lock without ever releasing it, leaving the process in a deadlock-like state?
Some research shows this is a known Python pitfall (see CPython issue bpo-6721): mixing multithreading with multiprocessing can occasionally deadlock the child process.
When the parent process forks a child, the lock inside the parent's logging module is copied along with everything else. If that lock happens to be held at the moment of the fork, the child's copy is also in the held state. Even after the parent releases its lock, the child never notices, because the two copies now live in different processes.
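Before the full reproduction script below, here is a minimal, deterministic sketch of the mechanism (our own illustration, not code from the incident; Linux only). A helper thread holds a lock while the main thread forks; the child then blocks forever trying to take the copied lock, and the parent blocks waiting for the child:

import os
import threading
import time

lock = threading.Lock()

def hold_lock():
    # Simulate a logging handler that is mid-write at the moment of fork:
    # grab the lock and never release it.
    lock.acquire()
    time.sleep(60)

t = threading.Thread(target=hold_lock)
t.daemon = True
t.start()
time.sleep(0.1)  # give the helper thread time to take the lock

pid = os.fork()
if pid == 0:
    # Child: the copied lock is in the "held" state, but the thread that
    # held it does not exist here, so this acquire blocks forever.
    lock.acquire()
    os._exit(0)
else:
    os.waitpid(pid, 0)  # the parent also hangs, waiting for the stuck child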
Reproduction script
Under normal circumstances the following script runs to completion and exits, but because of the bug described above, a child process occasionally hangs, and the parent then blocks forever waiting for it to exit.
import multiprocessing, threading, logging, sys, traceback
import time
from logging import StreamHandler

SLEEP_TIME = 0.000001

logger = logging.getLogger()
ch = logging.StreamHandler()
ch.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(funcName)s(): %(message)s'))
ch.setLevel(logging.DEBUG)
logger.setLevel(logging.DEBUG)
logger.addHandler(ch)

class LoggingThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        # Log in a tight loop so the handler lock is held most of the time.
        while True:
            logger.debug('LoggingThread: {}'.format(self))
            time.sleep(SLEEP_TIME)

def action(i):
    logger.debug('action: {}'.format(i))

def do_parallel_job():
    # Each pool worker is forked while the logging threads above may be
    # holding the handler lock.
    processes = multiprocessing.cpu_count()
    pool = multiprocessing.Pool(processes=processes)
    for i in range(20):
        pool.apply_async(action, args=(i,))
    pool.close()
    pool.join()

if __name__ == '__main__':
    logger.debug('START')
    #
    # multithread part
    #
    for _ in range(10):
        lt = LoggingThread()
        lt.setDaemon(True)
        lt.start()
    #
    # multiprocess part
    #
    do_parallel_job()
    logger.debug('FINISH')
Solutions
1. Monkey-patch fork with atfork (Google no longer maintains this project, and the package can no longer be found on PyPI)
The idea is to patch os.fork so that, before and after every fork, registered hooks acquire and release resources in the parent and child processes.
def os_forkpty_wrapper():
    """Wraps os.forkpty() to run atfork handlers."""
    pid = None
    prepare_to_fork_acquire()
    try:
        pid, fd = _orig_os_forkpty()
    finally:
        if pid == 0:
            child_after_fork_release()
        else:
            parent_after_fork_release()
    return pid, fd
def atfork(prepare=None, parent=None, child=None):
    """A Python work-a-like of pthread_atfork.

    Any time a fork() is called from Python, all 'prepare' callables will
    be called in the order they were registered using this function.

    After the fork (successful or not), all 'parent' callables will be called in
    the parent process. If the fork succeeded, all 'child' callables will be
    called in the child process.

    No exceptions may be raised from any of the registered callables. If so
    they will be printed to sys.stderr after the fork call once it is safe
    to do so.
    """
    assert not prepare or callable(prepare)
    assert not parent or callable(parent)
    assert not child or callable(child)
    _fork_lock.acquire()
    try:
        if prepare:
            _prepare_call_list.append(prepare)
        if parent:
            _parent_call_list.append(parent)
        if child:
            _child_call_list.append(child)
    finally:
        _fork_lock.release()
# The excerpt below shows the part of the patch that targets the logging module
def fork_safe_createLock(self):
    self._orig_createLock()
    atfork(self.lock.acquire, self.lock.release, self.lock.release)

# Fix the logging.Handler lock (a major source of deadlocks).
logging.Handler._orig_createLock = logging.Handler.createLock
logging.Handler.createLock = fork_safe_createLock

# Fix the module level lock.
atfork(logging._acquireLock,
       logging._releaseLock, logging._releaseLock)
With this monkey patch in place, logging's createLock method is wrapped so that every newly created lock is registered with atfork. Before every fork, the registered prepare callables acquire all of these locks; once the fork completes, the parent and the child each release them again via the registered callables. In short, no logging lock can be in the held state at the moment of the fork, which eliminates the potential deadlock.
To use it, call the following as early as possible during program startup (the import paths assume the module layout of the original google-code project, where fix_logging_module lives in atfork.stdlib_fixer):

from atfork import monkeypatch_os_fork_functions
from atfork.stdlib_fixer import fix_logging_module

monkeypatch_os_fork_functions()
fix_logging_module()
Starting with Python 3.7, the standard library's os module provides the same capability:
os.register_at_fork(*, before=None, after_in_parent=None, after_in_child=None)
Register callables to be executed when a new child process is forked using os.fork() or similar process cloning APIs. The parameters are optional and keyword-only. Each specifies a different call point.
- before is a function called before forking a child process.
- after_in_parent is a function called from the parent process after forking a child process.
- after_in_child is a function called from the child process.
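As a sketch of how this maps onto the atfork pattern above (requires Python 3.7+; the lock below is our stand-in for whatever per-module state needs protecting, and a plain threading.Lock is used deliberately because, unlike an RLock, it may be released from any thread after the fork):

import os
import threading

_state_lock = threading.Lock()  # stand-in for e.g. a logging handler lock

os.register_at_fork(
    before=_state_lock.acquire,           # taken before the fork, so it is
    after_in_parent=_state_lock.release,  # guaranteed free at fork time...
    after_in_child=_state_lock.release,   # ...and released on both sides
)

This mirrors the atfork(self.lock.acquire, self.lock.release, self.lock.release) call in the logging patch, using only the standard library.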
2. Initialize a fresh lock for the logging module in the child process (awkward to apply when third-party modules also use logging)
The idea is to create a brand-new lock when logging is initialized in the child process, replacing the copied lock inherited from the parent, which may be in the held state:
ch = logging.StreamHandler()
ch.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(funcName)s(): %(message)s'))
ch.setLevel(logging.DEBUG)
ch.createLock()
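Applied to the reproduction script above, one way to do this (a sketch assuming all handlers hang off the root logger; _worker_init is our own name) is to recreate each handler's lock in a Pool initializer, which runs once in every freshly forked worker:

import logging
import multiprocessing

def _worker_init():
    # Runs once in each forked worker: replace every inherited handler
    # lock with a new one, so none can be stuck in the held state.
    for handler in logging.getLogger().handlers:
        handler.createLock()

pool = multiprocessing.Pool(processes=multiprocessing.cpu_count(),
                            initializer=_worker_init)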