
Linux Debugging (9): A "Memory Leak" in Production

By chance, I noticed a process using more than 14 GB of memory. The process is an RPC server that acts purely as a relay, so it should never use anywhere near that much. Even if a burst of concurrency left too much data sitting in memory, usage should fall once the load drops. Instead, the memory kept growing until the process was killed by the OOM killer.

Since the RPC server's logic is fairly simple, I started by reading through the source code. Apart from a few minor coding issues, nothing significant turned up. So, valgrind first:

valgrind --tool=memcheck --leak-check=full -v ./rpc_server

Normally this would catch a leak. This time it found nothing (otherwise this article wouldn't exist):

In fact, at least 10 GB had "leaked". Since valgrind found nothing, that memory must still be reachable. Imagine this scenario: each request allocates a block with new and appends it to a list; normally the block is removed from the list (and freed) when the request completes. If it is never removed, the memory is effectively leaked, yet valgrind cannot flag it, because the list still holds a live pointer to every block.
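A minimal sketch of the suspected pattern (hypothetical names, not the actual server code):

    #include <list>

    struct Request { char payload[4096]; };   // stand-in for a real request

    std::list<Request*> g_inflight;           // global list keeps every block reachable

    void handle_request() {
        Request* req = new Request;           // one allocation per request
        g_inflight.push_back(req);
        // ... process the request ...
        // BUG: req is never erased from g_inflight and never deleted. Memory
        // grows without bound, but every block is still reachable at exit,
        // so valgrind reports "still reachable" rather than "definitely lost".
    }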


The process uses tcmalloc. Could tcmalloc be the culprit? We know tcmalloc is generally more efficient than malloc, but if it kept requesting memory from the OS and never returned it, the result would look exactly like this kind of "memory leak".

Note the following passage from the tcmalloc documentation:

Releasing Memory Back to the System

By default, tcmalloc will release no-longer-used memory back to the kernel gradually, over time. The tcmalloc_release_rate flag controls how quickly this happens. You can also force a release at a given point in the program execution like so:

   MallocExtension::instance()->ReleaseFreeMemory();

You can also call SetMemoryReleaseRate() to change the tcmalloc_release_rate value at runtime, or GetMemoryReleaseRate to see what the current release rate is.

In short, this describes the mechanism by which tcmalloc returns memory to the OS: by default, tcmalloc gradually returns memory that has gone unused for a while back to the system. The tcmalloc_release_rate flag controls how quickly that happens, and you can force a release at any point at runtime with:

 MallocExtension::instance()->ReleaseFreeMemory();

You can also change tcmalloc_release_rate at runtime via SetMemoryReleaseRate(). Setting it to 0 means memory is never returned; the larger the value, the more frequently memory is returned.

A reasonable value usually lies between 0 and 10. The rate can also be set through the TCMALLOC_RELEASE_RATE environment variable.
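A quick sketch of exercising these knobs at runtime (the header path is an assumption; on older gperftools versions it is <google/malloc_extension.h>):

    #include <cstdio>
    #include <gperftools/malloc_extension.h>

    int main() {
        // Inspect the current release rate (1.0 by default).
        double rate = MallocExtension::instance()->GetMemoryReleaseRate();
        printf("tcmalloc_release_rate: %f\n", rate);

        // Return freed pages to the kernel more aggressively.
        MallocExtension::instance()->SetMemoryReleaseRate(5.0);

        // Or force an immediate release of all free memory.
        MallocExtension::instance()->ReleaseFreeMemory();
        return 0;
    }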

With this suspicion in mind, the first step was still to check heap usage with Google's gperftools:

1.  export HEAPCHECK=draconian

2.  export PPROF_PATH=/usr/local/bin/pprof

Then just launch the process as usual.

I chose draconian because I wanted the most detailed statistics possible. The modes are explained in more detail in the gperftools documentation, under "Flavors of Heap Checking":


These are the legal values when running a whole-program heap check:

  1. minimal
  2. normal
  3. strict
  4. draconian

"Minimal" heap-checking starts as late as possible in a initialization, meaning you can leak some memory in your initialization routines (that run before main(), say), and not trigger a leak message. If you frequently (and purposefully) leak data in one-time global initializers, "minimal" mode is useful for you. Otherwise, you should avoid it for stricter modes.

"Normal" heap-checking tracks live objects and reports a leak for any data that is not reachable via a live object when the program exits.

"Strict" heap-checking is much like "normal" but has a few extra checks that memory isn‘t lost in global destructors. In particular, if you have a global variable that allocates memory during program execution, and then "forgets" about the memory in the global destructor (say, by setting the pointer to it to NULL) without freeing it, that will prompt a leak message in "strict" mode, though not in "normal" mode.

"Draconian" heap-checking is appropriate for those who like to be very precise about their memory management, and want the heap-checker to help them enforce it. In "draconian" mode, the heap-checker does not do "live object" checking at all, so it reports a leak unless all allocated memory is freed before program exit. (However, you can use IgnoreObject() to re-enable liveness-checking on an object-by-object basis.)

"Normal" mode, as the name implies, is the one used most often at Google. It‘s appropriate for everyday heap-checking use.

In addition, there are two other possible modes:

  • as-is
  • local

as-is is the most flexible mode; it allows you to specify the various knobs of the heap checker explicitly. local activates the explicit heap-check instrumentation, but does not turn on any whole-program leak checking.
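For comparison, "local" checking is driven explicitly from code. A sketch based on the gperftools documentation (the function under test is hypothetical):

    #include <cassert>
    #include <gperftools/heap-checker.h>

    void exercise_rpc_path();  // hypothetical code under test

    int main() {
        // Only allocations made between the constructor and NoLeaks()
        // are examined.
        HeapLeakChecker checker("rpc_path");
        exercise_rpc_path();
        if (!checker.NoLeaks()) assert(NULL == "heap memory leak");
        return 0;
    }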

Unfortunately, it still found nothing:

The leak statistics above were nowhere near expectations, given that at least 10 GB had "leaked".

So, force the release of the unused buffers:

MallocExtension::instance()->ReleaseFreeMemory();

That solved the problem.
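In the server, this can be hooked up as a periodic background task, for example. A sketch under the assumption of a simple timer thread; where it actually belongs depends on the server's event loop:

    #include <chrono>
    #include <thread>
    #include <gperftools/malloc_extension.h>

    // Periodically hand free pages back to the kernel so that bursts of
    // concurrency do not leave the process holding gigabytes of idle memory.
    void start_memory_release_thread() {
        std::thread([] {
            for (;;) {
                std::this_thread::sleep_for(std::chrono::seconds(60));
                MallocExtension::instance()->ReleaseFreeMemory();
            }
        }).detach();
    }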
