【Android】Analyzing a Memory Leak Caused by a ContentResolver
After long-running stress testing, the system rebooted with the following fatal error:
JNI ERROR (app bug): global reference table overflow (max=51200)
The global reference table overflow log:
08-08 04:11:53.052912 973 3243 F zygote64: indirect_reference_table.cc:256] JNI ERROR (app bug): global reference table overflow (max=51200)
08-08 04:11:53.053014 973 3243 F zygote64: indirect_reference_table.cc:256] global reference table dump:
08-08 04:11:53.053172 973 3243 F zygote64: indirect_reference_table.cc:256] Summary:
08-08 04:11:53.053184 973 3243 F zygote64: indirect_reference_table.cc:256] 27087 of com.android.server.content.ContentService$ObserverNode$ObserverEntry (27087 unique instances)
08-08 04:11:53.053197 973 3243 F zygote64: indirect_reference_table.cc:256] 22849 of java.lang.ref.WeakReference (22849 unique instances)
08-08 04:11:53.053210 973 3243 F zygote64: indirect_reference_table.cc:256] 313 of java.lang.Class (235 unique instances)
Backtrace:
#00 pc 000000000001d754 /system/lib64/libc.so (abort+120)
#01 pc 00000000004766f0 /system/lib64/libart.so (art::Runtime::Abort(char const*)+552)
#02 pc 000000000056c5ec /system/lib64/libart.so (android::base::LogMessage::~LogMessage()+1004)
#03 pc 0000000000264304 /system/lib64/libart.so (art::IndirectReferenceTable::Add(art::IRTSegmentState, art::ObjPtr<art::mirror::Object>)+764)
#04 pc 00000000002ff7fc /system/lib64/libart.so (art::JavaVMExt::AddGlobalRef(art::Thread*, art::ObjPtr<art::mirror::Object>)+68)
#05 pc 0000000000343834 /system/lib64/libart.so (art::JNI::NewGlobalRef(_JNIEnv*, _jobject*)+572)
#06 pc 000000000011fe5c /system/lib64/libandroid_runtime.so (JavaDeathRecipient::JavaDeathRecipient(_JNIEnv*, _jobject*, android::sp<DeathRecipientList> const&)+136)
#07 pc 000000000011f9a4 /system/lib64/libandroid_runtime.so (android_os_BinderProxy_linkToDeath(_JNIEnv*, _jobject*, _jobject*, int)+224)
From experience, this looked like another binder-related overflow: when a BinderProxy object is created, javaObjectForIBinder is invoked, which performs a NewGlobalRef:
vi frameworks/base/core/jni/android_util_Binder.cpp
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;
    ...
    jobject refObject = env->NewGlobalRef(
            env->GetObjectField(object, gBinderProxyOffsets.mSelf));
    val->attachObject(&gBinderProxyOffsets, refObject,
            jnienv_to_javavm(env), proxy_cleanup);
If too many BinderProxy objects go unreleased, the global reference table eventually overflows.
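The mechanics can be illustrated with a small, self-contained Java sketch. GlobalRefTableDemo is purely illustrative, not an ART internal: it models a fixed-capacity table (max=51200 on this build) where every NewGlobalRef-style add that is never matched by a delete consumes a slot until the table overflows.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simulation of ART's global reference table: fixed capacity,
// and each add without a matching delete permanently consumes a slot.
public class GlobalRefTableDemo {
    public static final int MAX = 51200;
    public static final List<Object> globalRefs = new ArrayList<>();

    public static void addGlobalRef(Object o) {
        if (globalRefs.size() >= MAX) {
            throw new IllegalStateException(
                    "global reference table overflow (max=" + MAX + ")");
        }
        globalRefs.add(o); // strong reference: pins the object until deleted
    }

    public static void main(String[] args) {
        try {
            // Each leaked BinderProxy pins one slot in the table.
            for (int i = 0; i <= MAX; i++) {
                addGlobalRef(new Object());
            }
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The real table is managed natively by ART, but the failure mode is the same: the overflow aborts the process that owns the table, here system_server.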
We can verify this with dumpsys meminfo <system_server_PID>:
ps -ef|grep system_s
system 820 480 1 18:05:55 ? 01:47:51 system_server
The system_server PID is 820:
dumpsys meminfo 820
Views: 9 ViewRootImpl: 2
AppContexts: 25 Activities: 0
Assets: 11 AssetManagers: 13
Local Binders: 401 Proxy Binders: 16509
Parcel memory: 288 Parcel count: 249
Death Recipients: 15939 OpenSSL Sockets: 0
The Proxy Binders count (16509) is abnormally high, and Death Recipients (15939) tracks it closely, which matches the linkToDeath frames in the backtrace.
To find the source, we can add logging to Binder's constructor and finalizer to see where so many objects are being created. Alternatively, we can run dumpsys meminfo <PID> on each process to find which one has created an excessive number of Local Binders.
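The constructor/finalizer logging idea can be sketched as follows. CountedBinder is a plain illustrative class, not the real android.os.Binder: a live-object counter plus an occasional stack trace quickly reveals the allocation site.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative stand-in for android.os.Binder with debug counting added.
public class CountedBinder {
    public static final AtomicInteger LIVE = new AtomicInteger();

    public CountedBinder() {
        int n = LIVE.incrementAndGet();
        // When the live count spikes, dump a stack trace to see the caller.
        if (n % 1000 == 0) {
            new Throwable("live binders: " + n).printStackTrace();
        }
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            LIVE.decrementAndGet();
        } finally {
            super.finalize();
        }
    }

    public static void main(String[] args) {
        new CountedBinder();
        new CountedBinder();
        // Finalizers have not run yet, so both objects count as live.
        System.out.println("live: " + LIVE.get());
    }
}
```

In the real framework the log would go through android.util.Log rather than printStackTrace, and finalize-based accounting is only a debugging aid, not production code.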
In this case, the log summary shows:
08-08 04:11:53.053172 973 3243 F zygote64: indirect_reference_table.cc:256] Summary:
08-08 04:11:53.053184 973 3243 F zygote64: indirect_reference_table.cc:256] 27087 of com.android.server.content.ContentService$ObserverNode$ObserverEntry (27087 unique instances)
08-08 04:11:53.053197 973 3243 F zygote64: indirect_reference_table.cc:256] 22849 of java.lang.ref.WeakReference (22849 unique instances)
08-08 04:11:53.053210 973 3243 F zygote64: indirect_reference_table.cc:256] 313 of java.lang.Class (235 unique instances)
ContentService$ObserverNode$ObserverEntry accounts for the bulk of the leak. Conveniently, ContentService can be inspected with dumpsys content, where we find tens of thousands of entries like:
settings/global/always_on_display_constants: pid=10928 uid=10027 user=-1 target=6497568
settings/global/always_on_display_constants: pid=10928 uid=10027 user=-1 target=57cdd81
pid=10928 is com.android.systemui:
Unknown:/ # ps -ef|grep systemui
u0_a27 10928 480 0 18:20:02 ? 00:38:59 com.android.systemui
Verifying with dumpsys meminfo:
dumpsys meminfo 10928
Local Binders: 15629 Proxy Binders: 67
Parcel memory: 22 Parcel count: 83
Death Recipients: 2 OpenSSL Sockets: 0
Searching the SystemUI code for always_on_display_constants then leads straight to the place where the observer is registered (but never unregistered), pinpointing the leak.
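The fix follows the usual pattern: every registerContentObserver() call must be paired with an unregisterContentObserver() in the component's teardown path, otherwise ContentService keeps an ObserverEntry (and a binder reference) alive forever. A minimal plain-Java sketch of the invariant, where ObserverRegistry is hypothetical and only mimics ContentService's ObserverNode bookkeeping:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for ContentService's ObserverNode: each register()
// stores a strong entry that survives until a matching unregister().
public class ObserverRegistry {
    private final List<Object> entries = new ArrayList<>();

    public void register(Object observer)   { entries.add(observer); }
    public void unregister(Object observer) { entries.remove(observer); }
    public int liveCount()                  { return entries.size(); }

    public static void main(String[] args) {
        ObserverRegistry registry = new ObserverRegistry();
        Object observer = new Object();

        registry.register(observer);   // e.g. in onCreate()/onStart()
        // ... observe settings/global/... while the component is alive ...
        registry.unregister(observer); // must happen in onDestroy()/onStop()

        System.out.println("live observers: " + registry.liveCount()); // 0
    }
}
```

Re-registering on every refresh without unregistering, as SystemUI apparently did here, grows the entry list without bound, which is exactly the pattern the dumpsys content output exposed.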