Memory optimization for Android performance


To become an excellent Android developer, you need a complete knowledge system. Let's build it here, step by step.

Tip: this is the foundational chapter on Android memory optimization. If you have not yet mastered Android memory optimization, you are advised to study it systematically.

As we all know, memory optimization is one of the most important parts of performance optimization. You can hardly claim deep research into Android performance without mastering the system's memory optimization schemes. In this article, I will lead you through a systematic study of memory optimization on Android.

As many readers know, the JVM has a garbage collection mechanism that allocates and releases memory automatically at the virtual-machine level, so there is no need to allocate and free memory explicitly in code as in C/C++. Memory management on Android is similar: objects are allocated with the new keyword and reclaimed by GC. In addition, the Android system uses a generational heap model for memory management: when memory reaches a certain threshold, the system automatically releases reclaimable memory according to different rules. Even with such a memory management mechanism, unreasonable use of memory still causes a series of performance problems, such as memory leaks and memory churn (allocating a large number of objects in a short time). Next, let me describe Android's memory management mechanism.

1, Android memory management mechanism

As we all know, memory allocation and garbage collection for applications are handled by the Android virtual machine: the Dalvik VM below Android 5.0, and the ART VM on 5.0 and above.

1. Java object lifecycle

After the .class bytecode file produced by compiling Java code is loaded from the file system into the virtual machine, Java objects exist on the JVM. A Java object goes through seven stages on the JVM, as follows:

  • Created
  • InUse
  • Invisible
  • Unreachable
  • Collected
  • Finalized
  • Deallocated

1. Created

The creation of Java objects is divided into the following steps:

  • 1. Allocate storage space for the object.
  • 2. Construct the object.
  • 3. Static members are initialized from superclass to subclass (a class's static members are initialized when the ClassLoader loads the class).
  • 4. Superclass member variables are initialized in order, and the superclass constructors are called recursively.
  • 5. Subclass member variables are initialized in order, and finally the subclass constructor runs and assigns values to the object's fields.
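The initialization order above can be checked with a small plain-Java experiment (the class names Base and Derived are illustrative): static blocks run once per class, superclass before subclass, and instance field initializers run just before each constructor body.

```java
import java.util.ArrayList;
import java.util.List;

// Records the order in which static blocks, field initializers and
// constructors run when a subclass instance is created.
class InitOrder {
    static final List<String> LOG = new ArrayList<>();

    static int logged(String msg) { LOG.add(msg); return 0; }

    static class Base {
        static { LOG.add("Base static init"); }       // once, at class load
        int baseField = logged("Base field init");    // before Base()
        Base() { LOG.add("Base constructor"); }
    }

    static class Derived extends Base {
        static { LOG.add("Derived static init"); }    // after Base's static init
        int derivedField = logged("Derived field init");
        Derived() { LOG.add("Derived constructor"); }
    }

    public static void main(String[] args) {
        new Derived();
        System.out.println(LOG);
    }
}
```

Running it logs: Base static init, Derived static init, Base field init, Base constructor, Derived field init, Derived constructor — matching steps 3 to 5 above.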

2. InUse (in use)

At this point, the object is held by at least one strong reference.

3. Invisible

When an object is invisible, the program itself no longer holds any strong references to it, although the object still exists. A simple example is when execution has left the scope of the object. However, the object may still be strongly referenced by loaded static variables, running threads, or JNI references inside the virtual machine. These special strong references are called "GC Roots". An object strongly referenced by a GC Root cannot be reclaimed by GC, which causes the object to leak.

4. Unreachable

The object is no longer held by any strong reference.

5. Collected

When the GC is ready to reclaim the object's memory, the object enters the Collected phase. If the object overrides the finalize() method, finalize() is executed.

6. Finalized

The object's finalize() method has run, and the object waits for the garbage collector to reclaim its space.

7. Deallocated

Once the GC reclaims or reallocates the memory occupied by the object, the object disappears completely.

Note

  • 1. Set an object to null promptly when it is no longer needed.
  • 2. Prefer local variables over class member variables.

2. Java memory allocation model

The JVM divides the whole memory into several blocks as follows:

  • 1) Method area: stores class information, constants, static variables, etc. => shared by all threads
  • 2) Virtual machine stack: stores local variable tables, operand stacks, etc.
  • 3) Native method stack: unlike the virtual machine stack, which serves Java methods, it serves native methods.
  • 4) Heap: the largest memory area. Each object's actual memory is allocated on the heap; only references live in the virtual machine stack, and these references point to the objects actually stored on the heap. The heap is also the main area managed by the garbage collector (GC), and memory leaks occur in this area. => shared by all threads
  • 5) Program counter: records the bytecode position currently being executed by the thread.

3. Android memory allocation model

In Android, the heap is actually anonymous shared memory. The Android virtual machine simply encapsulates it as an mSpace, managed by the underlying C library, which still uses the malloc and free functions provided by libc to allocate and free memory.

Most static data is mapped into the process as shared memory. Common static data includes Dalvik code, app resources, .so files, and so on.

In most cases, Android shares dynamic RAM areas between different processes by explicitly allocating shared memory regions (such as Ashmem or Gralloc). For example, Window Surfaces use memory shared between the app and the screen compositor, and Cursor Buffers use memory shared between content providers and their clients.

As mentioned above, the Android Runtime has two kinds of virtual machines, Dalvik and ART, and their memory region layouts differ. Let's take a brief look.

Dalvik:

  • Linear Alloc
  • Zygote Space
  • Alloc Space

ART:

  • Non Moving Space
  • Zygote Space
  • Alloc Space
  • Image Space
  • Large Obj Space

Whether Dalvik or ART, the runtime heap is divided into Linear Alloc (Non Moving Space is the ART analogue), Zygote Space and Alloc Space. Linear Alloc in Dalvik is a linear, read-only memory space mainly used to store classes in the virtual machine, because classes need only read-only attributes once loaded and do not change afterwards. Putting these read-only attributes and permanent data that live for the whole process life cycle into a linear allocator reduces heap fragmentation and GC scanning, improving the performance of memory management. Zygote Space is shared between the zygote process and application processes, while Alloc Space is private to each process. The first virtual machine of the Android system is created by the zygote process, and at first there is only Zygote Space. Before forking the first application process, the zygote process splits the heap into a used part and an unused part; the latter becomes Alloc Space. From then on, both application processes and the zygote process allocate objects on their own Alloc Space heaps.

When running on ART, there are two additional regions, Image Space and Large Object Space.

  • Image Space: stores some preloaded classes, similar to Linear Alloc in Dalvik. Like Zygote Space, it is shared between the Zygote process and the application process.
  • Large Object Space: a collection of discrete addresses. It is used to allocate some large objects to improve the management efficiency and overall performance of GC.

Note: objects in Image Space are created only once, while objects in Zygote Space need to be re-created according to runtime conditions each time the system boots.

4. Java memory reclamation algorithm

1) Mark-Sweep algorithm

Implementation principle

  • Mark all objects that need to be reclaimed.
  • Reclaim all marked objects in one pass.

Disadvantages

  • Marking and sweeping are not very efficient.
  • It produces a large amount of discontiguous memory fragmentation.

2) Copying algorithm

Implementation principle

  • Divide memory into two equally sized blocks.
  • When one block is exhausted, copy the surviving objects to the other block.
  • Clean up the first block in one pass.

Pros and cons

  • Simple to implement and efficient to run.
  • Wasting half of the space is expensive.

3) Mark-Compact algorithm

Implementation principle

  • The marking phase is the same as in Mark-Sweep.
  • Surviving objects are moved toward one end.
  • The remaining memory is cleaned up.

Advantages

  • Avoids the memory fragmentation of Mark-Sweep.
  • Avoids the space waste of the Copying algorithm.

4) Generational collection algorithm (chosen by most virtual machine vendors)

Characteristics

  • Combines the advantages of multiple collection algorithms.
  • Young-generation objects have a low survival rate => Copying algorithm (note that the copy ratio is adjustable; for example, if only 30% of objects survive, only those need to be copied each time).
  • Old-generation objects have a high survival rate => Mark-Compact algorithm.

5. Android memory recycling mechanism

On an Android device, each time we open an app its memory is allocated elastically; the initial allocation and the maximum are determined by the specific device.

In addition, we need to distinguish the following two OOM scenarios:

  • 1) Memory is genuinely insufficient: for example, the current app process has a maximum memory limit of 512 MB; exceeding this value means memory is genuinely insufficient.
  • 2) Available system memory is insufficient: the phone's system memory is extremely tight. Even if the current app process has a 512 MB limit and we have only allocated 200 MB, an out-of-memory condition can still occur because the system has too little available memory.

For the heap space, the Android system uses a generational heap memory model in which the whole memory is divided into three areas:

  • Young Generation
  • Old Generation
  • Permanent Generation


1,Young Generation

It consists of one Eden area and two Survivor areas. Most newly created objects are placed in Eden. When Eden fills up, surviving objects are copied into one of the Survivor areas; when that Survivor area fills up, its surviving objects are copied into the other Survivor area; and when that one also fills up, objects that are still alive are promoted to the old generation.

2,Old Generation

In general, objects in the old generation have relatively long life cycles.

3,Permanent Generation

It is used for static classes and methods; the permanent generation has no significant impact on garbage collection. (In JDK 8 and later, Metaspace, implemented in native memory, has replaced the permanent generation.)

4. Summary of memory object processing

  • 1. Objects are created in the Eden area.
  • 2. After a GC runs, objects that are still alive are copied to the S0 area.
  • 3. When S0 fills up, its surviving objects are copied to S1, S0 is cleared, and the roles of S0 and S1 are swapped.
  • 4. After step 3 has happened a certain number of times (this differs between system versions), surviving objects are promoted to the Old Generation.
  • 5. After an object has stayed in the Old Generation for a certain time, it is finally moved to the Permanent Generation area.

The system uses different collection mechanisms for the Young Generation and the Old Generation. Each generation's memory area has a fixed size. As new objects are allocated into an area and their total size approaches that area's threshold, a GC is triggered to make room for further objects.

In addition, the time taken to execute a GC depends on which generation it runs on and on the number of objects in that generation:

  • GC duration: Young Generation < Old Generation < Permanent Generation
  • GC execution time is related to the number of objects in the generation.

5,Young Generation GC

Because its objects have short lifetimes, it is collected with the Copying algorithm (scan out the live objects and copy them into a new, completely unused space). The young generation uses a free pointer to control GC triggering: the pointer tracks the position after the last allocated object in the Young Generation. When a new object needs memory, the pointer is used to check whether space is sufficient; if not, a GC is triggered.

6,Old Generation GC

Because its objects live long and are relatively stable, the Mark algorithm is used: scan and mark the surviving objects, then reclaim the unmarked ones; after collection, free regions are either merged or marked for the next allocation, reducing the efficiency loss caused by memory fragmentation.

7. What is the difference between Dalvik and ART

  • 1) Dalvik is fixed to a single collection algorithm.
  • 2) ART can choose its collection algorithm at runtime.
  • 3) ART can compact memory, reducing memory holes.

6. GC type

There are three types of GC in Android system:

  • kGcCauseForAlloc: GC triggered when allocation fails because memory is insufficient; it stops the world. Because it is a non-concurrent GC, other threads are suspended until the GC completes.
  • kGcCauseBackground: GC triggered when memory usage reaches a certain threshold. Because it is a background (concurrent) GC, it does not cause a stop-the-world pause.
  • kGcCauseExplicit: an explicitly requested GC. When ART has this option enabled, a GC is performed when System.gc() is called.

Next, let's learn how to analyze GC logs in Android virtual machine. The logs are as follows:

D/dalvikvm(7030): GC_CONCURRENT freed 1049K, 60% free 2341K/9351K, external 3502K/6261K, paused 3ms 3ms

GC_CONCURRENT is the GC type of this log line. The GC log can contain several types:

  • GC_CONCURRENT: triggered when heap usage in the application rises (the size of allocated objects exceeds 384 KB), to avoid a GC when the heap is already full. Frequent GC_CONCURRENT entries suggest that allocations above 384 KB keep happening, usually caused by the repeated creation of temporary objects, which often points to insufficient object reuse.
  • GC_FOR_MALLOC: the concurrent GC did not finish in time and the application needs to allocate more memory, so execution must stop for a "GC for malloc".
  • GC_EXTERNAL_ALLOC: a GC executed for externally allocated memory.
  • GC_HPROF_DUMP_HEAP: executed when creating an HPROF profile.
  • GC_EXPLICIT: System.gc() was called explicitly. (Try to avoid this.)

Let's go back to the log printed above:

  • freed 1049K: how much memory this GC reclaimed.
  • 60% free 2341K/9351K: after collection, 60% of the heap is free; live objects total 2341 KB and the heap size is 9351 KB.
  • external 3502K/6261K: native memory statistics, covering data such as bitmap pixel data and off-heap memory (NIO direct buffers). The first value is the amount of native memory already allocated (3502 KB); the second is a floating GC threshold: when allocation reaches it, a GC is triggered.
  • paused 3ms 3ms: the GC pause times. A concurrent GC shows two short times, one at the start and one at the end. Other GC types show a single, relatively long time, and the larger the heap, the longer the pause.
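As a hedged sketch, the fields of such a log line can be pulled apart with a regular expression; the pattern below assumes the exact Dalvik format shown above (the format varies across Android versions) and is illustrative only.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the GC reason, freed size and heap figures from a dalvikvm GC log line.
class GcLogParser {
    // Matches e.g.:
    // "GC_CONCURRENT freed 1049K, 60% free 2341K/9351K, external 3502K/6261K, paused 3ms 3ms"
    private static final Pattern P = Pattern.compile(
        "(GC_[A-Z_]+) freed (\\d+)K, (\\d+)% free (\\d+)K/(\\d+)K.*paused (\\d+)ms(?: (\\d+)ms)?");

    static String describe(String line) {
        Matcher m = P.matcher(line);
        if (!m.find()) return "unrecognized";
        return m.group(1) + ": freed " + m.group(2) + "K, live " + m.group(4)
                + "K of " + m.group(5) + "K heap";
    }

    public static void main(String[] args) {
        System.out.println(describe(
            "GC_CONCURRENT freed 1049K, 60% free 2341K/9351K, external 3502K/6261K, paused 3ms 3ms"));
        // -> GC_CONCURRENT: freed 1049K, live 2341K of 9351K heap
    }
}
```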

Note: in ART mode, a Large Object Space is added. This part of memory is not allocated on the heap, but it still belongs to the memory space of the application.

In the Dalvik virtual machine, GC operations are not fully concurrent, which means a GC often suspends other threads while it runs (including the UI thread). In ART, unlike Dalvik with its single collection algorithm, different algorithms are chosen in different situations: for example, a non-concurrent GC is used when an allocation fails for lack of memory, but after allocation, reaching a memory threshold triggers a concurrent GC. So in ART mode, not all GCs are blocking.

Overall, ART is more efficient than Dalvik at GC: not only is collection itself faster, pause times are also greatly shortened. For memory allocation, it dedicates a separate region to large objects and can compact memory in the background to reduce fragmentation. Under the ART virtual machine, many GC-related jank problems can therefore be avoided.

7. Low Memory Killer mechanism

The LMK (Low Memory Killer) mechanism applies to all processes in the phone system. When the phone runs low on memory, LMK reclaims memory by killing processes, with different aggressiveness for different process types. The system currently distinguishes the following process types:

  • 1) Foreground process
  • 2) Visible process
  • 3) Service process
  • 4) Background process
  • 5) Empty process

From foreground process to empty process, process priority decreases, so the probability of being killed by the LMK mechanism increases accordingly. LMK also weighs the expected memory gain of killing each process, ensuring that most of our processes are not starved of memory.

2, Significance of optimizing memory

The significance of optimizing memory is self-evident. Generally speaking, it can be summarized into the following four points:

  • 1. Fewer OOMs, improving application stability.
  • 2. Less jank, improving application smoothness.
  • 3. Lower memory consumption, improving the app's survival rate in the background.
  • 4. Fewer exceptions and fewer hidden code-logic defects.

Note that an OOM is thrown where an allocation finally fails; that location does not necessarily contain the memory problem. The code that triggers the OOM is often innocent; it merely happens to execute the allocation that pushed memory over the limit.

3, Avoid memory leaks

1. Definition of memory leak

Garbage collection in the Android virtual machine is implemented through the VM's GC mechanism. The GC selects certain live objects as the root nodes of the memory traversal (GC Roots) and decides whether an object can be reclaimed based on its reachability from the GC Roots. A memory leak occurs when an object that is no longer needed in the current application lifecycle is still referenced from GC Roots and therefore cannot be reclaimed, shrinking the memory actually available.

2. Use MAT to find memory leaks

MAT tools can help developers locate objects that cause memory leaks, find large memory objects, solve memory leaks, and reduce memory consumption by optimizing memory objects.

Use steps

1. Open the Memory view of the Profiler in Android Studio, select the application process to analyze, and exercise the features suspected of leaking. Afterwards, trigger GC manually several times, and finally export the heap dump file.

2. Because Android Studio saves .hprof files in the Android Dalvik/ART format, they must be converted to the J2SE HPROF format before MAT can read and analyze them. The Android SDK ships a conversion tool under the SDK's platform-tools directory; the command is:

./hprof-conv file.hprof converted.hprof

3. Open the converted HPROF file in MAT.

MAT view

In the MAT window, the Overview tab shows overall memory consumption and suspected problems. MAT offers several analysis views, including Histogram, Dominator Tree, Top Consumers, and Leak Suspects, each with a different analysis angle. They are described below:


1,Histogram

Lists all instance types with their counts and sizes in memory, and supports regular-expression filtering in the Regex field at the top.

2,Dominator Tree

Lists the biggest objects and the objects they keep alive. Compared with Histogram, it makes reference relationships easier to see.

3,Top Consumers

Lists the largest objects graphically.

4,Leak Suspects

Automatically analyzes likely memory-leak causes and produces an overall leak report.

The two views most commonly used for memory analysis are Histogram and Dominator Tree. Each view has four columns:

  • Class Name: Class Name.
  • Objects: number of object instances.
  • Shallow Heap: the memory occupied by the object itself, excluding the objects it references. For a regular (non-array) object, the shallow size is determined by the number and types of its member variables; for an array, by the element type (object or primitive) and the array length. The real memory lives on the heap as native byte[], char[], int[] and so on; the object header itself is small. Shallow Heap is therefore not very useful for analyzing memory leaks.
  • Retained Heap: the size of the current object plus everything it directly or indirectly references that is reachable only through it. In other words, Retained Size is the total heap memory that could be reclaimed once the current object is garbage collected.

Find the specific location of the memory leak

Conventional mode

  • 1. Filter instances by package or type, or use the Regex field at the top, to pick out specific instances.
  • 2. Right-click a suspicious instance and choose Merge Shortest Paths to GC Roots -> exclude all phantom/weak/soft etc. references. (This shows the shortest strong-reference paths to GC Roots.)
  • 3. Analyze the reference chain, or find the cause by tracing the code logic.

Another faster method is to compare HPROF data before and after leakage:

  • 1. In each of the two HPROF files, add the Histogram or Dominator Tree to the Compare Basket.
  • 2. Click Compare in the Compare Basket to generate a comparison view. This lets you compare instance counts and memory footprints of the same objects across stages. For example, if the count grows for an object that should only ever have one instance, or should not grow at all, a leak has occurred; you then locate the specific cause in the code and fix it.

Note that if the target is unclear, you can go straight to the object with the largest Retained Heap, view its reference chain via incoming references, locate the suspicious object, and then analyze it with Path to GC Roots.

In addition, when too many objects in a hash collection map to the same hash value, performance suffers badly. MAT's Map Collision Ratio view can then be used to find the culprit behind the high collision rate.

Efficient way

In my usual project development, I usually use the following methods to quickly detect the memory leakage of the specified page (also known as runtime memory analysis and Optimization):

  • 1. Shell command + LeakCanary + MAT: run the program, exercise all features, and exit completely. Trigger GC manually, then use the adb shell dumpsys meminfo packagename -d command to check whether the Views and Activities counts under Objects are 0 after leaving the interface. If not, use LeakCanary to find the possible leaks, and finally confirm with MAT analysis; repeat until the result is satisfactory.
  • 2. Memory Profiler: run the program and inspect each page's memory. First, open and close the page 5 times, then trigger GC (click the trash-can icon in the upper-left of the Memory Profiler). If total memory has not fallen back to its previous value, a leak may have occurred. Then click the heap dump button next to the trash-can icon to inspect the current heap, and filter by package name to find the Activity under test; if multiple instances are alive, a leak has occurred.
  • 3. Starting from the home page, dump a memory snapshot of each page in turn, and use MAT's comparison feature to see what each page adds relative to the previous one, then optimize those additions specifically.
  • 4. Use the Android Memory Profiler to watch memory changes in real time as you enter each page, then analyze any large memory spikes.

Besides runtime memory analysis and optimization, we can also analyze and optimize the app's static memory, meaning the memory that exists for the app's entire life cycle. How do we capture a snapshot of this memory?

First, make sure the main features of each main page have been opened, then return to the home page, and in developer options enable "Don't keep activities". Next, send the app to the background, trigger GC, and dump a memory snapshot. Finally, analyze the dumped snapshot for optimization opportunities: loaded images, global singletons and configuration data, static fields and caches, analytics data, memory leaks, and so on.

3. Common memory leak scenarios

In essence, a memory leak means useless objects cannot be reclaimed. Below I summarize common leak cases (with solutions) that I have run into in projects.

1. Resource object not closed

When a resource object is no longer used, call its close() method (or recycle() for a Bitmap) promptly and then set the reference to null. Failing to release resources such as cursors, streams, or Bitmaps causes memory leaks; release them when the Activity is destroyed.
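The rule can be illustrated in plain Java with try-with-resources, which guarantees close() even when an exception is thrown; on Android, the same pattern applies to Cursor, streams, and similar resources (ByteArrayInputStream here is just a stand-in).

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Closing resources deterministically with try-with-resources.
class ResourceClose {
    static int readFirstByte(byte[] data) throws IOException {
        // close() is called automatically when the block exits, even on exceptions
        try (InputStream in = new ByteArrayInputStream(data)) {
            return in.read();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readFirstByte(new byte[] {42, 7}));  // -> 42
    }
}
```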

2. The registered object is not unregistered

For example, a BroadcastReceiver or an EventBus subscription that is never unregistered; unregister them promptly when the Activity is destroyed.

3. The static variables of the class hold large data objects

Try to avoid using static variables to store data, especially large data objects. It is recommended to use database storage.

4. Memory leak caused by singleton

Prefer the Application Context. If the Activity Context is required, wrap it in a weak reference when passing it in, retrieve it from the weak reference at the point of use, and simply return if it is no longer available.

5. Static instances of non static inner classes

Such a static instance lives as long as the application, so it keeps holding the Activity reference and the Activity's memory can never be reclaimed. The fix is to make the inner class a static inner class, or to extract it into a singleton. If a Context is needed, prefer the Application Context; if the Activity Context is unavoidable, remember to null the reference after use so GC can reclaim it, otherwise a leak occurs.
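A plain-JVM sketch of the recommended pattern (Host stands in for an Activity, Worker for the long-lived static inner class; both names are illustrative): the long-lived object holds only a WeakReference to its host, so the host remains collectable.

```java
import java.lang.ref.WeakReference;

// Stand-in for an Activity.
class Host {
    String name = "host";
}

// Stand-in for a static inner class / singleton that must not pin the host.
class Worker {
    private final WeakReference<Host> hostRef;

    Worker(Host host) { this.hostRef = new WeakReference<>(host); }

    String doWork() {
        Host host = hostRef.get();   // may be null once the host is collected
        return host == null ? "host gone" : "working for " + host.name;
    }

    public static void main(String[] args) {
        Host host = new Host();
        Worker worker = new Worker(host);
        System.out.println(worker.doWork());   // -> working for host
    }
}
```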

6. Handler temporary memory leak

After a Message is sent, it is stored in the MessageQueue, and each Message holds a target, a reference to its Handler. If a Message stays in the queue for a long time, the Handler cannot be reclaimed; if the Handler is non-static, it in turn holds its enclosing Activity or Service, which then cannot be reclaimed either. Because the MessageQueue is continuously polled on a Looper thread, when the Activity exits there may still be unprocessed or in-flight Messages in the queue; those Messages reference the Handler, the Handler references the Activity, and the Activity's memory cannot be released in time, producing a leak. The solutions are:

  • 1. Use a static inner Handler class and hold the outer object (usually the Activity) through a weak reference, so the Activity can still be collected.
  • 2. In the Activity's onDestroy or onStop, remove the Handler's messages from the message queue, so that none are left pending on the Looper thread.

Note that AsyncTask is also built on the Handler mechanism and carries the same leak risk, though usually a temporary one. For leaks caused by AsyncTask or threads, extract the AsyncTask or Runnable into a separate class, or use a static inner class.

7. Memory leak caused by objects in container not cleaned up

Before exiting the program, clear the collections' contents and set the references to null, then exit.
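A minimal sketch of the problem and the fix, using a hypothetical static cache: a static collection is a GC root, so everything it contains stays reachable until it is cleared.

```java
import java.util.ArrayList;
import java.util.List;

// A static collection keeps its contents reachable from a GC root;
// clear() it (or null the field) when the contents are no longer needed.
class StaticCache {
    static final List<byte[]> CACHE = new ArrayList<>();

    static void fill() {
        for (int i = 0; i < 10; i++) CACHE.add(new byte[1024]);
    }

    static void release() {
        CACHE.clear();   // entries become unreachable and can be collected
    }
}
```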


8. Memory leak caused by WebView

WebView has a leak problem: once a WebView is used in the application, its memory is not released. We can run the WebView in an independent process and communicate with the app's main process via AIDL; the WebView process can then be destroyed at an appropriate moment according to business needs, so its memory is genuinely released.

9. Memory leak caused by using ListView

When implementing an Adapter, reuse the cached convertView instead of inflating a new view each time.

4. Memory leak monitoring

In general, LeakCanary can be used to monitor memory leaks. For usage and principle analysis, see my earlier article, Source code analysis of Android mainstream third-party libraries (VI): an in-depth look at the LeakCanary source code.

Beyond the basic usage, we can also customize how results are handled. First, subclass DisplayLeakService to implement a custom monitoring Service. The code is as follows:

public class LeakCanaryService extends DisplayLeakService {

    private static final String TAG = "LeakCanaryService";

    @Override
    protected void afterDefaultHandling(HeapDump heapDump, AnalysisResult result, String leakInfo) {
        super.afterDefaultHandling(heapDump, result, leakInfo);
        // Process the analysis result here, e.g. enrich the report or upload it to a server.
    }
}

Override the afterDefaultHandling method to process the data you need. The three parameters are:

  • heapDump: heap memory file. You can get the complete hprof file for MAT analysis.
  • result: the monitored memory status, such as whether it is leaking.
  • leakInfo: leak trace details, including device information in addition to memory leak objects.

Then, when installing LeakCanary, pass in the custom LeakCanaryService. The code is as follows:

public class BaseApplication extends Application {

    private RefWatcher mRefWatcher;

    @Override
    public void onCreate() {
        super.onCreate();
        mRefWatcher = LeakCanary.install(this, LeakCanaryService.class,
                AndroidExcludedRefs.createAppDefaults().build());
    }
}

After such processing, you can implement your own processing methods in LeakCanaryService, such as rich prompt information, save the data locally and upload it to the server for analysis.

Note

LeakCanaryService needs to be registered in AndroidManifest.

4, Optimize memory space

1. Object reference

Since Java 1.2, three additional reference types have been available: SoftReference, WeakReference and PhantomReference. Their purpose is to let an object remain referenced while still being eligible for garbage collection. Before these classes were introduced, only strong references existed; a reference with no specified type is a strong reference by default. Let's look at each of these references in turn.

1. Strong reference

As long as an object has a strong reference, the GC will never reclaim it. When memory runs out, the JVM throws an OOM error rather than reclaiming strongly referenced objects.

2. Soft reference

If an object has only soft references, it will not be reclaimed during GC while memory is sufficient; when memory runs low, the memory of these objects will be reclaimed. Soft references can therefore be used to implement memory-sensitive caches.

A soft reference can be used together with a ReferenceQueue. If the object referred to by the soft reference is reclaimed by the garbage collector, the JVM adds the soft reference itself to the associated reference queue.

3. Weak reference

When the garbage collector scans the memory area under its control and finds an object with only weak references, it reclaims that memory regardless of whether the current memory space is sufficient. However, since the garbage collector runs on a low-priority thread, it will not necessarily find such objects quickly.

Note here that you may need to run GC multiple times to find and release weak reference objects.

4. Phantom reference

A phantom reference can only be used to track the collection of the referenced object; its get() method always returns null. It must be used together with the ReferenceQueue class, because the queue is what provides the notification mechanism.
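The behavior of these reference types can be shown with a small plain-Java sketch (the class name ReferenceDemo is illustrative; to keep it deterministic, the weak reference is cleared manually instead of waiting for a GC cycle):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {

    // PhantomReference.get() always returns null: the referent can only be
    // observed indirectly through the queue, which is why a ReferenceQueue
    // is mandatory for phantom references.
    static boolean phantomGetIsNull() {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom = new PhantomReference<>(new Object(), queue);
        return phantom.get() == null;
    }

    // Soft and weak references expose the referent through get() while it is
    // still reachable; once cleared (as the GC would do), get() returns null.
    static boolean weakRefReadableWhileReachable() {
        Object referent = new Object();
        WeakReference<Object> weak = new WeakReference<>(referent);
        SoftReference<Object> soft = new SoftReference<>(referent);
        boolean readable = weak.get() == referent && soft.get() == referent;
        weak.clear();   // simulate what the GC does when it reclaims the referent
        return readable && weak.get() == null;
    }

    public static void main(String[] args) {
        System.out.println(phantomGetIsNull());             // true
        System.out.println(weakRefReadableWhileReachable()); // true
    }
}
```

In real code the GC clears these references itself; the manual clear() above only stands in for that so the demo runs the same way every time.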

2. Reduce unnecessary memory overhead


1. AutoBoxing

The core of autoboxing is converting a primitive type into the corresponding wrapper type. Each autoboxing conversion may create a new object, which adds memory and performance overhead. For example, an int occupies only 4 bytes, while an Integer object takes 16 bytes. Containers such as HashMap are especially prone to this: adding, deleting, modifying, and querying entries with primitive keys or values generates a large number of autoboxing operations.
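The object creation behind autoboxing can be observed directly: boxing goes through Integer.valueOf, which only caches values in the range -128..127. The class below is an illustrative sketch (the behavior for values above 127 assumes a default JVM without an enlarged autobox cache):

```java
public class AutoBoxingDemo {

    // Autoboxing compiles to Integer.valueOf(int); values inside the
    // -128..127 cache return the same shared Integer instance.
    static boolean smallValuesShareCachedObject() {
        Integer a = 127;   // boxed via Integer.valueOf -> cached instance
        Integer b = 127;
        return a == b;     // same object from the Integer cache
    }

    // Values outside the cache allocate a new Integer on every boxing,
    // which is exactly the hidden cost this section warns about.
    static boolean largeValuesAllocateNewObjects() {
        Integer c = 128;   // outside the cache -> fresh allocation
        Integer d = 128;
        return c != d;     // two distinct objects for the same value
    }

    public static void main(String[] args) {
        System.out.println(smallValuesShareCachedObject());   // true
        System.out.println(largeValuesAllocateNewObjects());  // true
    }
}
```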

Detection mode

Use TraceView to inspect where time is spent. If you find a large number of calls to Integer.valueOf, autoboxing is occurring.

2. Memory reuse

For memory reuse, there are four feasible ways:

  • Resource reuse: reuse of common string, color definition and simple page layout.
  • View reuse: you can use ViewHolder to implement ConvertView reuse.
  • Object pool: explicitly create an object pool and implement reuse logic, so that data of the same type reuses the same memory space.
  • Bitmap object reuse: use the inBitmap attribute to tell the bitmap decoder to try to reuse an existing memory area, so that a newly decoded bitmap reuses the pixel memory of a previous bitmap in the heap.
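The object-pool item above can be sketched in a few lines of plain Java. SimplePool here is a hypothetical minimal pool (similar in spirit to Pools.SimplePool in the androidx core utilities); a production pool would also reset the object's state on release:

```java
import java.util.ArrayDeque;

// A minimal, hypothetical object pool: acquire() reuses a previously
// released instance instead of allocating a new one each time.
public class SimplePool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final int maxSize;

    public SimplePool(int maxSize) {
        this.maxSize = maxSize;
    }

    // Returns a pooled instance, or null if the pool is empty and the
    // caller should allocate a fresh one.
    public T acquire() {
        return free.poll();
    }

    // Hands an instance back for reuse; drops it if the pool is full.
    public boolean release(T obj) {
        if (free.size() < maxSize) {
            free.push(obj);
            return true;
        }
        return false;
    }
}
```

The caller allocates only when acquire() returns null, so steady-state usage produces no new allocations at all.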

3. Use the best data type

1. HashMap and ArrayMap

HashMap is a hash table with linked-list chaining. When an element is put into a HashMap, a hash value is computed from the key's hashCode, and that hash determines the element's position in the backing array. If other elements are already stored at that position, they are chained as a linked list, with the newly added element at the head of the chain; if the position is empty, the element is placed there directly. In other words, before an object is inserted into the HashMap, an index into the hash array is computed and the key-value pair is stored at that index. The biggest problem to consider is therefore hash conflicts: when multiple keys hash to the same array position, a conflict occurs. HashMap allocates a large array to reduce potential conflicts and relies on chaining to resolve the conflicts that remain.
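The chaining behavior is easy to observe with two keys whose hashCodes collide; "Aa" and "BB" are a classic colliding pair (both hash to 2112). The CollisionDemo class below is an illustrative sketch, not part of the article's code:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {

    // "Aa" and "BB" produce identical hashCodes, so they land in the
    // same bucket of the HashMap's backing array.
    static boolean sameHash() {
        return "Aa".hashCode() == "BB".hashCode();  // both 2112
    }

    // Despite colliding, both keys remain retrievable: the bucket chains
    // the entries and equals() distinguishes them on lookup.
    static Map<String, Integer> collidingMap() {
        Map<String, Integer> map = new HashMap<>();
        map.put("Aa", 1);
        map.put("BB", 2);
        return map;
    }

    public static void main(String[] args) {
        System.out.println(sameHash());                // true
        System.out.println(collidingMap().get("Aa"));  // 1
        System.out.println(collidingMap().get("BB"));  // 2
    }
}
```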

ArrayMap provides the same functionality as HashMap but avoids the excessive memory overhead by using two small arrays instead of one large array, and an ArrayMap is laid out contiguously in memory.

Generally speaking, insert and delete operations in an ArrayMap perform worse than in a HashMap, but with only a small number of objects, say fewer than 1000, you don't need to worry about this, because the ArrayMap will not allocate overly large arrays.

In addition, Android itself provides a series of optimized collection classes, such as SparseArray, SparseBooleanArray, and LongSparseArray. Using these APIs makes our programs more efficient: HashMap is relatively inefficient because it must allocate an entry object for every key-value pair, whereas SparseArray avoids boxing primitive keys into objects.
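To illustrate the design these classes share, here is a heavily stripped-down, hypothetical sketch in plain Java of the two-parallel-array idea: int keys in a sorted int[] and values in a parallel Object[], looked up by binary search, so keys are never boxed and no per-entry object is allocated (the real SparseArray also handles deletion, slot garbage collection, etc.):

```java
import java.util.Arrays;

// Minimal sketch of a SparseArray-style map: no Integer boxing for keys,
// no per-entry objects, at the cost of O(log n) lookup and O(n) insert.
public class IntArrayMap<V> {
    private int[] keys = new int[8];
    private Object[] values = new Object[8];
    private int size = 0;

    public void put(int key, V value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) {                // key already present: overwrite
            values[i] = value;
            return;
        }
        i = ~i;                      // binarySearch encodes the insertion point
        if (size == keys.length) {   // grow both arrays together
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    @SuppressWarnings("unchecked")
    public V get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? (V) values[i] : null;
    }

    public int size() {
        return size;
    }
}
```

This is exactly the trade-off described above: slower structural modification than HashMap, far less memory per entry.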

2. Use IntDef and StringDef instead of enumeration types

The dex size of an enum is more than 13 times that of an ordinary constant definition. Meanwhile, at runtime the declaration of a single enum value consumes at least 20 bytes of memory.

The greatest advantage of enums is type safety, but on the Android platform the memory overhead of an enum is more than three times that of directly defined constants. Android therefore provides an annotation-based way to check type safety at compile time. Two annotations are currently provided, for int and String: IntDef and StringDef.

Note

Using IntDef and StringDef requires adding the corresponding dependency in the Gradle configuration:

compile ''
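The pattern looks like the sketch below. The @Mode annotation here is a plain-Java stand-in so the example is self-contained; in real Android code you would declare it with @IntDef({MODE_WIFI, MODE_MOBILE, MODE_OFFLINE}) from the support/androidx annotations library, and lint would then flag any call that passes an arbitrary int:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Sketch of the @IntDef pattern: plain int constants plus a SOURCE-retention
// annotation that marks which values a parameter may take. The class and
// constant names are illustrative.
public class NetworkModes {

    public static final int MODE_WIFI = 0;
    public static final int MODE_MOBILE = 1;
    public static final int MODE_OFFLINE = 2;

    // With androidx this would be annotated:
    // @IntDef({MODE_WIFI, MODE_MOBILE, MODE_OFFLINE})
    @Retention(RetentionPolicy.SOURCE)
    public @interface Mode {
    }

    private static int sMode = MODE_OFFLINE;

    // Callers pass one of the constants above; with the real @IntDef,
    // lint reports any other int at compile time.
    public static void setMode(@Mode int mode) {
        sMode = mode;
    }

    public static int getMode() {
        return sMode;
    }
}
```

At runtime these are ordinary ints, so the enum's per-value object cost disappears while the compile-time check is preserved.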


3. LruCache

LruCache is a least-recently-used cache that holds the cached objects with strong references. Internally it wraps a LinkedHashMap, which maintains the access-ordered doubly linked list but is not itself thread-safe; LruCache adds the thread-safety on top. When a value is accessed, it is moved to the tail of the queue; when the cache is full, the value at the head of the queue (the least recently used) is evicted and can then be reclaimed by GC.

In addition to the normal get/put methods, there is a sizeOf method, which returns the size of each cached object, and an entryRemoved method, which is called when a cached object is evicted. When its first parameter is true, the cached object was evicted to make room; otherwise, the entry was removed by remove or overwritten by put.

Note

When sizing an LruCache, consider how much memory the application has left.
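The LinkedHashMap mechanism that LruCache builds on can be shown in a few lines. SimpleLru below is an illustrative sketch only (no thread safety, entry counting instead of sizeOf, no entryRemoved hook):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache built on LinkedHashMap in access order: the same
// structure LruCache wraps, without its thread-safety and size accounting.
public class SimpleLru<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLru(int maxEntries) {
        super(16, 0.75f, true);   // accessOrder=true: get() moves the entry to the tail
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the head (least recently used entry) once capacity is exceeded.
        return size() > maxEntries;
    }
}
```

Accessing an entry protects it from the next eviction, which is exactly the behavior described above for LruCache.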

4. Picture memory optimization

In Android, by default, a picture file decoded into a bitmap is processed at 32 bits per pixel: the red, green, blue, and alpha channels are 8 bits each. Even pictures without an alpha channel, such as JPEG, are decoded into 32-bit bitmaps, so the 8 bits of alpha data in each pixel are allocated for nothing. Before these pictures are rendered on screen, they must first be uploaded to the GPU as textures, which means each picture occupies both CPU memory and GPU memory. Below I summarize several common ways to reduce this memory overhead:

1. Set the bitmap format: RGB_565 can be considered when displaying small pictures or when lower picture quality is acceptable; ARGB_4444 can generally be tried for avatars or rounded images. The bitmap format is selected by setting the inPreferredConfig parameter. The code is as follows:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565;
BitmapFactory.decodeStream(is, null, options);

2. inSampleSize: the inSampleSize property in the bitmap function object implements the scaling function of the bitmap. The code is as follows:

BitmapFactory.Options options = new BitmapFactory.Options();
// Set to 4 to scale the width and height to 1/4 of the original picture
options.inSampleSize = 4;
BitmapFactory.decodeStream(is, null, options);

3. inScaled, inDensity, and inTargetDensity achieve finer scaling of pictures: when inScaled is set to true, the system scales the bitmap by inTargetDensity divided by inDensity. The code is as follows:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = true;
options.inDensity = srcWidth;
options.inTargetDensity = dstWidth;
BitmapFactory.decodeStream(is, null, options);

A drawback of the schemes above is the extra computation they introduce, which adds time overhead to the picture display process; with many pictures this affects the display effect. The best solution is to combine two of them for the best performance: first use inSampleSize to scale the picture to a power of two close to the target size, then use inDensity and inTargetDensity to generate the exact final size. This works because inSampleSize reduces the number of pixels first, and the remaining pixels are then re-filtered based on the required output density. To obtain the size of the source picture, set the inJustDecodeBounds field of the options object to true and decode once; this yields the picture's width and height without allocating pixel memory, after which decoding can continue with the optimized options. The overall code is as follows:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeStream(is, null, options);
// Note: the stream has been consumed; reset or reopen it before decoding again
options.inScaled = true;
options.inDensity = options.outWidth;
options.inSampleSize = 4;
options.inTargetDensity = dstWidth * options.inSampleSize;
options.inJustDecodeBounds = false;
BitmapFactory.decodeStream(is, null, options);


5. Reuse of bitmap memory

This can be implemented in combination with LruCache: when LruCache evicts pictures that exceed the cache size, temporarily cache the Bitmap in a soft-reference set. When a new Bitmap needs to be created, the most suitable Bitmap can be found in this soft-reference set and its memory area reused.

Note that the new bitmap and the reused bitmap must have the same decode format. In addition, before Android 4.4 only a Bitmap memory area of exactly the same size can be reused, while from Android 4.4 on any bitmap memory area that is large enough can be reused.

6. Picture placement optimization

The UI only needs to provide one set of high-resolution images; it is recommended to put them in the drawable-xxhdpi folder. On lower-resolution devices the images are then only scaled down, so memory does not increase. Files that must not be scaled should go in the drawable-nodpi folder.
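The arithmetic behind this rule can be sketched as follows. The helper is hypothetical and only approximates the resource-scaling formula (decoded size = pixel size × device density / folder density, with xxhdpi = 480 dpi, xhdpi = 320 dpi, hdpi = 240 dpi):

```java
public class DensityScaleDemo {

    // Approximate pixel size Android uses when decoding a drawable from a
    // density-qualified folder: px * deviceDpi / folderDpi, rounded.
    static int scaledSize(int px, int deviceDpi, int folderDpi) {
        return (px * deviceDpi + folderDpi / 2) / folderDpi;
    }

    public static void main(String[] args) {
        // A 300 px image from drawable-xxhdpi (480) on an xhdpi (320) device:
        System.out.println(scaledSize(300, 320, 480));  // 200: scaled DOWN, less memory
        // The same image mistakenly placed in drawable-hdpi (240):
        System.out.println(scaledSize(300, 320, 240));  // 400: scaled UP, more memory
    }
}
```

This is why placing the one high-resolution set in drawable-xxhdpi is safe: on lower-density devices the bitmap only ever shrinks, whereas an image placed in a low-density folder gets upscaled on most devices and its memory footprint grows quadratically with the scale factor.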

7. Actively release memory when the available memory of App is too low

When the app retreats to the background and memory is tight enough that it is about to be killed, override the onTrimMemory/onLowMemory callbacks to release the image cache and static caches.

8. Release image references when recycled items become invisible

  • ListView: every time an item is recycled and reused, its data is rebound, so it is enough to release the image reference in the ImageView's onDetachedFromWindow callback.
  • RecyclerView: when a recycled item becomes invisible, it is first placed into mCachedViews; items reused from there do not go through onBindViewHolder again. Only items recycled into the RecycledViewPool have their data rebound when reused. Therefore, override onViewRecycled() in RecyclerView.Adapter and release the image reference when the item is recycled into the RecycledViewPool.

9. Avoid creating unnecessary objects

For example, use StringBuilder (or StringBuffer when thread safety is needed) when concatenating strings, instead of repeated String concatenation.
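The point is that each String "+=" allocates a new String (plus a hidden temporary StringBuilder) per iteration, while one reused StringBuilder allocates almost nothing. A minimal sketch (the class and method names are illustrative):

```java
public class ConcatDemo {

    // Builds "a,b,c"-style output with a single StringBuilder, instead of
    // allocating a new String object on every loop iteration.
    static String joinWithBuilder(String[] parts, char sep) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) {
                sb.append(sep);
            }
            sb.append(parts[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(joinWithBuilder(new String[]{"a", "b", "c"}, ','));  // a,b,c
    }
}
```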

10. Memory optimization in custom View

For example, do not create objects in the onDraw method, which runs on every frame. Generally speaking, objects such as Paint should be created in the custom View's constructor.

11. Other memory optimization considerations

In addition to the above memory optimization points, here are some memory optimization points that we need to pay attention to, as shown below:

  • Use static final for constant member variables whenever possible.
  • Use the enhanced for-loop syntax.
  • If there is no special reason, use primitive types instead of their wrapper types; int is more efficient than Integer, and the same holds for the other types.
  • Use soft and weak references where appropriate.
  • Adopt both a memory cache and a disk cache.
  • Use static inner classes to avoid the potential memory leaks caused by non-static inner classes.

5, Design and implementation of image management module

When designing a module, the following points need to be considered:

  • 1. Single responsibility
  • 2. Avoid coupling between different functions
  • 3. Interface isolation

Before writing code, draw a UML diagram to determine the responsibilities of each object, method, and interface. First, try to satisfy the single-responsibility principle; on that basis, clarify the relationships between modules; finally, implement them in code.

1. Realize asynchronous loading function

1. Realize network picture display

ImageLoader is the base class for image loading. It has an inner class, BitmapLoadTask, which extends AsyncTask and manages the asynchronous work, responsible for downloading the image and refreshing the view. MiniImageLoader is a subclass of ImageLoader, maintained as a singleton, that implements the base class's network-loading hook. Because the concrete download may use different engines in different applications, it is abstracted into a method that is easy to replace. The code is as follows:

public abstract class ImageLoader {

    private boolean mExitTasksEarly = false;   // exit all tasks early
    protected boolean mPauseWork = false;
    private final Object mPauseWorkLock = new Object();

    protected ImageLoader() {
    }

    public void loadImage(String url, ImageView imageView) {
        if (url == null) {
            return;
        }
        BitmapDrawable bitmapDrawable = null;   // a memory-cache lookup would go here
        if (bitmapDrawable != null) {
            imageView.setImageDrawable(bitmapDrawable);
        } else {
            final BitmapLoadTask task = new BitmapLoadTask(url, imageView);
            task.execute();
        }
    }

    private class BitmapLoadTask extends AsyncTask<Void, Void, Bitmap> {

        private final String mUrl;
        private final WeakReference<ImageView> imageViewWeakReference;

        public BitmapLoadTask(String url, ImageView imageView) {
            mUrl = url;
            imageViewWeakReference = new WeakReference<ImageView>(imageView);
        }

        @Override
        protected Bitmap doInBackground(Void... params) {
            Bitmap bitmap = null;

            // Block here while loading is paused (e.g. the list is flinging)
            synchronized (mPauseWorkLock) {
                while (mPauseWork && !isCancelled()) {
                    try {
                        mPauseWorkLock.wait();
                    } catch (InterruptedException e) {
                        // the loop re-checks the conditions
                    }
                }
            }

            if (bitmap == null
                    && !isCancelled()
                    && imageViewWeakReference.get() != null
                    && !mExitTasksEarly) {
                bitmap = downLoadBitmap(mUrl);
            }
            return bitmap;
        }

        @Override
        protected void onPostExecute(Bitmap bitmap) {
            if (isCancelled() || mExitTasksEarly) {
                bitmap = null;
            }
            ImageView imageView = imageViewWeakReference.get();
            if (bitmap != null && imageView != null) {
                setImageBitmap(imageView, bitmap);
            }
        }

        @Override
        protected void onCancelled(Bitmap bitmap) {
            synchronized (mPauseWorkLock) {
                mPauseWorkLock.notifyAll();
            }
        }
    }

    public void setPauseWork(boolean pauseWork) {
        synchronized (mPauseWorkLock) {
            mPauseWork = pauseWork;
            if (!mPauseWork) {
                mPauseWorkLock.notifyAll();
            }
        }
    }

    public void setExitTasksEarly(boolean exitTasksEarly) {
        mExitTasksEarly = exitTasksEarly;
    }

    private void setImageBitmap(ImageView imageView, Bitmap bitmap) {
        imageView.setImageBitmap(bitmap);
    }

    protected abstract Bitmap downLoadBitmap(String mUrl);
}

The setPauseWork method is a thread-control interface for picture loading: pauseWork pauses and resumes the image module. Typically, in a ListView or similar control, loading is paused while the list is scrolling to keep scrolling smooth. In addition, the concrete image downloading and decoding are strongly tied to the business, so ImageLoader provides no concrete implementation; they are defined as an abstract method instead.

MiniImageLoader is a singleton, which ensures that the application maintains only one ImageLoader, reducing object overhead and centralizing the management of all image loads. The MiniImageLoader code is as follows:

public class MiniImageLoader extends ImageLoader {

    private volatile static MiniImageLoader sMiniImageLoader = null;
    private ImageCache mImageCache = null;

    public static MiniImageLoader getInstance() {
        if (null == sMiniImageLoader) {
            synchronized (MiniImageLoader.class) {
                if (null == sMiniImageLoader) {
                    sMiniImageLoader = new MiniImageLoader();
                }
            }
        }
        return sMiniImageLoader;
    }

    private MiniImageLoader() {
        mImageCache = new ImageCache();
    }

    @Override
    protected Bitmap downLoadBitmap(String mUrl) {
        HttpURLConnection urlConnection = null;
        InputStream in = null;
        try {
            final URL url = new URL(mUrl);
            urlConnection = (HttpURLConnection) url.openConnection();
            in = urlConnection.getInputStream();
            return decodeSampledBitmapFromStream(in, null);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (urlConnection != null) {
                urlConnection.disconnect();
                urlConnection = null;
            }
            if (in != null) {
                try {
                    in.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        return null;
    }

    public Bitmap decodeSampledBitmapFromStream(InputStream is, BitmapFactory.Options options) {
        return BitmapFactory.decodeStream(is, null, options);
    }
}

The volatile keyword ensures that the object is read from main memory. Also, the try...catch nesting above is too deep. Java provides the Closeable interface, which identifies objects that can be closed, so the following utility class can be written:

public class CloseUtils {

    public static void closeQuietly(Closeable closeable) {
        if (null != closeable) {
            try {
                closeable.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
After transformation, the following is shown:

finally {
    if (urlConnection != null) {
        urlConnection.disconnect();
    }
    CloseUtils.closeQuietly(in);
}

At the same time, to keep the ListView smooth while scrolling, pause picture loading during a fling to reduce system overhead. The code is as follows:

listView.setOnScrollListener(new AbsListView.OnScrollListener() {

    @Override
    public void onScrollStateChanged(AbsListView absListView, int scrollState) {
        if (scrollState == AbsListView.OnScrollListener.SCROLL_STATE_FLING) {
            MiniImageLoader.getInstance().setPauseWork(true);   // pause while flinging
        } else {
            MiniImageLoader.getInstance().setPauseWork(false);  // resume otherwise
        }
    }

    @Override
    public void onScroll(AbsListView view, int firstVisibleItem,
                         int visibleItemCount, int totalItemCount) {
    }
});

2. Single picture memory optimization

Here, a BitmapConfig class is used to configure parameters. The code is as follows:

public class BitmapConfig {

    private int mWidth, mHeight;
    private Bitmap.Config mPreferred;

    public BitmapConfig(int width, int height) {
        this.mWidth = width;
        this.mHeight = height;
        this.mPreferred = Bitmap.Config.RGB_565;
    }

    public BitmapConfig(int width, int height, Bitmap.Config preferred) {
        this.mWidth = width;
        this.mHeight = height;
        this.mPreferred = preferred;
    }

    public BitmapFactory.Options getBitmapOptions() {
        return getBitmapOptions(null);
    }

    // Accurate calculation requires decoding the picture bounds from the
    // stream first, then computing the sample size from the aspect ratio
    public BitmapFactory.Options getBitmapOptions(InputStream is) {
        final BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferredConfig = mPreferred;
        if (is != null) {
            options.inJustDecodeBounds = true;
            BitmapFactory.decodeStream(is, null, options);
            options.inSampleSize = calculateInSampleSize(options, mWidth, mHeight);
        }
        options.inJustDecodeBounds = false;
        return options;
    }

    private static int calculateInSampleSize(BitmapFactory.Options options,
                                             int mWidth, int mHeight) {
        final int height = options.outHeight;
        final int width = options.outWidth;
        int inSampleSize = 1;
        if (height > mHeight || width > mWidth) {
            final int halfHeight = height / 2;
            final int halfWidth = width / 2;
            while ((halfHeight / inSampleSize) > mHeight
                    && (halfWidth / inSampleSize) > mWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }
}

Then, in the downLoadBitmap method of MiniImageLoader, add a step to obtain the BitmapFactory.Options:

final URL url = new URL(urlString);
urlConnection = (HttpURLConnection) url.openConnection();
in = urlConnection.getInputStream();
final BitmapFactory.Options options = mConfig.getBitmapOptions(in);
// The stream was consumed while reading the bounds, so open a new connection
urlConnection = (HttpURLConnection) url.openConnection();
in = urlConnection.getInputStream();
Bitmap bitmap = decodeSampledBitmapFromStream(in, options);

There are still some problems after optimization:

  • 1. The same picture is reloaded every time it is displayed;
  • 2. The overall memory overhead is uncontrolled: although the overhead of a single picture is reduced, the lack of a reasonable management mechanism still seriously affects performance when there are many pictures.

To solve these two problems, we need a memory-pool design: control the overall picture memory through the pool, and avoid reloading and redecoding pictures that are already displayed.

2. Implement a three-level cache

Memory -- disk -- network

1. Memory cache

Using soft or weak references to implement a memory pool used to be common practice, but it is no longer recommended. Starting from API 9 (Android 2.3), the garbage collector aggressively reclaims soft- and weak-referenced objects, making them unreliable as a cache. Moreover, before Android 3.0 (API 11), a bitmap's pixel data was stored in native memory and was not released in a predictable way, posing a potential risk of memory overflow. Using LruCache for memory management is the reliable approach: its core principle is to keep recently used objects in a LinkedHashMap with strong references, and to evict the least recently used objects before the cache reaches its preset size. The code for an image memory cache based on LruCache is as follows:

public class MemoryCache {

    private final int DEFAULT_MEM_CACHE_SIZE = 1024 * 12;   // 12MB, measured in KB
    private LruCache<String, Bitmap> mMemoryCache;
    private final String TAG = "MemoryCache";

    public MemoryCache(float sizePer) {
        init(sizePer);
    }

    private void init(float sizePer) {
        int cacheSize = DEFAULT_MEM_CACHE_SIZE;
        if (sizePer > 0) {
            // Use the given fraction of the max heap, in KB
            cacheSize = Math.round(sizePer * Runtime.getRuntime().maxMemory() / 1024);
        }
        mMemoryCache = new LruCache<String, Bitmap>(cacheSize) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                final int bitmapSize = getBitmapSize(value) / 1024;
                return bitmapSize == 0 ? 1 : bitmapSize;
            }

            @Override
            protected void entryRemoved(boolean evicted, String key,
                                        Bitmap oldValue, Bitmap newValue) {
                super.entryRemoved(evicted, key, oldValue, newValue);
            }
        };
    }

    public int getBitmapSize(Bitmap bitmap) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
            return bitmap.getAllocationByteCount();
        }
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB_MR1) {
            return bitmap.getByteCount();
        }
        return bitmap.getRowBytes() * bitmap.getHeight();
    }

    public Bitmap getBitmap(String url) {
        Bitmap bitmap = null;
        if (mMemoryCache != null) {
            bitmap = mMemoryCache.get(url);
        }
        if (bitmap != null) {
            Log.d(TAG, "Memory cache exist");
        }
        return bitmap;
    }

    public void addBitmapToCache(String url, Bitmap bitmap) {
        if (url == null || bitmap == null) {
            return;
        }
        mMemoryCache.put(url, bitmap);
    }

    public void clearCache() {
        if (mMemoryCache != null) {
            mMemoryCache.evictAll();
        }
    }
}
What percentage of cacheSize in the above code is appropriate? It can be considered based on the following points:

  • 1. The memory usage of the application: besides pictures, is there other large data that needs to be cached in memory?
  • 2. How many pictures will most screens of the application display at once? Prioritize enough cache for the maximum number of on-screen pictures.
  • 3. Based on the bitmap format, calculate the memory occupied by a single picture.
  • 4. How frequently the images are accessed.

In an application, if some images are accessed more frequently than others, or must stay displayed at all times, they need to be kept in memory. In this case, multiple LruCache objects can be used to manage several groups of bitmaps: classify the bitmaps and put different levels into different LruCaches.

2. bitmap memory reuse

Since Android 3.0, bitmap memory can be reused through the BitmapFactory.Options.inBitmap attribute. If this attribute is set to a valid bitmap, the decode method tries to reuse that existing bitmap when loading content. This means the bitmap's memory is reused, which reduces allocation and collection and improves picture performance. The code is as follows:

mReusableBitmaps = Collections.synchronizedSet(new HashSet<SoftReference<Bitmap>>());

Because the inBitmap attribute is only supported from Android 3.0 on, evicted bitmaps, which were previously simply dropped, are added to a soft-reference set in entryRemoved to serve as reuse candidates. The code is as follows:

    mReusableBitmaps.add(new SoftReference<Bitmap>(oldValue));

Similarly, on 3.0 and above, when a new bitmap object needs to be allocated, first check whether a reusable bitmap object exists:

public static Bitmap decodeSampledBitmapFromStream(InputStream is,
        BitmapFactory.Options options, ImageCache cache) {
    addInBitmapOptions(options, cache);
    return BitmapFactory.decodeStream(is, null, options);
}

private static void addInBitmapOptions(BitmapFactory.Options options, ImageCache cache) {
    // inBitmap only works with mutable bitmaps
    options.inMutable = true;
    if (cache != null) {
        Bitmap inBitmap = cache.getBitmapFromReusableSet(options);
        if (inBitmap != null) {
            options.inBitmap = inBitmap;
        }
    }
}

Next, the cache.getBitmapFromReusableSet method finds an appropriate bitmap to assign to inBitmap. The code is as follows:

// Get an inBitmap candidate to realize memory reuse
public Bitmap getBitmapFromReusableSet(BitmapFactory.Options options) {
    Bitmap bitmap = null;

    if (mReusableBitmaps != null && !mReusableBitmaps.isEmpty()) {
        final Iterator<SoftReference<Bitmap>> iterator = mReusableBitmaps.iterator();
        Bitmap item;

        while (iterator.hasNext()) {
            item =;

            if (null != item && item.isMutable()) {
                if (canUseForInBitmap(item, options)) {

                    Log.v("TEST", "canUseForInBitmap!!!!");

                    bitmap = item;

                    // Remove from reusable set so it can't be used again
                    iterator.remove();
                    break;
                }
            } else {
                // Remove from the set if the reference has been cleared.
                iterator.remove();
            }
        }
    }
    return bitmap;
}

The method above finds a Bitmap of a suitable specification in the soft-reference collection to use as the reuse target, because inBitmap has some restrictions: before Android 4.4, only bitmaps of exactly the same size can be reused. The canUseForInBitmap method determines whether a given Bitmap can be reused. The code is as follows:

private static boolean canUseForInBitmap(
        Bitmap candidate, BitmapFactory.Options targetOptions) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.KITKAT) {
        // Before 4.4, reuse requires exactly matching size and inSampleSize == 1
        return candidate.getWidth() == targetOptions.outWidth
                && candidate.getHeight() == targetOptions.outHeight
                && targetOptions.inSampleSize == 1;
    }

    // From 4.4 on, the candidate only needs a large enough allocation
    int width = targetOptions.outWidth / targetOptions.inSampleSize;
    int height = targetOptions.outHeight / targetOptions.inSampleSize;

    int byteCount = width * height * getBytesPerPixel(candidate.getConfig());

    return byteCount <= candidate.getAllocationByteCount();
}

3. Disk cache

Since disk read time is unpredictable, picture decoding and file reading should be done on a background thread. DiskLruCache (taken from the Android platform sources; it is not part of the public SDK) is the class used to manage the disk cache.

1. First, call the open method of DiskLruCache for initialization. The code is as follows:

public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize)

directory is generally recommended to be a cache directory on the SD card. When appVersion changes, the data of the previous version is automatically deleted. valueCount is the number of values per key, usually 1. maxSize is the maximum size of the cached picture data. The code for initializing DiskLruCache is as follows:

private void init(final long cacheSize, final File cacheFile) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            synchronized (mDiskCacheLock) {
                MLog.d(TAG, "Init DiskLruCache cache path:" + cacheFile.getPath()
                        + "\r\n" + "Disk Size:" + cacheSize);
                try {
                    mDiskLruCache =,
                            MiniImageLoaderConfig.VESION_IMAGELOADER, 1, cacheSize);
                    // Finished initialization
                    mDiskCacheStarting = false;
                    // Wake any waiting threads
                    mDiskCacheLock.notifyAll();
                } catch (IOException e) {
                    MLog.e(TAG, "Init err:" + e.getMessage());
                }
            }
        }
    }).start();
}
Write or read operations attempted before initialization completes would fail, so the Object wait/notifyAll mechanism is used throughout the DiskCache to avoid this synchronization problem.

2. Write to DiskLruCache

First, get an Editor instance; this requires passing in a key that corresponds uniquely to the picture. Because a URL may contain characters that are not allowed in file names, the MD5 value of the URL is used as the file name, establishing the correspondence between key and picture. The code for computing the MD5 of the URL is as follows:

private String hashKeyForDisk(String key) {
    String cacheKey;
    try {
        final MessageDigest mDigest = MessageDigest.getInstance("MD5");
        mDigest.update(key.getBytes());
        cacheKey = bytesToHexString(mDigest.digest());
    } catch (NoSuchAlgorithmException e) {
        cacheKey = String.valueOf(key.hashCode());
    }
    return cacheKey;
}

private String bytesToHexString(byte[] bytes) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < bytes.length; i++) {
        String hex = Integer.toHexString(0xFF & bytes[i]);
        if (hex.length() == 1) {
            sb.append('0');
        }
        sb.append(hex);
    }
    return sb.toString();
}

Then, write the picture data to be saved. The overall code for writing the picture data to the local cache is as follows:

public void saveToDisk(String imageUrl, InputStream in) {
    // add to disk cache
    synchronized (mDiskCacheLock) {
        try {
            // Wait until the cache has finished initializing
            while (mDiskCacheStarting) {
                try {
                } catch (InterruptedException e) {
            String key = hashKeyForDisk(imageUrl);
            MLog.d(TAG, "saveToDisk get key:" + key);
            DiskLruCache.Editor editor = mDiskLruCache.edit(key);
            if (in != null && editor != null) {
                // When valueCount is 1, index 0 can be passed
                OutputStream outputStream = editor.newOutputStream(0);
                MLog.d(TAG, "saveToDisk");
                if (FileUtil.copyStream(in, outputStream)) {
                    MLog.d(TAG, "saveToDisk commit start");
                    MLog.d(TAG, "saveToDisk commit over");
                } else {
                    editor.abort();
                    MLog.e(TAG, "saveToDisk commit abort");
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Next, read from the image cache, which is implemented through DiskLruCache's get method. The code is as follows:

public Bitmap getBitmapFromDiskCache(String imageUrl, BitmapConfig bitmapconfig) {
    synchronized (mDiskCacheLock) {
        // Wait while the disk cache is being initialized on a background thread
        while (mDiskCacheStarting) {
            try {
                mDiskCacheLock.wait();
            } catch (InterruptedException e) {}
        }
        if (mDiskLruCache != null) {
            try {
                String key = hashKeyForDisk(imageUrl);
                MLog.d(TAG, "getBitmapFromDiskCache get key:" + key);
                DiskLruCache.Snapshot snapShot = mDiskLruCache.get(key);
                if (null == snapShot) {
                    return null;
                }
                InputStream is = snapShot.getInputStream(0);
                if (is != null) {
                    final BitmapFactory.Options options = bitmapconfig.getBitmapOptions();
                    return BitmapUtil.decodeSampledBitmapFromStream(is, options);
                } else {
                    MLog.e(TAG, "is not exist");
                }
            } catch (IOException e) {
                MLog.e(TAG, "getBitmapFromDiskCache ERROR");
            }
        }
    }
    return null;
}

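BitmapUtil.decodeSampledBitmapFromStream is not shown, but in the standard Android pattern it downsamples via BitmapFactory.Options.inSampleSize to avoid decoding a full-size bitmap into memory. The core arithmetic of that pattern (an assumption about this project's helper, shown here without Android dependencies) is:

```java
// Sketch of the standard inSampleSize calculation used when decoding
// sampled bitmaps; pure arithmetic, no Android classes required.
public class SampleSizeDemo {

    // Returns the largest power-of-two factor such that the scaled
    // dimensions are still >= the requested dimensions.
    static int calculateInSampleSize(int width, int height,
                                     int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (height > reqHeight || width > reqWidth) {
            final int halfHeight = height / 2;
            final int halfWidth = width / 2;
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 2048x1536 photo shown in a 512x384 view decodes at 1/4 size
        System.out.println(calculateInSampleSize(2048, 1536, 512, 384)); // 4
        // Images already small enough are not downsampled
        System.out.println(calculateInSampleSize(400, 300, 512, 384));   // 1
    }
}
```

A sample size of 4 decodes an image at a quarter of its width and height, i.e. roughly 1/16 of the full-size memory footprint, which is why this step matters for memory optimization.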
Finally, note that reading and decoding Bitmap data and saving image data are time-consuming I/O operations, so these methods are invoked in ImageLoader's doInBackground method. The code is as follows:

protected Bitmap doInBackground(Void... params) {
    Bitmap bitmap = null;
    synchronized (mPauseWorkLock) {
        // Pause loading while work is suspended (e.g. during a list fling)
        while (mPauseWork && !isCancelled()) {
            try {
                mPauseWorkLock.wait();
            } catch (InterruptedException e) {}
        }
    }
    // First try the disk cache
    if (bitmap == null && !isCancelled()
            && imageViewReference.get() != null && !mExitTasksEarly) {
        bitmap = getmImageCache().getBitmapFromDisk(mUrl, mBitmapConfig);
    }
    // Fall back to downloading from the network
    if (bitmap == null && !isCancelled()
            && imageViewReference.get() != null && !mExitTasksEarly) {
        bitmap = downLoadBitmap(mUrl, mBitmapConfig);
    }
    // Write the result back to the cache for next time
    if (bitmap != null) {
        getmImageCache().addToCache(mUrl, bitmap);
    }
    return bitmap;
}

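The control flow in doInBackground is a classic multi-level lookup: try the cache, fall back to the network, then write the result back. Stripped of the Android types, the ordering can be sketched with a plain Map and a Supplier standing in for the disk cache and the download (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative sketch of the cache-then-network lookup order in
// doInBackground, using a Map as a stand-in for the disk cache.
public class LookupChainDemo {

    static final Map<String, String> diskCache = new HashMap<>();

    static String load(String url, Supplier<String> network) {
        // 1. Try the disk cache first
        String bitmap = diskCache.get(url);
        // 2. Fall back to "downloading"
        if (bitmap == null) {
            bitmap = network.get();
            // 3. Write the downloaded result back to the cache
            if (bitmap != null) {
                diskCache.put(url, bitmap);
            }
        }
        return bitmap;
    }

    public static void main(String[] args) {
        // First load hits the "network"; second load is served from cache
        String first = load("https://example.com/a.png", () -> "downloaded-bytes");
        String second = load("https://example.com/a.png", () -> {
            throw new IllegalStateException("network should not be hit twice");
        });
        System.out.println(first);
        System.out.println(second);
    }
}
```

The write-back in step 3 is what makes the second request cheap: each level only pays the cost of the slower level once per key.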
3. Third-party image loading libraries

At present, Picasso, Glide and Fresco are the most widely used. Glide is similar to Picasso, but compared with Picasso it has richer functionality and a more complex internal implementation. Readers interested in Glide can read Source code analysis of Android mainstream third-party libraries (III. in-depth understanding of Glide source code). The biggest highlight of Fresco is its memory management, especially on low-end devices and devices below Android 5.0: Fresco places images in a special memory region and automatically releases that memory when an image is no longer displayed, which greatly alleviates the problem of images occupying large amounts of memory. Fresco's advantages can be summarized as follows:

  • 1. Memory management.
  • 2. Progressive rendering: first show a rough outline of the image, then a gradually clearer image as the download proceeds.
  • 3. Support for more image formats, such as GIF and WebP.
  • 4. Rich image loading strategies: the Image Pipeline can specify different remote paths for the same image, for example displaying the locally cached copy first and switching to the HD image once its download completes.


However, Fresco makes the installation package noticeably larger, so Glide is recommended when the requirements for image loading and display are not demanding.

6, Summary

In memory optimization, we generally use analysis tools such as MAT to inspect memory and monitoring tools such as LeakCanary to detect memory leaks, so as to find problems, analyze their causes, and then fix them or optimize the current implementation logic; after optimizing, we check again until the predetermined performance targets are reached. In the next article, we will explore Android memory optimization in more depth. Please look forward to it~

Thank you for reading this article. I hope you can share it with your friends or technology group, which is of great significance to me.

This article is reposted; in case of infringement, please contact for deletion.

Relevant video recommendations:

Android performance optimization learning [1]: APK slimming optimization - bilibili

Android performance optimization learning [2]: APP startup speed optimization - bilibili

Android performance optimization [3]: how to solve the OOM problem - bilibili

Android performance optimization learning [4]: UI jank optimization - bilibili

Tags: Android Programmer

Posted on Mon, 22 Nov 2021 22:54:17 -0500 by kir10s