Configuring memory size and timeout for Serverless functions

When using a Serverless architecture, how should we set a function's memory size and timeout?

This question also came up in my previous article, "Resource evaluation and cost exploration of Serverless". Here I would like to share my assessment method for your reference.

First, when the function goes online, pick a slightly larger memory size than you expect to need. For example, execute the function once and note the reported duration and memory usage.

Based on that first run, I set the function's memory to 128 MB or 256 MB and the timeout to 3 s.

Then let the function run for a period of time; in this case, the interface is triggered about 4,000 times a day.

Next, pull the function's logs and feed them into a script for statistical analysis:

    import json, time, numpy, base64
    import matplotlib.pyplot as plt
    from matplotlib import font_manager
    from tencentcloud.common import credential
    from tencentcloud.common.profile.client_profile import ClientProfile
    from tencentcloud.common.profile.http_profile import HttpProfile
    from tencentcloud.common.exception.tencent_cloud_sdk_exception import TencentCloudSDKException
    from tencentcloud.scf.v20180416 import scf_client, models
    
    secretId = ""
    secretKey = ""
    region = "ap-guangzhou"
    namespace = "default"
    functionName = "course"
    
    font = font_manager.FontProperties(fname="./fdbsjw.ttf")
    
    try:
        cred = credential.Credential(secretId, secretKey)
        httpProfile = HttpProfile()
        httpProfile.endpoint = "scf.tencentcloudapi.com"
    
        clientProfile = ClientProfile()
        clientProfile.httpProfile = httpProfile
        client = scf_client.ScfClient(cred, region, clientProfile)
    
        req = models.GetFunctionLogsRequest()
    
        strTimeNow = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(int(time.time())))
        strTimeLast = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(int(time.time()) - 86400))
        params = {
            "FunctionName": functionName,
            "Limit": 500,
            "StartTime": strTimeLast,
            "EndTime": strTimeNow,
            "Namespace": namespace
        }
        req.from_json_string(json.dumps(params))
    
        resp = client.GetFunctionLogs(req)
    
        durationList = []
        memUsageList = []
    
        for eveItem in json.loads(resp.to_json_string())["Data"]:
            durationList.append(eveItem['Duration'])
            memUsageList.append(eveItem['MemUsage'] / 1024 / 1024)
    
        durationDict = {
            "min": min(durationList),  # Minimum running time
            "max": max(durationList),  # Maximum running time
            "mean": numpy.mean(durationList)  # Average operation time
        }
        memUsageDict = {
            "min": min(memUsageList),  # Minimum memory usage
            "max": max(memUsageList),  # Maximum memory usage
            "mean": numpy.mean(memUsageList)  # Average memory usage
        }
    
        plt.figure(figsize=(10, 15))
        plt.subplot(4, 1, 1)
        plt.title('Operation times and operation time chart', fontproperties=font)
        x_data = range(0, len(durationList))
        plt.plot(x_data, durationList)
        plt.subplot(4, 1, 2)
        plt.title('Distribution chart of operation time', fontproperties=font)
        plt.hist(durationList, bins=20)
        plt.subplot(4, 1, 3)
        plt.title('Run times and memory usage graph', fontproperties=font)
        x_data = range(0, len(memUsageList))
        plt.plot(x_data, memUsageList)
        plt.subplot(4, 1, 4)
        plt.title('Direct distribution of memory usage', fontproperties=font)
        plt.hist(memUsageList, bins=20)

        # Save the figure first so it can be read back and base64-encoded
        plt.savefig("/tmp/result.png", dpi=200)
        with open("/tmp/result.png", "rb") as f:
            base64_data = base64.b64encode(f.read())

        print("-" * 10 + "Operation time related data" + "-" * 10)
        print("Minimum running time:\t", durationDict["min"], "ms")
        print("Maximum running time:\t", durationDict["max"], "ms")
        print("Average running time:\t", durationDict["mean"], "ms")

        print("\n")

        print("-" * 10 + "Memory usage related data" + "-" * 10)
        print("Minimum memory usage:\t", memUsageDict["min"], "MB")
        print("Maximum memory usage:\t", memUsageDict["max"], "MB")
        print("Average memory usage:\t", memUsageDict["mean"], "MB")

        print("\n")

        plt.show()
    except TencentCloudSDKException as err:
        print(err)

Running the script yields:

    ----------Operation time related data----------
    Minimum running time: 6.02 ms
    Maximum running time: 211.22 ms
    Average running time: 54.79572 ms
​    
    ----------Memory usage related data----------
    Minimum memory usage: 17.94921875 MB
    Maximum memory usage: 37.21875190734863 MB
    Average memory usage: 24.83201559448242 MB

From this result we can clearly see the time consumption and memory usage of each of the nearly 500 invocations.

As you can see, the running time is almost always under 1 s, so setting the timeout to 1 s is reasonable; likewise, memory usage stays well below 64 MB, so the memory size can be set to 64 MB.
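This rule of thumb can be sketched as a small helper that rounds the observed peaks up to safe limits. Note this is my own sketch, not part of the original script: the 1.2x safety margin, the tier list, and the ceiling-to-whole-seconds rule are assumptions; check your provider's actual memory options.

```python
import math

def suggest_limits(durations_ms, mem_usage_mb, margin=1.2):
    """Suggest a timeout (s) and memory size (MB) from observed samples.

    Applies a safety margin to the worst observed values, then rounds the
    timeout up to whole seconds and the memory up to an assumed tier list.
    """
    # Assumed memory tiers (MB); verify against your provider's options.
    tiers = [64, 128, 256, 512, 1024]
    timeout_s = math.ceil(max(durations_ms) * margin / 1000)
    peak_mem = max(mem_usage_mb) * margin
    memory_mb = next((t for t in tiers if t >= peak_mem), tiers[-1])
    return timeout_s, memory_mb

# Using the first function's observed peaks from above:
print(suggest_limits([6.02, 54.8, 211.22], [17.9, 24.8, 37.2]))  # (1, 64)
```

With the first function's samples this reproduces the choice made above: a 1 s timeout and 64 MB of memory.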

As a second example, here are the statistics for another function:

    ----------Operation time related data----------
    Minimum running time: 63445.13 ms
    Maximum running time: 442629.12 ms
    Average running time: 91032.31301886792 ms

    ----------Memory usage related data----------
    Minimum memory usage: 26.875 MB
    Maximum memory usage: 58.69140625 MB
    Average memory usage: 36.270415755937684 MB

While the previous function was stable and smooth, making its resource usage easy to estimate, this function clearly fluctuates.

Most runs finish in under 150 s, some take up to 200 s, and the highest peak is nearly 450 s. At this point we can decide, based on business requirements, whether requests running as long as 450 s can be cut off. Here I would recommend setting this function's timeout to 200 s.
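To make that decision less of a guess, we can quantify how many requests a candidate timeout would cut off. A minimal sketch (the sample durations below are illustrative, not real log data):

```python
def truncated_fraction(durations_ms, timeout_s):
    """Fraction of observed invocations that would exceed the timeout."""
    cutoff_ms = timeout_s * 1000
    over = sum(1 for d in durations_ms if d > cutoff_ms)
    return over / len(durations_ms)

# Illustrative samples mimicking the second function's spread (ms)
samples = [63445, 80000, 95000, 120000, 148000, 190000, 442629]
for candidate in (150, 200, 450):
    pct = truncated_fraction(samples, candidate) * 100
    print(f"timeout {candidate}s cuts off {pct:.1f}% of requests")
```

In a real run you would pass the `durationList` collected by the log script, and pick the smallest timeout whose cutoff rate the business can tolerate.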

As for memory, most runs stay within 40 MB, some fall in the 45-55 MB range, and the maximum is about 60 MB, so the memory size can be set to 64 MB.

Cloud function executions naturally fluctuate, so it is normal for memory usage and duration to vary within a range. We can tune these settings according to business requirements to minimize resource usage and save costs.
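To see why the memory setting matters for cost, note that providers commonly bill compute in GB-seconds: memory size x duration x invocation count. A rough sketch using the first function's numbers; the exact billing formula (e.g. duration rounding) and unit price vary by provider, so treat this as an estimate only:

```python
def gb_seconds_per_day(memory_mb, mean_duration_ms, invocations_per_day):
    """Daily compute usage in GB-seconds for a fixed memory setting."""
    return (memory_mb / 1024) * (mean_duration_ms / 1000) * invocations_per_day

# ~4000 triggers/day at a ~55 ms mean duration
for mem in (64, 256):
    usage = gb_seconds_per_day(mem, 55, 4000)
    print(f"{mem} MB -> {usage:.2f} GB-s/day")
```

Dropping the memory setting from 256 MB to 64 MB cuts the billed GB-seconds to a quarter, which is exactly the saving the two-step tuning below aims at.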

My approach is basically divided into two steps:

  1. Run the function a few times, estimate its baseline resource usage, and set a generous initial value;
  2. After the function has run for a while, collect the log samples, do some basic analysis and visualization, and settle on a tighter, more stable value.

Portal: welcome to the Serverless Chinese network, where you can explore best practices and experience more of Serverless application development!


Posted on Wed, 11 Mar 2020 05:49:07 -0400 by Maharg105