Avatar animation: WeChat mini program + Flask backend calling AnimeGANv2

In an earlier article, "Surpassing the previous work: animation style transfer with AnimeGANv2", I showed how AnimeGANv2 can animate a portrait and generate a one-of-a-kind avatar. Running it from the command line is a little cumbersome, though, so I wanted a mini program that generates an animated avatar directly on the phone. The result is shown below.

1, Core function design

The mini program should animate either the user's WeChat avatar or a photo chosen from the album. Breaking that requirement down, the core functions are:

  1. Authorize login to obtain the avatar and nickname
  2. Select a picture from the album
  3. Tap the animation button to call the Flask backend and generate the image
  4. Save the image

2, Implementing the WeChat mini program front end

First, create a blank WeChat mini program project. For detailed steps, see the earlier article "Python + WeChat mini program development (I): concepts and environment setup".

1. Login interface

Design the page in pages/index/index.wxml:

<view wx:if="{{canIUse}}">
    <view class='header'>
        <view class="userinfo-avatar">
            <open-data type="userAvatarUrl"></open-data>
        </view>
    </view>
    <view class="content">
        <view>Apply for the following permissions</view>
        <text>Get your public information (nickname, avatar, etc.)</text>
    </view>
    <button wx:if="{{canIUse}}" class="loginBtn" type="primary" lang="zh_CN" bindtap="bindGetUserProfile">
        Authorized login
    </button>
</view>

Add the user authorization handling in pages/index/index.js:

  bindGetUserProfile(e) {     // Triggered when the user taps the "Authorized login" button
    var that = this
    wx.getUserProfile({
      desc: 'Used to improve member information', // Purpose of collecting the user's profile; shown in the authorization pop-up, so fill it in carefully
      success: (res) => {
        // console.log(res.userInfo)
        var avantarurl = res.userInfo.avatarUrl;
        wx.navigateTo({
          url: '../../pages/change/change?url=' + avantarurl,
        })
      }
    })
  },

The avatar URL is passed on to the avatar page.
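One detail worth noting: the avatar URL is embedded in a query string, and it contains `://` (and possibly `?`) itself, so in practice it should be percent-encoded before navigation and decoded in the receiving page's `onLoad` (in the mini program you would use `encodeURIComponent`/`decodeURIComponent`). A quick Python sketch of that round trip, with an illustrative example URL:

```python
from urllib.parse import quote, unquote

# A WeChat avatar URL as it might arrive from getUserProfile (example value)
avatar_url = "https://thirdwx.qlogo.cn/mmopen/abc123/132"

# Encode before embedding it as a query parameter...
page_url = "../../pages/change/change?url=" + quote(avatar_url, safe="")

# ...and decode on the receiving page
received = unquote(page_url.split("url=", 1)[1])
print(received == avatar_url)  # True
```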

The effect is as follows:


2. Avatar page

On this page the user selects a photo and animates the avatar.

Design the page in pages/avantar/avantar.wxml:

<view class='preview'>
    <view class="Imgtag">
        <image class="tag" src='{{prurl}}' mode='aspectFit'></image>
    </view>
    <view class="bottomAll">
        <button bindtap='selectImg' class="saveBtn">Select Picture</button>
        <button bindtap='generateAvantar' class="saveBtn">Animation</button>
        <button bindtap='save' class="saveBtn">Save Avatar</button>
    </view>
</view>

Define the functions in pages/avantar/avantar.js.

The onLoad function receives the URL passed from index:

  onLoad: function (options) {
    // console.log(options.url)
    var path = this.headimgHD(options.url)   // Swap the avatar URL to its HD version
    this.setData({
      prurl: path
      // image1: path,
      // baseURL: path
    })
  },
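The `headimgHD` helper is not shown in the original article. A common implementation relies on the fact that WeChat avatar URLs end in a size suffix (`/132` for a 132-pixel thumbnail, `/0` for the original image), so swapping the suffix requests the full-resolution avatar. A Python sketch of that assumed logic, for illustration:

```python
def headimg_hd(url: str) -> str:
    """Rewrite a WeChat avatar URL to request the original-size image.

    WeChat avatar URLs end with a size suffix such as /132; replacing it
    with /0 asks the CDN for the full-resolution avatar.
    """
    if url.endswith("/132"):
        return url[: -len("132")] + "0"
    return url

# Hypothetical example URL
print(headimg_hd("https://thirdwx.qlogo.cn/mmopen/abc123/132"))
# https://thirdwx.qlogo.cn/mmopen/abc123/0
```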

The chooseImage function lets the user pick an image from the album or the camera:

  chooseImage() {
    var that = this;
    wx.showActionSheet({
      itemList: ['Select from album', 'Take a photo'],
      itemColor: "#FAD143",
      success: function (res) {
        if (!res.cancel) {
          wx.showLoading({
            title: 'Reading...',
          })
          if (res.tapIndex == 0) {
            that.chooseWxImage1('album', 1)
          } else if (res.tapIndex == 1) {
            that.chooseWxImage1('camera', 1)
          }
        }
      }
    })
  },

The savePic function saves the generated image to the album:

  savePic(e) {
    let that = this
    var baseImg = that.data.baseImg
    // Save the picture: write the base64 data to a temp file, then into the album
    var save = wx.getFileSystemManager();
    var number = Math.random();
    save.writeFile({
      filePath: wx.env.USER_DATA_PATH + '/pic' + number + '.png',
      data: baseImg,
      encoding: 'base64',
      success: res => {
        wx.saveImageToPhotosAlbum({
          filePath: wx.env.USER_DATA_PATH + '/pic' + number + '.png',
          success: function (res) {
            wx.showToast({
              title: 'Saved successfully',
            })
          },
          fail: function (err) {
            console.log(err)
          }
        })
      },
      fail: err => {
        console.log(err)
      }
    })
  },
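savePic writes a base64-encoded image string to a file before saving it to the album. The same decode-and-write step looks like this in Python, using stand-in bytes for the image payload:

```python
import base64

# Stand-in for the base64 payload savePic receives (PNG signature plus dummy data)
raw = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8
base_img = base64.b64encode(raw).decode()

# The decode step that FileSystemManager.writeFile performs with encoding: 'base64'
decoded = base64.b64decode(base_img)
print(decoded[:4])  # b'\x89PNG'
```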

The generateAvantar function uploads the current image to the backend /postdata route and swaps in the animated result:

  generateAvantar() {
    var that = this
    wx.uploadFile({
      url: '',                    // URL of the Flask /postdata endpoint
      filePath: that.data.prurl,
      name: 'content',
      success: function (res) {
        var resurl = JSON.parse(res.data)['resurl']
        that.setData({
          prurl: resurl
        })
        if (res) {
          wx.showToast({
            title: 'convert network',
            duration: 3000
          })
        }
      },
      fail: (res) => {
        console.log(res)
      }
    })
  },

3, Implementing the Flask backend

1. Configure the RESTful route

@app.route('/postdata', methods=['POST'])
def postdata():
    f = request.files['content']
    user_input = request.form.get("name")
    basepath = os.path.dirname(__file__)  # Directory of the current file
    src_imgname = str(uuid.uuid1()) + ".jpg"
    upload_path = os.path.join(basepath, 'static/srcImg/')
    if os.path.exists(upload_path) == False:
        os.makedirs(upload_path)
    f.save(upload_path + src_imgname)
    # img = cv2.imread(upload_path + src_imgname, 1)

    save_path = os.path.join(basepath, 'static/resImg/')
    if os.path.exists(save_path) == False:
        os.makedirs(save_path)
    generateAvantar(src_imgname, upload_path, save_path)  # run the AnimeGANv2 conversion

    resSets = {}
    resSets["value"] = 10
    resSets["resurl"] = "" + '/static/resImg/' + src_imgname  # prefix with the server's base URL
    return json.dumps(resSets, ensure_ascii=False)

The route receives the uploaded image from the front end, runs the conversion, and returns the result URL as JSON.
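The shape of the JSON the front end consumes can be sketched on its own. Here `base_url` is a placeholder for wherever the backend is hosted:

```python
import json
import uuid

def build_response(base_url: str) -> str:
    # Unique filename, as in the route above
    src_imgname = str(uuid.uuid1()) + ".jpg"
    resSets = {
        "value": 10,
        "resurl": base_url + '/static/resImg/' + src_imgname,
    }
    return json.dumps(resSets, ensure_ascii=False)

# This is what JSON.parse(res.data)['resurl'] sees on the mini program side
payload = json.loads(build_response("https://example.com"))
print(payload["resurl"].endswith(".jpg"))  # True
```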

2. Call AnimeGANv2 to realize the animation

import os
import cv2
import numpy as np
import torch
from model import Generator  # AnimeGANv2 (PyTorch) generator definition

net = Generator()
net.load_state_dict(torch.load(args.checkpoint, map_location="cpu"))
# print(f"model loaded: {args.checkpoint}")

# os.makedirs(args.output_dir, exist_ok=True)
def load_image(image_path, x32=False):
    img = cv2.imread(image_path).astype(np.float32)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    h, w = img.shape[:2]

    if x32: # resize image to multiple of 32s
        def to_32s(x):
            return 256 if x < 256 else x - x%32
        img = cv2.resize(img, (to_32s(w), to_32s(h)))

    img = torch.from_numpy(img)
    img = img/127.5 - 1.0
    return img

def generateAvantar(src_imgname, upload_path, save_path):
    image = load_image((upload_path + src_imgname), args.x32)
    with torch.no_grad():
        input = image.permute(2, 0, 1).unsqueeze(0).to(args.device)
        out = net(input, args.upsample_align).squeeze(0).permute(1, 2, 0).cpu().numpy()
        out = (out + 1) * 127.5
        out = np.clip(out, 0, 255).astype(np.uint8)
    cv2.imwrite(os.path.join(save_path, src_imgname), cv2.cvtColor(out, cv2.COLOR_BGR2RGB))

This code loads the uploaded image, runs it through the AnimeGANv2 generator, and writes the animated result to save_path.
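Two numeric details in the code above are easy to gloss over: `to_32s` snaps dimensions to multiples of 32 (the generator's strided downsampling requires it), and the `/127.5 - 1` and `(out + 1) * 127.5` pair maps pixels between [0, 255] and [-1, 1] and back. A standalone sketch:

```python
import numpy as np

def to_32s(x):
    # Same rule as in load_image: floor at 256, otherwise round down to a multiple of 32
    return 256 if x < 256 else x - x % 32

print(to_32s(200), to_32s(300), to_32s(512))  # 256 288 512

# Normalization round trip used before and after the generator
pixels = np.array([0.0, 127.5, 255.0])
normed = pixels / 127.5 - 1.0          # -> [-1, 0, 1]
restored = (normed + 1.0) * 127.5      # -> [0, 127.5, 255]
print(np.allclose(restored, pixels))   # True
```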

Finally, the finished effect:



This mini program is not hard to build: set up a basic deep-learning environment, write a little Flask, learn a few of the mini program's basic APIs, and you can develop it yourself. Give it a try when you have time. I have deployed the backend, so you can use it directly and see the effect. If you have any questions, leave a message in the comments or contact me through the link below.

Tags: Back-end Mini Program Deep Learning Flask

Posted on Mon, 06 Dec 2021 19:52:26 -0500 by jkohns