Using the Huawei HMS MLKit SDK to develop a smile-capture app on Android in 30 minutes

Intro
The face detection capability of the machine learning service
Core tip: this capability is free and covers all Android phones!
Hands-on development of a multi-face smile capture feature
1 Development preparation
1.1 Add the Huawei Maven repository in the project-level build.gradle
1.2 Add the SDK dependencies in the app-level build.gradle
1.3 Add automatic model download in the AndroidManifest.xml file
1.4 Apply for camera and storage permissions in the AndroidManifest.xml file
2 Code development
2.1 Create a face analyzer and take a photo when a smile is detected
2.2 Create a lens engine to capture the camera's video stream and send it to the analyzer
2.3 Apply for permissions dynamically and hook up the analyzer and lens engine creation code
Postscript
Coming next

Intro

A few days ago, Richard Yu introduced Huawei HMS Core 4.0 at a press conference. For details, see:
What does it mean that Huawei has released HMS Core 4.0 to the world?

One of the key services announced is the Machine Learning Kit (MLKit).
What can the machine learning service do? Which problems can it solve for developers during application development?
Today I'd like to walk you through a small practical example, face detection, so you can get a feel for the powerful capabilities the machine learning service provides and the convenience it brings to developers.

The face detection capability of the machine learning service
Let me first show you the face detection capability of Huawei's machine learning service.

As the animation shows, face detection supports recognizing face orientation; detecting facial expressions (happy, disgusted, surprised, sad, and angry); detecting face attributes (gender, age, and what the person is wearing); detecting whether the eyes are open or closed; locating feature points of the face, nose, eyes, lips, and eyebrows; and detecting multiple faces at the same time. Powerful, isn't it?

Core tip: this capability is free and covers all Android phones!

Hands-on development of a multi-face smile capture feature

Today we will use the multi-face detection and expression detection capabilities of the machine learning service to write a small smile-capture demo as a hands-on exercise.
Download the demo source code from GitHub here: (the project directory is: smile camera)

1 Development preparation

The preparation steps are almost the same for every Huawei HMS kit: add the Maven repository and introduce the SDK dependency.

1.1 Add the Huawei Maven repository in the project-level build.gradle

Add the following Maven repository address incrementally:

buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

1.2 Add the SDK dependencies in the app-level build.gradle

Introduce the base SDK and the face detection capability package:

dependencies {
    // Base SDK
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    // Face detection capability package
    implementation 'com.huawei.hms:ml-computer-vision-face-recognition-model:1.0.2.300'
}

1.3 Add automatic model download in the AndroidManifest.xml file

This is mainly used for model updating: when the algorithm is optimized later, the new model can be automatically downloaded to the phone.

<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="face" />
    </application>
</manifest>

1.4 Apply for camera and storage permissions in the AndroidManifest.xml file

<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

2 Code development
2.1 Create a face analyzer and take a photo when a smile is detected
Taking a photo after detection involves three steps:
1) Configure the analyzer parameters.
2) Pass the parameter configuration to the analyzer.
3) In analyzer.setTransactor, override transactResult to process the face detection results. Detection returns a smile confidence for each face (which can loosely be understood as the probability that the face is smiling); a photo is taken once the confidence exceeds a configured threshold.

private MLFaceAnalyzer analyzer;
private void createFaceAnalyzer() {
    // Configure the analyzer: enable feature detection (needed for emotions),
    // skip key point detection, ignore faces smaller than 10% of the image,
    // and allow face tracing across frames.
    MLFaceAnalyzerSetting setting =
            new MLFaceAnalyzerSetting.Factory()
                    .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
                    .setKeyPointType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_KEYPOINTS)
                    .setMinFaceProportion(0.1f)
                    .setTracingAllowed(true)
                    .create();                 
    this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
    this.analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLFace>() {
        @Override
        public void destroy() {
        }

        @Override
        public void transactResult(MLAnalyzer.Result<MLFace> result) {
            SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
            // Count the faces whose smile confidence exceeds the threshold.
            int flag = 0;
            for (int i = 0; i < faceSparseArray.size(); i++) {
                MLFaceEmotion emotion = faceSparseArray.valueAt(i).getEmotions();
                if (emotion.getSmilingProbability() > smilingPossibility) {
                    flag++;
                }
            }
            // Take a photo once enough of the detected faces are smiling;
            // safeToTakePicture guards against repeated captures.
            if (flag > faceSparseArray.size() * smilingRate && safeToTakePicture) {
                safeToTakePicture = false;
                mHandler.sendEmptyMessage(TAKE_PHOTO);
            }
        }
    });
}
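
Note that the analyzer callback runs off the main thread, so the code above posts a TAKE_PHOTO message instead of taking the photo directly. Below is a minimal sketch of such a handler; the message code values and the stopPreview() helper are assumptions for illustration, not the demo's exact code:

// A minimal sketch, assuming TAKE_PHOTO and STOP_PREVIEW are message codes
// defined in the activity; stopPreview() is a hypothetical helper that
// stops the camera preview.
private static final int TAKE_PHOTO = 1;
private static final int STOP_PREVIEW = 2;

private final Handler mHandler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case TAKE_PHOTO:
                takePhoto();   // defined below
                break;
            case STOP_PREVIEW:
                stopPreview(); // hypothetical helper
                break;
            default:
                break;
        }
    }
};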

Taking and saving the photo:

private void takePhoto() {
    this.mLensEngine.photograph(null,
            new LensEngine.PhotographListener() {
                @Override
                public void takenPhotograph(byte[] bytes) {
                    // Stop the preview, decode the JPEG bytes, and save the photo.
                    mHandler.sendEmptyMessage(STOP_PREVIEW);
                    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
                    saveBitmapToDisk(bitmap);
                }
            });
}
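
saveBitmapToDisk is not shown in the article. A minimal sketch, assuming the photo is written as a JPEG under the app-specific external files directory (the directory and file name here are illustrative):

// A minimal sketch of saveBitmapToDisk; the directory and file name are
// illustrative assumptions, not the demo's actual values.
private void saveBitmapToDisk(Bitmap bitmap) {
    File dir = new File(getExternalFilesDir(Environment.DIRECTORY_PICTURES), "SmileCamera");
    if (!dir.exists() && !dir.mkdirs()) {
        return;
    }
    File photo = new File(dir, "smile_" + System.currentTimeMillis() + ".jpg");
    try (FileOutputStream out = new FileOutputStream(photo)) {
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
    } catch (IOException e) {
        Log.e("SmileCamera", "Failed to save photo", e);
    }
}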

2.2 Create a lens engine to capture the camera's video stream and send it to the analyzer

private void createLensEngine() {
    Context context = this.getApplicationContext();
    // Create a LensEngine bound to the analyzer: 640x480 frames at 25 fps,
    // with automatic focus enabled; lensType selects the front or back camera.
    this.mLensEngine = new LensEngine.Creator(context, this.analyzer).setLensType(this.lensType)
            .applyDisplayDimension(640, 480)
            .applyFps(25.0f)
            .enableAutomaticFocus(true)
            .create();
}
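
Creating the engine does not start the camera by itself; the engine still has to be run on the preview surface and released when finished. A minimal lifecycle sketch following the LensEngine run/release API, assuming mPreview exposes a SurfaceHolder (the demo's preview view may wrap this differently):

// A minimal lifecycle sketch; mPreview.getHolder() is an assumption about
// the preview view used by the demo.
private void startLensEngine() {
    if (this.mLensEngine != null) {
        try {
            // Bind the camera stream to the preview surface; frames are
            // forwarded to the analyzer passed at creation time.
            this.mLensEngine.run(this.mPreview.getHolder());
        } catch (IOException e) {
            this.mLensEngine.release();
            this.mLensEngine = null;
        }
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (this.mLensEngine != null) {
        this.mLensEngine.release();
    }
}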

2.3 Apply for permissions dynamically and hook up the analyzer and lens engine creation code

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    this.setContentView(R.layout.activity_live_face_analyse);
    if (savedInstanceState != null) {
        this.lensType = savedInstanceState.getInt("lensType");
    }
    this.mPreview = this.findViewById(R.id.preview);
    this.createFaceAnalyzer();
    this.findViewById(R.id.facingSwitch).setOnClickListener(this);
    // Checking Camera Permissions
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
    } else {
        this.requestCameraPermission();
    }
}
    
private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE};

    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
        ActivityCompat.requestPermissions(this, permissions, LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE);
        return;
    }
    // Otherwise, show the user a rationale before requesting the permission
    // (omitted here for brevity).
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    if (requestCode != LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        return;
    }
    if (grantResults.length != 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
        return;
    }
}
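
onCreate also registers a click listener on the facingSwitch view to switch between the front and back cameras. A hedged sketch of what that handler might look like, assuming the activity implements View.OnClickListener and recreates the engine after toggling lensType:

// A hedged sketch of the facingSwitch click handler; the demo's actual
// implementation may restart the preview differently.
@Override
public void onClick(View v) {
    this.lensType = (this.lensType == LensEngine.BACK_LENS)
            ? LensEngine.FRONT_LENS
            : LensEngine.BACK_LENS;
    if (this.mLensEngine != null) {
        this.mLensEngine.release();
    }
    this.createLensEngine();
}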

Postscript

So, isn't the development process simple? You can build a new feature in 30 minutes! Let's see the multi-face smile capture in action.

Snapshot of a single smiling face:

Snapshot of multiple smiling faces:

What else could be built on top of the face detection capability? Open your mind! Here are a few hints:
1. Add fun decorative effects by locating facial features such as the ears, eyes, nose, mouth, and eyebrows.
2. Recognize the face contour and apply exaggerated deformations and stretching to generate amusing portrait pictures, or build beautification features for the contour area.
3. Use age detection to build parental-control features that guard against children becoming addicted to electronic devices.
4. Build an eye-protection reminder by detecting how long the eyes have been staring at the screen.
5. Implement liveness detection by matching user actions against random instructions (shake your head, blink, open your mouth, etc.).
6. Combine detected attributes such as age and gender to make recommendations to users.

For a more detailed development guide, please refer to the official Huawei Developer Alliance website:

Huawei Developer Alliance Machine Learning Service Development Guide

Coming next

A series of hands-on articles based on Huawei's machine learning service will follow; stay tuned~
