Android | How to develop a smile capture feature on Android in 30 minutes


Recently, Richard Yu introduced Huawei HMS Core 4.0 at a launch event. For more information about the event, see:
  What does Huawei mean by releasing HMS Core 4.0 to the world?


What can the machine learning service do? Which problems can it solve for developers during application development?

Today, I'd like to walk you through a small, practical face detection example, so you can see the powerful capabilities the machine learning service provides and how convenient it is for developers.

Face detection capabilities of the machine learning service

First, let me show you the face detection capability of Huawei machine learning service:

It also supports detecting multiple faces at the same time. Powerful, isn't it?


Hands-on: developing a multi-face smile capture feature

Today, I will use the multi-face recognition and expression detection capabilities of the machine learning service to write a small smile-capture demo as a hands-on exercise. The demo source code is available for download on GitHub.

1. Development preparation


1.1 Add the Huawei Maven repository in the project-level build.gradle

Add the following Maven repository address:

buildscript {
    repositories {
        maven { url '' }
    }
}

allprojects {
    repositories {
        maven { url '' }
    }
}

1.2 Add SDK dependencies in the app-level build.gradle

Introduce the basic SDK and the face recognition SDK:

  // Introduce the basic SDK
  implementation 'com.huawei.hms:ml-computer-vision:'
  // Introduce the face detection capability package
  implementation 'com.huawei.hms:ml-computer-vision-face-recognition-model:'

1.3 Add the automatic model download configuration in the AndroidManifest.xml file


<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="face" />

1.4 Apply for camera and storage permissions in the AndroidManifest.xml file

<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

2. Code development

2.1 Create a face analyzer and take a photo when a smile is detected

Steps for taking a photo after detection:

  1. Configure the analyzer parameters.
  2. Pass the parameter configuration to the analyzer.
  3. In analyzer.setTransactor, override transactResult to process the detection results. Face detection returns a smile confidence (which can be roughly understood as the probability that the face is smiling); the app simply takes a photo once the confidence exceeds a set threshold.
private MLFaceAnalyzer analyzer;

private void createFaceAnalyzer() {
    MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting.Factory()
            .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
            .create();
    this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
    this.analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLFace>() {
        @Override
        public void destroy() {
            // no-op
        }

        @Override
        public void transactResult(MLAnalyzer.Result<MLFace> result) {
            SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
            int flag = 0;
            for (int i = 0; i < faceSparseArray.size(); i++) {
                MLFaceEmotion emotion = faceSparseArray.valueAt(i).getEmotions();
                if (emotion.getSmilingProbability() > smilingPossibility) {
                    flag++;
                }
            }
            // Take a photo once enough of the detected faces are smiling
            if (flag > faceSparseArray.size() * smilingRate && safeToTakePicture) {
                safeToTakePicture = false;
                takePhoto();
            }
        }
    });
}
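The trigger condition above can be isolated into a small plain-Java helper for clarity. The class and method below are hypothetical (not part of the HMS SDK) and only illustrate the rule: the photo fires when the share of smiling faces exceeds a configured rate.

```java
// Hypothetical helper isolating the photo-trigger rule (not an HMS SDK class)
class SmileTrigger {
    // smileProbs: per-face smile probabilities, as returned by getSmilingProbability()
    static boolean shouldTakePhoto(float[] smileProbs, float minConfidence, float smilingRate) {
        int smiling = 0;
        for (float p : smileProbs) {
            if (p > minConfidence) {
                smiling++;
            }
        }
        // Fire only when the smiling faces outnumber the configured share of all faces
        return smileProbs.length > 0 && smiling > smileProbs.length * smilingRate;
    }
}
```

With minConfidence 0.8 and smilingRate 0.6, a photo of three people is taken only when more than 1.8 (i.e., at least 2) of them are smiling.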

The photo capture and storage part:

private void takePhoto() {
    this.mLensEngine.photograph(null, new LensEngine.PhotographListener() {
        @Override
        public void takenPhotograph(byte[] bytes) {
            Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
            saveBitmapToDisk(bitmap); // save the captured frame (helper omitted here)
        }
    });
}
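When saving, each snapshot needs a unique file name so later captures do not overwrite earlier ones. A minimal sketch, assuming timestamp-based naming (the helper class and the "smile_" prefix are invented for illustration):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

class PhotoNamer {
    // Hypothetical helper: build a unique, timestamped file name per capture
    static String photoFileName(Date takenAt) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd_HHmmss", Locale.US);
        return "smile_" + fmt.format(takenAt) + ".jpg";
    }
}
```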

2.2 Create a lens engine to capture the camera's dynamic video stream and send it to the analyzer

private void createLensEngine() {
    Context context = this.getApplicationContext();
    // Create the LensEngine that feeds camera frames to the analyzer
    this.mLensEngine = new LensEngine.Creator(context, this.analyzer)
            .setLensType(this.lensType)
            .applyDisplayDimension(640, 480)
            .applyFps(25.0f)
            .enableAutomaticFocus(true)
            .create();
}

2.3 Apply for permissions dynamically, then hook in the analyzer and lens engine creation code

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    if (savedInstanceState != null) {
        this.lensType = savedInstanceState.getInt("lensType");
    }
    this.mPreview = this.findViewById(R.id.preview); // preview view ID assumed
    this.createFaceAnalyzer();
    // Check the camera permission before creating the lens engine
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
    } else {
        this.requestCameraPermission();
    }
}

private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE};
    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
        ActivityCompat.requestPermissions(this, permissions, LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE);
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    if (requestCode != LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        return;
    }
    if (grantResults.length != 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
    }
}
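Since two permissions are requested at once, a stricter check would inspect every entry of grantResults, not just the first. A hedged sketch of such a check (the helper class and method name are my own; 0 is the value of the Android constant PackageManager.PERMISSION_GRANTED):

```java
// Hypothetical helper: true only if every requested permission was granted
class PermissionCheck {
    private static final int PERMISSION_GRANTED = 0; // PackageManager.PERMISSION_GRANTED

    static boolean allGranted(int[] grantResults) {
        if (grantResults.length == 0) {
            return false; // an empty array means the request was cancelled
        }
        for (int result : grantResults) {
            if (result != PERMISSION_GRANTED) {
                return false;
            }
        }
        return true;
    }
}
```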


Isn't the development process simple? You can develop a new feature in just 30 minutes! Now let's see the multi-face smile capture in action.

Single smiling face snapshot:

Snapshot of multiple smiling faces:

Based on the face detection capability, what other features could you build? Let your imagination run! Here are some ideas:

  1. Add fun decorative effects by locating facial features such as ears, eyes, nose, mouth, and eyebrows.
  2. Recognize the face contour, then apply exaggerated deformation and stretching to generate amusing portrait pictures, or build beautification features for the contour area.
  3. Use age detection to address the pain point of children getting addicted to electronic devices, and build parental control features.
  4. Build eye-protection reminders by detecting how long the user's eyes stay fixed on the screen.
  5. Implement liveness detection by asking the user to match random instructions (shake the head, blink, open the mouth, etc.).
  6. Combine detected attributes such as age and gender to make personalized recommendations for users.
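Idea 4 above, for instance, can be sketched as a simple per-frame accumulator, assuming the analyzer can report whether the eyes are on the screen for each frame (the class name and thresholds below are invented for illustration):

```java
// Hypothetical sketch for an eye-protection reminder (idea 4)
class ScreenTimeMonitor {
    private final long limitMillis; // continuous gaze time that triggers a reminder
    private long gazeMillis = 0;

    ScreenTimeMonitor(long limitMillis) {
        this.limitMillis = limitMillis;
    }

    // Call once per analyzed frame; returns true when a rest reminder is due
    boolean onFrame(long frameMillis, boolean eyesOnScreen) {
        if (eyesOnScreen) {
            gazeMillis += frameMillis;
        } else {
            gazeMillis = 0; // looking away resets the continuous-gaze timer
        }
        return gazeMillis >= limitMillis;
    }
}
```

In a real app, the limit would be on the order of tens of minutes and the reminder would be a notification rather than a boolean.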

For a more detailed development guide, refer to the official Huawei Developer Alliance website:
Huawei Developer Alliance machine learning service development guide

Content source:
Original author: AI_talking

Tags: Android Maven SDK Gradle

Posted on Tue, 26 May 2020 05:48:32 -0400 by qrt123