Android | Develop a smile capture app on Android in 30 minutes

Preface

Recently, Richard Yu introduced Huawei HMS Core 4.0 at a press conference. For details about the announcement, see:
  What does Huawei's release of HMS Core 4.0 to the world mean?


What can the machine learning service do? What problems can it solve for developers during application development?

Today, I'd like to walk you through a small practical example of face detection, so that you can experience the powerful functions the machine learning service provides and the convenience it brings to developers.

Face detection capabilities of the machine learning service

First, let's look at the face detection capability of Huawei's machine learning service:

It also supports detecting multiple faces at the same time. Isn't that powerful?


Developing a multi-face smile capture function

Today, I will use the multi-face detection and expression recognition capabilities of the machine learning service to write a small smile-capture demo as a hands-on exercise. The demo source code is available for download on GitHub.

1. Development preparation


1.1 Add the Huawei Maven repository in the project-level build.gradle

Incrementally add the following Maven repository address:

buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

1.2 Add the SDK dependencies in the app-level build.gradle

Import the base SDK and the face detection capability package:

dependencies {
    // Import the base SDK
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    // Import the face detection capability package
    implementation 'com.huawei.hms:ml-computer-vision-face-recognition-model:1.0.2.300'
}

1.3 Incrementally add automatic model download in the AndroidManifest.xml file

Adding the following meta-data lets the machine learning model be downloaded to the device automatically after the app is installed:

<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="face" />
    </application>
</manifest>

1.4 Apply for camera and storage permissions in the AndroidManifest.xml file

<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

2. Code development

2.1 Create a face analyzer and take a photo when a smile is detected

Taking a photo after a smile is detected involves the following steps:

  1. Configure the analyzer parameters.
  2. Pass the analyzer parameter configuration to the analyzer.
  3. In analyzer.setTransactor, override transactResult to process the face detection results. Face detection returns a smile confidence value (which can be understood simply as the probability of a smile); a photo is taken as soon as the confidence exceeds a set threshold.
private MLFaceAnalyzer analyzer;

private void createFaceAnalyzer() {
    // Configure the analyzer: enable feature detection (emotions),
    // skip key point detection, and track faces across frames.
    MLFaceAnalyzerSetting setting =
            new MLFaceAnalyzerSetting.Factory()
                    .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
                    .setKeyPointType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_KEYPOINTS)
                    .setMinFaceProportion(0.1f)
                    .setTracingAllowed(true)
                    .create();
    this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
    this.analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLFace>() {
        @Override
        public void destroy() {
        }

        @Override
        public void transactResult(MLAnalyzer.Result<MLFace> result) {
            SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
            // Count the faces whose smile confidence exceeds the threshold.
            int flag = 0;
            for (int i = 0; i < faceSparseArray.size(); i++) {
                MLFaceEmotion emotion = faceSparseArray.valueAt(i).getEmotions();
                if (emotion.getSmilingProbability() > smilingPossibility) {
                    flag++;
                }
            }
            // Take a photo once enough of the detected faces are smiling.
            if (flag > faceSparseArray.size() * smilingRate && safeToTakePicture) {
                safeToTakePicture = false;
                mHandler.sendEmptyMessage(TAKE_PHOTO);
            }
        }
    });
}
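The snippet above references several members (smilingPossibility, smilingRate, safeToTakePicture, mHandler, TAKE_PHOTO) that are defined elsewhere in the demo source. A minimal sketch of plausible declarations follows; the names come from the code above, but the threshold values and handler wiring are illustrative assumptions, not the demo's exact code:

// Smile confidence threshold per face (illustrative value, not from the demo).
private static final float smilingPossibility = 0.8f;
// Fraction of detected faces that must be smiling before capturing (assumed).
private static final float smilingRate = 0.8f;
// Prevents repeated captures while a photo is already being taken.
private volatile boolean safeToTakePicture = true;

private static final int TAKE_PHOTO = 1;
private static final int STOP_PREVIEW = 2;

// transactResult runs on an analyzer worker thread, so hand work that
// touches the camera or UI back to the main thread via a Handler.
private final Handler mHandler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case TAKE_PHOTO:
                takePhoto();
                break;
            case STOP_PREVIEW:
                // Pause or release the preview here, e.g. via mLensEngine.
                break;
            default:
                break;
        }
    }
};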

The photo capture and storage part:

private void takePhoto() {
    this.mLensEngine.photograph(null,
            new LensEngine.PhotographListener() {
                @Override
                public void takenPhotograph(byte[] bytes) {
                    mHandler.sendEmptyMessage(STOP_PREVIEW);
                    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
                    saveBitmapToDisk(bitmap);
                }
            });
}
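saveBitmapToDisk is part of the demo source and is not shown here. A minimal sketch of what it might look like, assuming the photo is written as a JPEG under the public Pictures directory (the folder and file naming are my own, not the demo's):

private void saveBitmapToDisk(Bitmap bitmap) {
    // Assumed target: a "SmileCapture" folder under the public Pictures directory.
    File dir = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES), "SmileCapture");
    if (!dir.exists() && !dir.mkdirs()) {
        return;
    }
    File file = new File(dir, "smile_" + System.currentTimeMillis() + ".jpg");
    try (FileOutputStream out = new FileOutputStream(file)) {
        // Compress the captured frame to JPEG; this write is why
        // WRITE_EXTERNAL_STORAGE was declared and requested earlier.
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
        out.flush();
    } catch (IOException e) {
        Log.e("LiveFaceAnalyse", "Failed to save photo", e);
    }
}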

2.2 Create a lens engine to capture the camera's video stream and send it to the analyzer

private void createLensEngine() {
    Context context = this.getApplicationContext();
    // Create a LensEngine bound to the analyzer; camera preview frames
    // are fed to the analyzer automatically.
    this.mLensEngine = new LensEngine.Creator(context, this.analyzer)
            .setLensType(this.lensType)
            .applyDisplayDimension(640, 480)
            .applyFps(25.0f)
            .enableAutomaticFocus(true)
            .create();
}
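The engine still has to be started once the preview surface is ready. A sketch of that step, assuming a preview view (the mPreview bound in onCreate below) that can start the engine, in the style of Huawei's sample code; treat the exact binding method as an assumption:

private void startLensEngine() {
    if (this.mLensEngine != null) {
        try {
            // Bind the engine to the preview surface and start the camera.
            this.mPreview.start(this.mLensEngine);
        } catch (IOException e) {
            Log.e("LiveFaceAnalyse", "Failed to start lens engine.", e);
            this.mLensEngine.release();
            this.mLensEngine = null;
        }
    }
}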

2.3 Apply for permissions dynamically, and hook up the analyzer and lens engine creation code

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    this.setContentView(R.layout.activity_live_face_analyse);
    if (savedInstanceState != null) {
        this.lensType = savedInstanceState.getInt("lensType");
    }
    this.mPreview = this.findViewById(R.id.preview);
    this.createFaceAnalyzer();
    this.findViewById(R.id.facingSwitch).setOnClickListener(this);
    // Checking camera permissions
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
    } else {
        this.requestCameraPermission();
    }
}

private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE};

    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
        ActivityCompat.requestPermissions(this, permissions, LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE);
        return;
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    if (requestCode != LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        return;
    }
    if (grantResults.length != 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
        return;
    }
}
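One thing the excerpt above does not show is releasing the camera and analyzer when the activity is destroyed. A minimal sketch of that cleanup, under the assumption that release and stop are called in onDestroy, as in Huawei's sample code:

@Override
protected void onDestroy() {
    super.onDestroy();
    // Release the camera held by the lens engine.
    if (this.mLensEngine != null) {
        this.mLensEngine.release();
    }
    // Stop the face analyzer and free its resources.
    if (this.analyzer != null) {
        try {
            this.analyzer.stop();
        } catch (IOException e) {
            Log.e("LiveFaceAnalyse", "Failed to stop analyzer.", e);
        }
    }
}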

Conclusion

How about that? Isn't the development process remarkably simple? You can develop a new feature in 30 minutes! Let's see the multi-face smile capture in action.

Single-face smile snapshot:


Multi-person smile snapshot:

Based on the face detection capability, what other functions could be built? Use your imagination! Here are a few ideas:

  1. Identify the positions of facial features such as ears, eyes, nose, mouth, and eyebrows, and overlay fun decorative effects.
  2. Recognize the contours of the face and apply exaggerated deformation and stretching to generate amusing portrait images, or build beautification features for the contour areas.
  3. Use age detection to tackle the pain point of children getting addicted to electronic devices, and develop parental control features.
  4. Detect how long the user's eyes stare at the screen, and develop eye protection reminder features.
  5. Issue random instructions (shake head, blink, open mouth, etc.) and match them against the user's actions to implement liveness detection.
  6. Combine detection results such as the user's age and gender to make personalized recommendations.

For a more detailed development guide, refer to the official Huawei Developer Alliance website:
Huawei Developer Alliance machine learning service development guide

Content source: https://developer.huawei.com/consumer/cn/forum/topicview?tid=0201198419687680377&fid=18
Original author: AI_talking
