Alibaba Cloud AI Training Camp, Day 02: Character Recognition (ID Card Recognition)

Project introduction
Document addresses used in the project
Starting the project (using the configuration from the Demo)
About using the configuration from the document

Project introduction

I recently participated in Alibaba Cloud's AI training camp and built the required web application for ID card recognition. This post records the process.

Since I have used Baidu AI's face recognition SDK before and I am already familiar with Alibaba Cloud, this post focuses on how to read the official documentation and on understanding and applying the provided video and project Demo.

Document addresses used in the project

Alibaba Cloud DAMO Academy Vision Open Platform: https://vision.aliyun.com/

Alibaba Cloud Vision Open Platform documentation: https://help.aliyun.com/product/142958.html?spm=a2c4g.11186623.6.540.263e3c74d59JVh

Aliyun Java SDK Core Maven repository: https://mvnrepository.com/artifact/com.aliyun/aliyun-java-sdk-core

New-version character recognition SDK (ocr20191230) Maven repository: https://mvnrepository.com/artifact/com.aliyun/ocr20191230

GitHub repository of Alibaba Cloud's two demos: https://github.com/aliyun/alibabacloud-viapi-demo

The code in this article is based on these two demos.

Explanation

After many attempts and careful reading, I found that the imports and dependencies in the official Demo differ slightly from those in the official documentation. The Config setup in the Demo also differs somewhat from the one in the documentation.

I tested both sets of dependencies and configurations. In my view the configuration in the official documentation has a problem that leads to an error, so I chose the configuration from the Demo (details later).

Only recognition of locally uploaded images is shown here; Alibaba Cloud OSS (Object Storage Service) is not used. The front-end logic is also omitted; only the key code for ID card recognition is given.

Starting the project (using the configuration from the Demo)

Activating the service

Following the instructions in the documentation, the service can be activated without any problems, as shown in the figure below:

Importing the SDK dependencies with Maven

The documentation says that two versions of the SDK are provided. The old version requires Alibaba Cloud OSS to store the images. Because I use Qiniu Cloud rather than Alibaba Cloud for image storage, I chose the new version of the SDK, which supports uploading local images.

Here I use the dependency from the official Demo rather than the one from the official documentation.

The dependency I imported:

<dependency>
    <groupId>com.aliyun</groupId>
    <artifactId>ocr</artifactId>
    <version>1.0.3</version>
</dependency>

Creating the Config and Client objects

According to the documentation, we first create a Config object, store our accessKeyId, accessKeySecret and a few other settings in it, and then pass that Config object as the parameter when creating the Client object.

Note: both the Config class and the Client class live under the com.aliyun.ocr package; be careful to pick the right classes when importing.
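For reference, here is a sketch of the imports this code ends up needing. The exact subpackages are assumptions on my part (the request/response classes appear to sit in a models subpackage, and RuntimeOptions comes from the tea-util library), so let your IDE's auto-import confirm them.

// Import sketch; subpackage locations are assumptions, verify with your IDE
import com.aliyun.ocr.Client;
import com.aliyun.ocr.models.Config;
import com.aliyun.ocr.models.RecognizeIdentityCardAdvanceRequest;
import com.aliyun.ocr.models.RecognizeIdentityCardResponse;
import com.aliyun.teautil.models.RuntimeOptions;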

A pitfall I ran into: the configuration in the documentation has a bug, as follows:

After fixing it, the Config setup and initialization look like this, as in the official Demo (the property keys in the @Value annotations below stand in for your own configuration keys):

private Client client;
private RuntimeOptions runtime;

@Value("${aliyun.accessKeyId}")      // property key assumed; use the key from your own configuration
private String accessKeyId;

@Value("${aliyun.accessKeySecret}")  // property key assumed; use the key from your own configuration
private String accessKeySecret;

@PostConstruct
private void init() throws Exception {
    Config config = new Config();
    config.type = "access_key";
    config.regionId = "cn-shanghai";
    config.accessKeyId = accessKeyId;
    config.accessKeySecret = accessKeySecret;
    config.endpoint = "ocr.cn-shanghai.aliyuncs.com";
    client = new Client(config);
    // Note that a RuntimeOptions object is created here; it will be used later
    runtime = new RuntimeOptions();
}

Let's keep reading the documentation.

Calling the methods of the Client class

(1) Call process overview

The documentation gives a bank card recognition example. In short: create an xxxRequest object, open a byte stream of the local image and assign it to the request's imageURLObject property, and set any other required options on the request object.

After that, we get an xxxResponse object by calling the xxxAdvance() method of the Client object; then we only need to extract the JSON string from this xxxResponse object.

The xxx stands for the specific scenario: different scenarios use different classes and methods, but the names follow the same pattern. The parameters differ slightly per scenario; see the scenario's detailed description in the documentation.
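To make the pattern concrete, here is a rough sketch using the documentation's bank card scenario. The class and method names simply follow the xxx naming pattern described above and reuse the client and runtime fields created in init(), so treat them as assumptions and confirm them against the SDK in your IDE.

// A sketch of the generic call pattern, shown for the bank card scenario.
// Class/method names follow the xxx pattern above; verify them in your IDE.
public String recognizeBankCard(String filePath) throws Exception {
    RecognizeBankCardAdvanceRequest req = new RecognizeBankCardAdvanceRequest();
    // Assign a stream of the local image to the request's imageURLObject property
    req.imageURLObject = Files.newInputStream(Paths.get(filePath));
    // Call the matching ...Advance() method on the Client, passing the RuntimeOptions
    RecognizeBankCardResponse rep = client.recognizeBankCardAdvance(req, runtime);
    // Serialize the recognition result to a JSON string (fastjson, as in the ID card example below)
    return JSON.toJSONString(rep.data);
}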

(2) Specific use

Taking ID card recognition as an example, let's see how it is described in the documentation:

In fact, you can also use the online debugging page to quickly try out ID card recognition:

Let's look at the relevant classes in IDEA

In fact, the classes for these scenarios can be seen in the jar imported by the Maven dependency:

OK, having read the documentation, I swap the class and method from the documentation's example for those of the ID card recognition scenario. The result looks like this:

public String MyRecognizeIdCard(String filePath, String side) throws Exception {
    // Note the names of the class and method for this scenario
    RecognizeIdentityCardAdvanceRequest req = new RecognizeIdentityCardAdvanceRequest();
    // Assign a stream of the local image to the request
    req.imageURLObject = Files.newInputStream(Paths.get(filePath));
    req.side = side;
    RecognizeIdentityCardResponse rep = client.recognizeIdentityCardAdvance(req, runtime);
    // The value "face" is not made up; it is the value given in the documentation
    if ("face".equals(side)) {
        // Front side: use Alibaba's fastjson to convert frontResult to a JSON string and return it
        return JSON.toJSONString(rep.data.frontResult);
    } else {
        // Back side recognition
        return JSON.toJSONString(rep.data.backResult);
    }
}

Then we test it in the test class:

@Autowired
private OcrService ocrService;

@Test
void contextLoads() {
    try {
        String face = ocrService.MyRecognizeIdCard("D:\\Temp\\images\\_2020053011373037SS.png", "face");
        System.out.println("face = " + face);
        String backface = ocrService.MyRecognizeIdCard("D:\\Temp\\images\\IMG20200530112702.jpg", "back");
        System.out.println("back = " + backface);
    } catch (TeaException e) {
        System.out.println(e.getData());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
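The test autowires an OcrService, so for context, here is a minimal sketch of how the fragments above could be assembled into that service. The @Service annotation, the property keys, and the overall layout are my assumptions; the method bodies are exactly the init() and MyRecognizeIdCard() code shown earlier.

// Minimal skeleton of the service used in the test above (layout is an assumption)
@Service
public class OcrService {

    private Client client;
    private RuntimeOptions runtime;

    @Value("${aliyun.accessKeyId}")      // assumed property key
    private String accessKeyId;

    @Value("${aliyun.accessKeySecret}")  // assumed property key
    private String accessKeySecret;

    @PostConstruct
    private void init() throws Exception {
        // build the Config, create the Client and the RuntimeOptions, as shown in the initialization section
    }

    public String MyRecognizeIdCard(String filePath, String side) throws Exception {
        // build the RecognizeIdentityCardAdvanceRequest and call recognizeIdentityCardAdvance, as shown above
        return null;
    }
}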

Because my picture is partly mosaicked, only some of the fields are recognized, as shown below:

About using the configuration in the document

I followed the documentation step by step, with the same code and the same dependencies as the documentation, yet it still reports an error. I don't know whether the problem is on my side or in the documentation; in any case, my code just won't run. Below is the code that produces the error; I hope someone can spot the problem and solve it.

Initialization part. The key point is that config.endpointType = "internal"; is not commented out here.

The Maven dependencies I introduced:

<!-- https://mvnrepository.com/artifact/com.aliyun/aliyun-java-sdk-core -->
<dependency>
    <groupId>com.aliyun</groupId>
    <artifactId>aliyun-java-sdk-core</artifactId>
    <version>4.5.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.aliyun/ocr20191230 -->
<dependency>
    <groupId>com.aliyun</groupId>
    <artifactId>ocr20191230</artifactId>
    <version>0.0.3</version>
</dependency>

The error reported:
