Have you learned the necessary framework knowledge for interface automation testing?

In this chapter, we will learn the @Test annotation and its various parameters in detail.

We have already created several cases, each with an @Test annotation on its test method to mark it as a test method; a method carrying @Test is the simplest possible TestNG test. Now let's write a basic test:


@Test
public void test() {
    System.out.println("test");
}

After running, you can see the output we want: test

Let's open the @Test annotation's source to see how the annotation is defined:


@Target({ElementType.METHOD, ElementType.TYPE, ElementType.CONSTRUCTOR})
public @interface Test {

  String[] groups() default {};

  boolean enabled() default true;

  /** @deprecated */
  String[] parameters() default {};

  String[] dependsOnGroups() default {};

  String[] dependsOnMethods() default {};

  long timeOut() default 0L;

  long invocationTimeOut() default 0L;

  int invocationCount() default 1;

  int threadPoolSize() default 0;

  int successPercentage() default 100;

  String dataProvider() default "";

  Class<?> dataProviderClass() default Object.class;

  boolean alwaysRun() default false;

  String description() default "";

  Class[] expectedExceptions() default {};

  String expectedExceptionsMessageRegExp() default ".*";

  String suiteName() default "";

  String testName() default "";

  /** @deprecated */
  boolean sequential() default false;

  boolean singleThreaded() default false;

  Class retryAnalyzer() default Class.class;

  boolean skipFailedInvocations() default false;

  boolean ignoreMissingDependencies() default false;

  int priority() default 0;
}

You can see that the @Target({ElementType.METHOD, ElementType.TYPE, ElementType.CONSTRUCTOR}) meta-annotation declares where @Test may be applied: on ordinary methods, on constructors, and on classes. The annotation also defines a large number of parameters. Let's go through what each of them does:


groups assigns the test method to one or more named groups. Methods that cover the same feature, or that form one continuous workflow, can share a group name, and the runner can then execute tests strictly by group.
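As a small sketch of grouping (the class, method, and group names here are invented for illustration, not from TestNG itself):

```java
import java.util.ArrayList;
import java.util.List;

import org.testng.annotations.Test;

// Hypothetical grouping: "smoke" is a quick sanity check,
// while the "checkout" group covers one continuous business flow.
public class GroupExample {

    // records which cases ran, just so the effect is observable
    static final List<String> executed = new ArrayList<>();

    @Test(groups = {"smoke"})
    public void serverIsReachable() {
        executed.add("smoke");
    }

    @Test(groups = {"checkout"})
    public void addItemToCart() {
        executed.add("checkout:add");
    }

    @Test(groups = {"checkout"})
    public void payForCart() {
        executed.add("checkout:pay");
    }
}
```

With a testng.xml that includes only the checkout group (`<groups><run><include name="checkout"/></run></groups>`), only the two checkout methods run.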


enabled indicates whether the current method is enabled. The default is true, i.e. the test method will run.


parameters (deprecated, as the source above shows) declares parameters to pass into the annotated test method.


dependsOnGroups lists the groups this method depends on. If certain setup methods must execute before the current method runs, we can put them in a group and name that group here; at run time the depended-on group runs first, and only then the current test method.


dependsOnMethods lists the individual methods this method depends on, i.e. methods that must finish executing or hand over their results before the current method can run. TestNG orders the run according to these declared dependencies.
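A minimal sketch of a method dependency (the login/query scenario and all names are invented for illustration):

```java
import org.testng.Assert;
import org.testng.annotations.Test;

// Hypothetical dependency chain: queryOrders only makes sense
// after login has produced a token.
public class DependsExample {

    String token;

    @Test
    public void login() {
        token = "fake-token";   // pretend we logged in and got a token
    }

    @Test(dependsOnMethods = {"login"})
    public void queryOrders() {
        // TestNG guarantees login() ran (and passed) before this method
        Assert.assertEquals(token, "fake-token");
    }
}
```

Note that if login() fails, queryOrders() is skipped rather than failed, which keeps the report honest about the real cause.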


timeOut sets a timeout for the test method, in milliseconds: you can use it to verify that the current method completes execution within the specified time.


invocationTimeOut, like the previous parameter, sets a timeout in milliseconds, but it bounds the cumulative time of all invocations of the method, and it only takes effect when invocationCount is also set.


invocationCount specifies how many times the current test method is invoked in one run. The default is 1, meaning the method runs only once.


threadPoolSize specifies how many threads are used to run the current test method, which is handy for simulating load and concurrency. The default is 0, meaning the method runs on the main thread without a separate pool; it takes effect together with invocationCount.


successPercentage sets the required success rate for the invocations of this method. Some tests occasionally fail because of network or performance flakiness; this parameter lets the method pass as long as the given percentage of its invocations succeed.


dataProvider names the data-provider method that supplies arguments to this test.


dataProviderClass specifies the class in which that data-provider method is defined (by default, TestNG looks in the test class itself).
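A short data-driven sketch (the provider name and the squaring example are made up for illustration):

```java
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

// Hypothetical data-driven case: the provider feeds several
// input/expected pairs into one test method, one invocation per row.
public class ProviderExample {

    @DataProvider(name = "squares")
    public Object[][] squares() {
        return new Object[][] {
            {2, 4},
            {3, 9},
            {4, 16},
        };
    }

    @Test(dataProvider = "squares")
    public void testSquare(int in, int expected) {
        Assert.assertEquals(in * in, expected);
    }
}
```

If the provider lived in another class, the test would add dataProviderClass = ProviderExample.class to find it.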


alwaysRun determines whether the current method runs under any circumstances. If set to true, the method will still run even if the methods or groups it depends on failed. The default is false.


description attaches a human-readable description to the current test method.


expectedExceptions lists the exception types the current test method is expected to throw. If one of the listed exceptions is thrown during the run, the test still counts as a pass.


expectedExceptionsMessageRegExp supplies a regular expression that the thrown exception's message must match, in addition to the type check above.


suiteName names the suite the current test method belongs to when it runs.


testName names the test case under which the current test method runs.


sequential (deprecated, as the source above shows), when true, forces all test methods of the current class to execute in their defined order.


singleThreaded, when set to true, guarantees that all test methods of this class run in the same thread, even if the suite is currently running with parallel="methods". This attribute can only be used at the class level; at the method level it is ignored. Note: this attribute replaces the now-deprecated sequential.


retryAnalyzer enables a retry mechanism: it names a class (implementing IRetryAnalyzer) that decides, each time the current test method fails, whether TestNG should run it again, typically up to some maximum number of attempts.
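A minimal retry-analyzer sketch (the class name and the retry limit of 3 are invented for illustration):

```java
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

// Hypothetical analyzer: re-run a failed test up to 3 more times.
public class SimpleRetry implements IRetryAnalyzer {

    private int attempts = 0;
    private static final int MAX_RETRIES = 3;

    @Override
    public boolean retry(ITestResult result) {
        // returning true asks TestNG to run the failed test again
        return attempts++ < MAX_RETRIES;
    }
}
```

It would then be wired up on a test as @Test(retryAnalyzer = SimpleRetry.class).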


skipFailedInvocations refers to whether to skip the failed method and continue running when the method fails. The default value is false


ignoreMissingDependencies refers to whether to continue execution when the specified dependency cannot be found. The default value is false


The priority parameter sets the scheduling priority of the current test method: methods with lower values run first. The default is 0.

Next, let's look at the most common annotation parameters through some examples.

How to report exceptions in a test

In early development, the traditional way to report errors was return codes: returning -1, say, meant the call failed. This approach has problems: the caller needs a pile of if branches just to decide whether the call succeeded, and there is not always a suitable error code for every failure, so codes and actual errors frequently get mismatched. Exceptions were introduced to fix these shortcomings by carrying specific error information with the failure itself. So how do you handle expected exceptions gracefully in a Java test? Suppose the requirement is booking a flight: if the plane is full, booking should throw an exception. In JUnit 3 the test looks like this:


public void shouldThrowIfPlaneIsFull() {
    Plane plane = createPlane();
    plane.bookAllSeats();
    try {
        plane.bookPlane(createValidItinerary(), null);
        fail("The reservation should have failed");
    } catch (ReservationException ex) {
        // success, do nothing: the test will pass
    }
}

A try/catch like this is the most common approach. But what if throwing the exception is itself the success condition of the test case? Do we have to write try/catch every time? TestNG offers a more elegant way:

@Test(expectedExceptions = ReservationException.class)
public void shouldThrowIfPlaneIsFull() {
    Plane plane = createPlane();
    plane.bookAllSeats();
    plane.bookPlane(createValidItinerary(), null);
}
Setting the expectedExceptions parameter on the @Test annotation declares the exception we expect to be thrown; if that exception occurs during the run, the test case counts as a pass, which is much more elegant than before. But what if every failure surfaces as a RuntimeException, and you need the message to decide whether it is the scenario you meant to trigger? That is what the expectedExceptionsMessageRegExp parameter is for: set it to the exact expected message or a matching regular expression. After the exception type matches, TestNG also matches the exception's message against the expression, and only an exception that matches both counts as a success.
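Under the hood this is ordinary regular-expression matching against the exception's message. The behavior can be sketched in plain Java (the message text and helper name here are invented for illustration):

```java
import java.util.regex.Pattern;

public class MessageMatchDemo {

    // Mimics what expectedExceptionsMessageRegExp does: once the
    // exception type has matched, the message must match the regex too.
    static boolean messageMatches(RuntimeException ex, String regex) {
        String msg = ex.getMessage();
        return msg != null && Pattern.matches(regex, msg);
    }

    public static void main(String[] args) {
        RuntimeException ex = new RuntimeException("seat 42 already booked");
        System.out.println(messageMatches(ex, "seat \\d+ already booked")); // true
        System.out.println(messageMatches(ex, "flight is full"));           // false
    }
}
```

Note that Pattern.matches requires the whole message to match, which is why an exact literal message also works as the parameter value.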

Multithreading and concurrent running test

Early software was largely single-threaded and leaned on hardware performance to improve the user experience; once multi-core machines became mainstream, almost every program went multi-threaded. A Java program that behaves perfectly under a single-threaded test can reveal unknown problems once many users hit it concurrently. So how can we simulate a multi-threaded scenario in a test case? Don't worry: TestNG builds concurrency support into @Test, which makes it easy to check whether code is thread-safe in such scenarios. Let's start with a classic singleton:

public class Singleton {

    private static Singleton instance = null;

    public static Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}
This is the classic lazy-initialization singleton. It looks as if it guarantees that the Singleton class is instantiated only once, but is that really the case? Let's simulate concurrent access with a multi-threaded test:

// every instance returned under concurrency is collected here;
// a correct singleton should only ever yield one
private final Set<Singleton> seen = ConcurrentHashMap.newKeySet();

@Test(invocationCount = 100, threadPoolSize = 10)
public void testSingleton() {
    Singleton p = Singleton.getInstance();
    seen.add(p);
    Assert.assertEquals(seen.size(), 1);
}

You can see that we set invocationCount = 100 on the @Test annotation, meaning the method is invoked 100 times, and threadPoolSize = 10, meaning 10 threads run it concurrently (however many threads there are, the total number of invocations stays 100). Let's see the results:


Concurrent testing

Total tests run: 100, Failures: 5, Skips: 0


Five of our assertions failed, which shows that this singleton really is unsafe under multi-threading.
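The test above only demonstrates the failure. As a hedged aside not in the original text, one common way to make this singleton thread-safe is double-checked locking with a volatile field:

```java
public class SafeSingleton {

    // volatile prevents a thread from observing a half-constructed instance
    private static volatile SafeSingleton instance;

    private SafeSingleton() { }

    public static SafeSingleton getInstance() {
        if (instance == null) {                    // first check, no lock
            synchronized (SafeSingleton.class) {
                if (instance == null) {            // second check, under the lock
                    instance = new SafeSingleton();
                }
            }
        }
        return instance;
    }
}
```

Re-running the same @Test(invocationCount = 100, threadPoolSize = 10) case against a version like this should produce no failures.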

Stability test and reliability test

In testing we often meet requirements like this: an interface's response time is unstable, and we need to verify its stability or reliability. That is where the timeOut and successPercentage parameters come in. For example, suppose an interface must be called 100 times, each call must return within 10 s, and at least 98% of the calls must succeed; otherwise the interface is considered unqualified. The test code can be written as follows:

// each of the 100 calls must return within 10 s,
// and at least 98% of them must succeed
@Test(timeOut = 10000, invocationCount = 100, successPercentage = 98)
public void waitForAnswer() throws InterruptedException {
    while (!success()) {    // success() stands in for the real readiness check
        Thread.sleep(100);
    }
}


Posted on Thu, 02 Sep 2021 20:52:02 -0400 by dustinnoe