Automated testing: notes on Python libraries learned while debugging scripts

This post records notes on some Python libraries and their usage, collected while debugging automated test scripts.
Note: the debugging was done on Python 2.x, so the functions below are described as they behave in 2.x.
The first library covered is re.

re Library

Some notes on the re library:
import re
, string, flags=0) scans the whole of the second argument string and returns a match object, or None if nothing matches.
re.match(pattern, string, flags=0) always matches from the beginning of the string and returns a match object, or None if the match fails.
The returned match object's group() method retrieves the matched text.
re.findall(pattern, string, flags=0) returns a list of all matching substrings.
Some notes on regular expression syntax:
re.findall(r"ss", str, 0)      the r prefix marks a raw string, which is how regular expressions are usually written
re.findall(r"^ss", "ssddd", 0)   ^ anchors the match at the start, so this matches strings beginning with "ss"
re.findall(r"html$", "")       $ anchors the match at the end, so this matches strings ending with "html"
re.findall(r"[t,w]h", "")      [...] matches any one of the characters inside the brackets
re.findall(r"\d", "")          \d matches a single digit from 0 to 9; the matches are returned as a list
re.findall(r"\w", "")          \w matches a letter a-z or A-Z, a digit 0-9, or an underscore
a = "123abc456"
print"([0-9]*)([a-z]*)([0-9]*)", a).group(0)   #123abc456, the whole match
print"([0-9]*)([a-z]*)([0-9]*)", a).group(1)   #123
print"([0-9]*)([a-z]*)([0-9]*)", a).group(2)   #abc
print"([0-9]*)([a-z]*)([0-9]*)", a).group(3)   #456


	About the json library
	json.dumps() encodes a Python data type (dict, list, etc.) into a JSON-formatted string (roughly: it converts a dictionary into a string).
	json.loads() decodes a JSON-formatted string back into a Python object such as a dictionary.
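A minimal round trip shows both functions (the sample dictionary is arbitrary):

```python
import json

# json.dumps(): Python object (dict, list, ...) -> JSON-formatted string
payload = {"userName": "alice", "age": 30}
text = json.dumps(payload)
assert isinstance(text, str)

# json.loads(): JSON-formatted string -> Python object
data = json.loads(text)
assert data == payload
assert data["userName"] == "alice"
```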


urllib2: mainly for sending HTTP/HTTPS requests and modifying headers.
urllib: mainly for encoding and transforming request data.
cookielib: used together with urllib2 to store session cookies.

Code related to Digest/Basic authentication
# -*- coding: utf-8 -*-

import httplib, urllib, urllib2
import base64

#Define the url
url = ""
#Instantiate digest authentication
auth = urllib2.HTTPDigestAuthHandler()
#If basic authentication is required, use auth = urllib2.HTTPBasicAuthHandler() instead
#The arguments are the realm (domain) of the request, the requested URL, the user name and the password
auth.add_password("Requested realm", url, "User name", "Password")
opener = urllib2.build_opener(auth)
res_data =
#Instead of res_data =, you can also write (after urllib2.install_opener(opener)):
# res_data = urllib2.Request(url)
# rsp = urllib2.urlopen(res_data)
#Calling read() on the returned object converts it to a string
res =
print res
#How to obtain the realm (domain name) used in digest authentication?
#Using the httplib library, the code is as follows
# -*- coding: utf-8 -*-

import httplib,urllib,urllib2
import base64
def http_get_digest_realm(url="",hostname="",port=""):
    iHttpPort = int(port)
    #Create an HTTP connection object; the hostname argument must not include the "http://" prefix
    httpClient = httplib.HTTPConnection(hostname, iHttpPort)
    #Arguments: request method, request path, request body, request headers; no return value - this just sends the request to the server
    httpClient.request('GET', url, '', {})
    #Get the response to the request just sent
    response = httpClient.getresponse()
    #Return value converted to string
    msg =  str(response.msg)
    print msg
    #Get the realm value
    realm = msg.split('realm="')[1].split('"')[0]
    return realm
print http_get_digest_realm("http://xx/xx/x/", "IP", "80")
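The realm extraction above is plain string splitting on the response headers, so it can be checked without a live server. The header text below is a made-up sample of what a digest-protected server might return, and `parse_realm` is a hypothetical helper mirroring the split in the function above:

```python
# Mirrors: realm = msg.split('realm="')[1].split('"')[0]
def parse_realm(msg):
    return msg.split('realm="')[1].split('"')[0]

# Made-up example of a digest challenge header
headers = ('WWW-Authenticate: Digest realm="", '
           'qop="auth", nonce="5ccc069c403ebaf9f0171e9517f40e41"')
assert parse_realm(headers) == ""
```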
The prototype of urlopen shows that data can be passed to it:
It is used to open a URL, which can be a string or a Request object. data specifies an additional string to send to the server. timeout sets the timeout for opening the URL. This method does not support cookies.
def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
            cafile=None, capath=None, cadefault=False, context=None):
    global _opener
    if cafile or capath or cadefault:
        if context is not None:
            raise ValueError(
                "You can't pass both context and any of cafile, capath, and "
                "cadefault")
        if not _have_ssl:
            raise ValueError('SSL support not available')
        context = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH,
                                             cafile=cafile,
                                             capath=capath)
        https_handler = HTTPSHandler(context=context)
        opener = build_opener(https_handler)
    elif context:
        https_handler = HTTPSHandler(context=context)
        opener = build_opener(https_handler)
    elif _opener is None:
        _opener = opener = build_opener()
    else:
        opener = _opener
    return, data, timeout)
The steps to use cookies to access a site are as follows:
import httplib,urllib,urllib2
import cookielib
cookie = cookielib.LWPCookieJar()   #create the cookie storage object (Python 2.x)
#In Python 3 the equivalent is cookie = http.cookiejar.CookieJar()
handler = urllib2.HTTPCookieProcessor(cookie)   #pass the cookie jar into an HTTPCookieProcessor handler
opener = urllib2.build_opener(handler)   #add the cookie handler to the opener being built
# is used the same way as urllib2.urlopen(); both accept a Request object or a URL
#Headers can be added to the opener. Note that each assignment to addheaders replaces
#the previous one, so set all headers in a single list:
opener.addheaders = [('User-agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'),
                     ('Accept', 'application/json, text/plain, */*'),
                     ('Content-Type', 'application/json;charset=UTF-8')]
urllib2.install_opener(opener)   #install the opener so urllib2.urlopen() uses it; if only one request style is used, this can live in an initialization method (e.g. the constructor)
#After that, the request can be written in three ways:
# 1) res_data = urllib2.Request(url)
#    rsp = urllib2.urlopen(res_data)
#    res =
# 2) res_data = urllib2.Request(url)
#    res_data.add_header("Connection", "keep-alive")   #headers can also be modified here
#    rsp = urllib2.urlopen(res_data)
#    res =
# 3) res =

#	urllib2.urlopen(url, data=None, timeout=<object>): opens a URL, which can be a string or a Request object. data specifies an additional string to send to the server; timeout sets the timeout for opening the URL. This method does not support cookies.
	#urllib2.Request(url, data, headers): constructs a Request object, which is then opened with urllib2.urlopen(). data specifies an additional string to send to the server.
	#The object returned by build_opener() has an open() method with the same function as the urlopen() function.
**About the partial source of the Request class**
It can be seen that this class supports three methods: POST, PUT and GET:
class Request(url_request.Request):
    """Extends url_request.Request to support all HTTP request types."""

    def __init__(self, url, data=None, method=None):
        """Initialise a new HTTP request.

        - url - String for the URL to send the request to.
        - data - Data to send with the request.
        """
        if method is None:
            method = data is not None and 'POST' or 'GET'
        elif method != 'POST' and method != 'PUT':
            data = None
        self._method = method
        url_request.Request.__init__(self, url, data=data)

    def get_method(self):
        """Returns the HTTP method used by this request."""
        return self._method
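The method-selection rule in `__init__` can be isolated and checked on its own. The helper name `choose_method` is mine, for illustration only; it reproduces the two branches above: with no explicit method, POST is chosen when a body is present and GET otherwise, while an explicit method other than POST/PUT drops the body.

```python
def choose_method(data=None, method=None):
    # No method given: POST if there is a body, else GET
    if method is None:
        method = data is not None and 'POST' or 'GET'
    # Explicit method other than POST/PUT: the body is discarded
    elif method != 'POST' and method != 'PUT':
        data = None
    return method, data

assert choose_method() == ('GET', None)
assert choose_method(data='x=1') == ('POST', 'x=1')
assert choose_method(data='x=1', method='PUT') == ('PUT', 'x=1')
assert choose_method(data='x=1', method='DELETE') == ('DELETE', None)
```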

Requests Library

Authentication with the requests library works as follows:

# -*- coding: utf-8 -*-
import requests
from requests.auth import HTTPDigestAuth

# Basic authentication
response = requests.get(url, headers=header, auth=(username, password))
# Digest authentication
response = requests.get(url, headers=header, auth=HTTPDigestAuth(username, password))
# Passing parameters:
response =, data=data, headers=self.header, auth=HTTPDigestAuth(username, password))
Session usage:
During interface testing we call multiple interfaces and send multiple requests, and sometimes we need to keep some shared data between them, such as cookies.

1. The session object of the requests library can help us keep some parameters across requests, and it also keeps cookies across all requests made by the same session instance.
s = requests.session()
# req_param = '{"belongId": "300001312","userName": "alitestss003","password":"pxkj88","captcha":"pxpx","captchaKey":"59675w1v8kdbpxv"}'
# res ='', json=json.loads(req_param))
# # res1 = s.get("")
#print(res.cookies.values())   # gets all the session cookies from the login

2. The session object of the requests library can also provide default data for the request methods, which is done by setting properties on the session object:
#Create a session object  
s = requests.Session()  
#Set the auth property of the session object as the default parameter of the request  
s.auth = ('user', 'pass')  
#Set the headers property of the session, and use the update method to merge the headers property of the other request methods as the headers of the final request method  
s.headers.update({'x-test': 'true'})  
#Send the request. If auth is not set here, the auth property of the session object will be used by default. The headers property here will be merged with the headers property of the session object  
r = s.get('', headers={'x-test2': 'true'})  
#s.cookie=cookie can be set
 With the settings above, the request is sent with headers including: {'Authorization': 'Basic dXNlcjpwYXNz', 'x-test': 'true', 'x-test2': 'true'}
#View the headers of the request that was actually sent
print r.request.headers   # prints all the header data sent with the request
res3 = s.get("",cookies = cookie)
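The 'Basic dXNlcjpwYXNz' value above is nothing more than the base64 encoding of 'user:pass' (the sample credentials set on the session), which can be reproduced with the standard library:

```python
import base64

# The Basic auth header is "Basic " + base64("user:pass")
token = base64.b64encode(b"user:pass").decode("ascii")
header = "Basic " + token
assert header == "Basic dXNlcjpwYXNz"
```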

A brief introduction to cookies and sessions:
(1) The cookie is generated by the server, stored in the response header and returned to the client, which will store the cookie. Then when the client sends the request, the user agent will automatically obtain the locally stored cookie, store the cookie information in the request header and send it to the server. The expiration time of cookies can be set at will. If they are not actively cleared, they can be kept for a long time, even if the computer is shut down.

(2) Session is generated by the server and stored in the memory, cache, database and other places of the server. After the client sends a request to the server, the server will generate a session according to the request information, and generate a session ID, which will be returned to the client through a cookie, so that each request in the future can identify who you are. When the client sends a request to the server again, the session ID will be sent to the server through a cookie

Because Session is rarely used here, only the general idea is given above.


Tags: Session JSON Python SSL

Posted on Sat, 11 Jan 2020 03:57:21 -0500 by rscott7706