Expanding asset monitoring and information collection

Differences between POC, EXP, Payload and Shellcode

1. Concept

POC: short for "Proof of Concept"; usually a piece of code that proves a vulnerability exists.
EXP: short for "Exploit"; code or a procedure that actually exploits a system vulnerability.
Payload: the code or instructions that are executed on the target system after a successful exploit.
Shellcode: literally "shell code"; a kind of Payload, so named because it typically establishes a forward or reverse shell.

2. Some notes

A POC proves that a vulnerability exists; an EXP actually exploits it. The two are usually not the same code: a POC is normally harmless, while an EXP is harmful. An EXP is typically developed on top of a POC.
Payloads come in many forms: a Payload can be Shellcode or simply a sequence of system commands. The same Payload can be reused across multiple vulnerabilities, but each vulnerability needs its own EXP; there is no universal EXP.
Shellcode also comes in many varieties: forward, reverse, and even Meterpreter shells.
Shellcode is not the same as Shellshock. Shellshock specifically refers to the Bash vulnerability (CVE-2014-6271) discovered in 2014.

3. Payload module

Among the six Metasploit framework module types there is a Payload module, which contains three kinds of payloads: singles, stagers and stages. A single is an all-in-one Payload that does not depend on any other file, so it tends to be relatively large. A stager is a small piece of code used to establish a connection first, which is useful when resources on the target machine are limited. A stage is the larger follow-up Payload that is downloaded over the connection the stager has established. There are many kinds of stagers and stages, suited to different scenarios.
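The slash-versus-underscore naming convention makes the staged/single distinction easy to spot in payload names. As a rough sketch (the classifier below is my own heuristic for illustration, not part of Metasploit):

```python
# Heuristic sketch: distinguish staged vs. single (stageless) Metasploit
# payload names by their conventional spelling. In Metasploit's naming,
# "windows/meterpreter/reverse_tcp" is staged (stager and stage split by '/'),
# while "windows/meterpreter_reverse_tcp" is a single, all-in-one payload.
def payload_kind(name: str) -> str:
    platform, _, rest = name.partition("/")
    # A further '/' after the platform part indicates a separate stager and stage.
    return "staged" if "/" in rest else "single"

print(payload_kind("windows/meterpreter/reverse_tcp"))   # prints "staged"
print(payload_kind("windows/meterpreter_reverse_tcp"))   # prints "single"
```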

4. Summary

Imagine yourself as a spy whose mission is to monitor an important person. One day you suspect that a window of the target's home has been left unlocked, so you walk up and push it open: that is a POC. You then go back and plan the next day's infiltration. The next day you slip into the house through that same window, carefully go through all the important documents, and plant a hidden listening device before you leave. What you do that day is an EXP, each thing you do inside the house is a different Payload, and the listening device itself can be thought of as the Shellcode.

GitHub monitoring

Convenient for collecting and organizing the latest EXPs and POCs, and for discovering the assets of the relevant test targets.

Querying subdomains in various ways

DNS records, ICP filing information, certificates
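Certificate-transparency logs are one practical certificate-based source of subdomains. A minimal sketch, assuming the JSON format returned by crt.sh (https://crt.sh/?q=%25.example.com&output=json); the helper function and sample data are illustrative:

```python
# Sketch: collect subdomains from certificate-transparency data. Each crt.sh
# JSON entry has a "name_value" field that may hold several names separated
# by newlines, possibly with wildcard prefixes like "*.".
def subdomains_from_crtsh(entries, domain):
    found = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")  # drop wildcard prefixes
            if name == domain or name.endswith("." + domain):
                found.add(name)
    return sorted(found)

# Offline example shaped like crt.sh's JSON output:
sample = [
    {"name_value": "www.example.com\n*.mail.example.com"},
    {"name_value": "dev.example.com"},
]
print(subdomains_from_crtsh(sample, "example.com"))
# prints ['dev.example.com', 'mail.example.com', 'www.example.com']
```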

Requesting from global nodes to identify CDN usage

Enumerating or resolving subdomains
Helps discover registration information related to the administrator
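A quick complement to requesting from global nodes is to inspect the host's CNAME: if it points at a well-known CDN domain, the resolved addresses are edge nodes rather than the real origin. A heuristic sketch; the suffix list below is an assumption and far from exhaustive:

```python
# Rough heuristic: a CNAME chain ending in a known CDN domain suggests the
# target is behind a CDN, so global-node requests will hit edge IPs.
KNOWN_CDN_SUFFIXES = (
    ".cloudfront.net", ".akamaiedge.net", ".edgekey.net",
    ".fastly.net", ".cdn.cloudflare.net",
)

def looks_like_cdn(cname: str) -> bool:
    cname = cname.rstrip(".").lower()  # drop the trailing dot of a FQDN
    return cname.endswith(KNOWN_CDN_SUFFIXES)

print(looks_like_cdn("d111111abcdef8.cloudfront.net."))  # prints True
print(looks_like_cdn("origin.example.com."))             # prints False
```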

Searches via cyberspace ("dark") search engines

FOFA, Shodan, ZoomEye

Obtaining WeChat Official Account interfaces

Internal groups, internal applications, internal interfaces
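One common way to implement the "WeChat push" used by scripts like the demo in the next section is a WeChat Work group-robot webhook. A sketch assuming its documented text-message JSON shape; the endpoint key is a placeholder:

```python
import json

# Sketch: build the JSON body for a WeChat Work group-robot webhook message.
# The msgtype/text structure follows the WeChat Work robot docs; replace
# YOUR_KEY with the robot key from your own group.
WEBHOOK = "https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=YOUR_KEY"

def build_text_message(content: str) -> str:
    return json.dumps({"msgtype": "text", "text": {"content": content}},
                      ensure_ascii=False)

body = build_text_message("New repo found: https://github.com/...")
print(body)
# Sending it would be: requests.post(WEBHOOK, data=body.encode("utf-8"))
```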

Demonstration case

Monitoring the latest EXP releases and related activity

#Title: wechat push CVE-2020
#Date: 2020-5-9
#Exploit Author: weixiao9188
#Version: 4.0
#Tested on: Linux,windows
#cd /root/sh/git/ && nohup python3 /root/sh/git/git.py &
#coding:UTF-8
import requests
import json
import time
import os
import pandas as pd
time_sleep = 60 # Crawl every 60 seconds
while True:
    headers1 = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3741.400 QQBrowser/10.5.3863.400"}
    # Determine whether the history file exists
    datas = []
    requests.packages.urllib3.disable_warnings()
    if os.path.exists("olddata.csv"):
        # The file exists: only crawl the 10 most recently updated results
        df = pd.read_csv("olddata.csv", header=None)
        datas = df.where(df.notnull(), None).values.tolist() # convert NaN in the loaded data to None
        response1 = requests.get(url="https://api.github.com/search/repositories?q=CVE2020&sort=updated&per_page=10", headers=headers1, verify=False)
        response2 = requests.get(url="https://api.github.com/search/repositories?q=RCE&sort=updated&per_page=10", headers=headers1, verify=False)
    else:
        # No history file yet: crawl everything
        response1 = requests.get(url="https://api.github.com/search/repositories?q=CVE2020&sort=updated&order=desc", headers=headers1, verify=False)
        response2 = requests.get(url="https://api.github.com/search/repositories?q=RCE&sort=updated&order=desc", headers=headers1, verify=False)
    data1 = json.loads(response1.text)
    data2 = json.loads(response2.text)
    for j in [data1["items"], data2["items"]]:
        for i in j:
            s = {"name": i['name'], "html": i['html_url'], "description": i['description']}
            s1 = [i['name'], i['html_url'], i['description']]
            if s1 not in datas:
                print(s) # new repository: hook the push notification in here
                datas.append(s1)
    pd.DataFrame(datas).to_csv("olddata.csv", header=False, index=False)
    time.sleep(time_sleep)

Posted on Mon, 01 Nov 2021 03:45:52 -0400 by cmburns69