Collect server performance data during stress testing


Server performance data is collected by combining Python scripts with Linux commands. Whether collection has finished is determined from the server's current number of TCP connections during the test.

The script performs three operations. First, it does the raw collection of performance data by calling the Linux sar and iostat commands and writing their output to raw files. After acquisition completes, the indicator-extraction script runs, pulls the valid data out of the raw indicator files into final files, and then packages the result.


The code was written only to meet my needs at work; it is not especially good, but it does the job, nothing more.

The configuration file used to extract data from the raw files is chosen according to the server's language setting:

abstractConf_ch.xml—Chinese

abstractConf_en.xml—English

The configuration file mainly records the raw file paths and uses the Linux cat, egrep, and awk commands to extract the required data from each file. (The egrep -v filter drops the "Linux ..." header line, blank lines, and the repeated column headers that sar writes into its output.)

<?xml version='1.0' encoding='utf-8'?>
<abstract>
    <res_file name="res/CPU">
        <uniqflag>CPU</uniqflag>
        <object_file>result/cpu_status</object_file>
        <graphtitle>Cpu_Status</graphtitle>
        <linelabel>%user %system</linelabel>
        <x_y_label>Time(s) Cpu_Percent(%)</x_y_label>
        <cmd>cat %s | egrep -v "Linux|^$|%s" | awk 'BEGIN {print "%s%s%s"}{if($2 !~/AM|PM/) print $3,$5 }' >> %s</cmd>
    </res_file>
    <!-- ... more res_file entries ... -->
</abstract>

Get the number of service connections

# coding:utf-8
# __author__ = 'Libiao'
import subprocess


class GetLinkingNumber(object):
    def __init__(self):
        pass

    def getLinkingNumber(self, servers):
        ret = []
        if isinstance(servers, str):
            num = subprocess.Popen("netstat -tnap | grep tcp | grep %s | wc -l" % servers,
                                   stdout=subprocess.PIPE, shell=True).stdout
            ret.append(int(num.readline().strip()))
        elif isinstance(servers, dict):
            for k, v in servers.items():
                num = subprocess.Popen("netstat -tnap | grep tcp | grep %s | wc -l" % v,
                                       stdout=subprocess.PIPE, shell=True).stdout
                ret.append(int(num.readline().strip()))
        return ret
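The grep | wc pipeline above can also be expressed in Python itself, which avoids spawning three extra processes per sample. This helper is my own alternative sketch, not part of the original scripts; the name count_tcp_links is made up.

```python
import subprocess

def count_tcp_links(pattern, netstat_output=None):
    # Equivalent of: netstat -tnap | grep tcp | grep <pattern> | wc -l
    # If no output is supplied, run netstat (requires net-tools on Linux).
    if netstat_output is None:
        netstat_output = subprocess.check_output(
            "netstat -tnap", shell=True).decode()
    return sum(1 for line in netstat_output.splitlines()
               if 'tcp' in line and pattern in line)
```

Passing captured output in makes the filter easy to unit-test without a live server.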

The Linux commands executed by the main program (collect.sh):

#!/bin/bash
# Each sampler runs in the background with a 10-second interval.
sar -n DEV 10 > res/NetWork &            # network throughput per interface
iostat -x -d -k 10 > res/Disk &          # disk utilization
sar -r 10 > res/Memory &                 # memory usage
sar -q 10 > res/System_load_average &    # run queue and load average
sar -u 10 > res/CPU &                    # CPU utilization
sar -b 10 > res/TPS &                    # I/O transfer rates

Main data-acquisition script

# -*- coding:utf-8 -*-
"""
Created on October 16, 2015

@author: LiBiao
"""
import time
import os
import subprocess
import multiprocessing

from write_log import writeLog
import del_old_file
from record_test_data import Record_Data
from server_memory_collect import serverMemoryCollect
from get_linking_number import GetLinkingNumber

# Parameters that need to be set manually
SERVERS_D = {'1935': 'srs-rtmp', '18080': 'srs-hls', '80': 'nginx'}  # can be srs, nginx or ATS

# Sampling interval (seconds)
INTERVAL_TIME = 10


class KPI_Collect(object):
    def __init__(self):
        self.getLinkNum = GetLinkingNumber()
        self.TCP_COUNT = self.getLinkNum.getLinkingNumber(SERVERS_D)
        self.tcpRecord = Record_Data("res/linking_number")

    # Join a list of counts into a single space-separated record
    def getStr(self, alist):
        ret = ""
        for s in alist:
            ret += str(s)
            ret += ' '
        return [ret.rstrip(' ')]

    # Perform server performance data collection by calling the collect.sh script
    def sys_kpi_collect(self):
        flag = '1'
        cmds = ['./collect.sh']
        popen = subprocess.Popen(cmds[0], stdout=subprocess.PIPE, shell=True)
        writeLog('INFO', ">>>>> Performance indicator collection process is running...")
        self.to_stop_subprocess(flag, popen)

    # Stop the handle opened by sys_kpi_collect
    def to_stop_subprocess(self, flag, popen):
        curr_tcpnum = self.getLinkNum.getLinkingNumber(SERVERS_D)
        self.tcpRecord.recordData(["srs&nginx Linking",
                                   "%s %s %s" % tuple(SERVERS_D.values()),
                                   "Time(s) Numbers"])
        self.tcpRecord.recordData(self.getStr(self.TCP_COUNT))
        if flag == '1':
            loops = 0
            while True:
                if sum(curr_tcpnum) <= sum(self.TCP_COUNT):
                    if loops == 15:
                        # The current number of connections has stayed at or below the
                        # initial number for 15s, so exit the program.
                        # Kill the sar and iostat processes still left in the system.
                        names = ['sar', 'iostat']
                        cmd = "killall -9 %s %s" % tuple(names)
                        subprocess.call(cmd, shell=True)
                        # Terminate the child process
                        popen.kill()
                        if subprocess.Popen.poll(popen) is not None:
                            break
                        else:
                            writeLog("INFO", ">>>> Waiting for the child process to terminate")
                    else:
                        loops += 5
                        time.sleep(5)
                else:
                    loops = 0
                    time.sleep(INTERVAL_TIME)  # wait INTERVAL_TIME seconds
                    curr_tcpnum = self.getLinkNum.getLinkingNumber(SERVERS_D)
                    self.tcpRecord.recordData(self.getStr(curr_tcpnum))
            writeLog("INFO", ">>>> Performance indicator acquisition completed")
        else:
            while True:
                if subprocess.Popen.poll(popen) is not None:
                    break
                else:
                    writeLog("INFO", ">>>> Waiting for the child process to terminate")
            writeLog("INFO", ">>>> Performance indicator acquisition completed")

    # Determine whether sar or iostat processes remain in the system
    def is_process_exists(self, name):
        cmd = "ps ax | grep %s | grep -v grep" % name
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        p.wait()
        if p.stdout.readline():
            return 1
        return 0

    def main_start(self):
        start_times = 0.0
        timeRecord = Record_Data("res/timeConsum")
        for server, num in zip(SERVERS_D.values(), self.TCP_COUNT):
            writeLog("INFO", ">>> Initial %s service connections: %d" % (server, num))
        curr_tcpN = self.getLinkNum.getLinkingNumber(SERVERS_D)
        time.sleep(10)
        while True:
            # The stress test has started once connections exceed the initial count
            if sum(curr_tcpN) > sum(self.TCP_COUNT):
                start_times = time.time()
                for server, num in zip(SERVERS_D.values(), curr_tcpN):
                    writeLog("INFO", ">>> Indicator collection task starts, current %s connections: %d" % (server, num))
                # Delete old kpi files
                del_old_file.Del_Old_File("res/").del_old_file()
                # Separate processes run the memory metrics collection task for the
                # other services (srs, nginx, etc.)
                for port, server in SERVERS_D.items():
                    multiprocessing.Process(target=serverMemoryCollect,
                                            args=([port, server], INTERVAL_TIME,
                                                  sum(self.TCP_COUNT),
                                                  self.getLinkNum)).start()
                # Collect server system kpi indicators
                self.sys_kpi_collect()
                writeLog("INFO", ">>>>> Performance data collection is over!")
                time_consum = time.time() - start_times
                timeRecord.recordData(["%s" % str(time_consum)])
                break
            else:
                time.sleep(1)
                curr_tcpN = self.getLinkNum.getLinkingNumber(SERVERS_D)


if __name__ == '__main__':
    kpiCollect = KPI_Collect()
    kpiCollect.main_start()

Service process memory collection (server_memory_collect.py)

# -*- coding:utf-8 -*-
"""
Created on October 16, 2015

@author: LiBiao
"""
import time
import subprocess

from write_log import writeLog
from record_test_data import Record_Data


# Record the memory used by the server process
def serverMemoryCollect(servers, intervaltime, tcpNum, getLinkObj):
    getLinkNum = getLinkObj
    memRecord = Record_Data("res/%s" % (servers[1] + ":" + servers[0]))
    cmd = "ps -ef | grep %s | grep -v grep | awk '{print $2}'" % servers[1]
    f = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
    writeLog("INFO", ">>>>> %s indicator collection process executing..." % servers[1])
    pids = [pid.strip() for pid in f.stdout]
    heard = [servers[1], 'used', 'Linking_Number Memory_Capacity(MB)']
    try:
        memRecord.recordData(heard)
        curr_tcpN = sum(getLinkNum.getLinkingNumber(servers[0]))
        loops = 0
        while True:
            vrss = []
            for p in pids:
                cmd2 = "cat /proc/%s/status | grep VmRSS | awk '{print $2}'" % p
                rss = subprocess.Popen(cmd2, stdout=subprocess.PIPE, shell=True).stdout
                vrss.append(int(rss.readline().strip()))
            memRecord.recordData(['%s' % str(sum(vrss) / 1024)])
            if curr_tcpN <= tcpNum:
                if loops == 15:
                    # The current number of connections has stayed at or below the
                    # initial number for 15s, the program exits
                    break
                else:
                    loops += 5
                    time.sleep(5)
            else:
                loops = 0
                time.sleep(intervaltime)
            curr_tcpN = sum(getLinkNum.getLinkingNumber(servers[0]))
        writeLog("INFO", ">>>> %s process memory acquisition completed" % servers[1])
    except IOError as err:
        writeLog("INFO", "File error: " + str(err))
        return 0
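The loop above shells out to cat | grep | awk once per pid on every sample. Reading /proc/<pid>/status directly gives the same VmRSS value with no child processes; this vmrss_kb helper is my own sketch, not part of the original module.

```python
def vmrss_kb(pid, status_text=None):
    """Return a process's resident set size in kB, as reported by the
    VmRSS line of /proc/<pid>/status (Linux only)."""
    if status_text is None:
        with open('/proc/%s/status' % pid) as f:
            status_text = f.read()
    for line in status_text.splitlines():
        if line.startswith('VmRSS'):
            return int(line.split()[1])  # fields: 'VmRSS:', value, 'kB'
    return 0  # kernel threads have no VmRSS line
```

The optional status_text argument lets the parser be tested against a captured status file instead of a live pid.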


Extract valid data from raw data files and write to new files

# -*- coding: utf-8 -*-
'''
Created on September 14, 2015

@author: LiBiao
'''
import os
import time
import subprocess

import getCmds
import del_old_file
from write_log import writeLog

# Requires manually configured data
# SERVER_NAME = ['srs_2.0.0.', 'nginx']  # can enter nginx or srs
SERVERS_D = {'1935': 'srs-rtmp', '18080': 'srs-hls', '80': 'nginx'}

# System language encoding
LANG = "en_US.UTF-8"


# Get the language currently used by the system
def getSysLANG():
    popen = subprocess.Popen('echo $LANG', stdout=subprocess.PIPE, shell=True)
    return popen.stdout.read().strip()


# Get the corresponding configuration file path according to the system language code
def getConfPath():
    if getSysLANG() == LANG:
        return "./conf/abstractConf_en.xml"
    return "./conf/abstractConf_ch.xml"


class AbstractKPI(object):
    def __init__(self, *args):
        (self.cmds,) = args

    def abstract_kpi(self):
        for cmd in self.cmds:
            subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)


# Get the local ip address, used to distinguish this machine's data from others'
def get_local_ip():
    try:
        ip = os.popen("ifconfig | grep 'inet addr' | awk '{print $2}'").read()
        ip = ip[ip.find(':') + 1:ip.find('\n')]  # keep the first address line
    except Exception as e:
        print(e)
    return ip


# Package the final collected data
def to_tar():
    ip = get_local_ip()
    times = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
    subprocess.call("cp res/linking_number res/timeConsum"
                    + " res/%s" * len(SERVERS_D) % tuple([v + "\\:" + k for k, v in SERVERS_D.items()])
                    + " result/", shell=True)
    files = ["result/" + filename for filename in os.listdir("result/")]
    cmd = 'tar -cf SYS_KPI_' + ip + "_" + times + '.tar' + ' %s' * len(files) % tuple(files)
    try:
        subprocess.call(cmd, shell=True)
    except Exception as err:
        writeLog("ERROR", ">>>> File compression error %s" % str(err))
        exit()
    writeLog("INFO", ">>>> The indicator files have been packaged")


# Script main entry function
def main_start():
    # Delete old kpi files
    del_old_file.Del_Old_File("result/").del_old_file()
    # Get the configuration file path
    confpath = getConfPath()
    # Call getCmds to get the commands that parse the kpi files
    cmds = getCmds.Get_Cmds(confpath).getcmds()
    # Extract useful data from the original indicator files
    AbstractKPI(cmds).abstract_kpi()
    # Pack up the parsed kpi files in the result directory
    to_tar()
    writeLog("INFO", ">>>> Index data extraction and packaging completed")


if __name__ == '__main__':
    main_start()

The data in these scripts is collected with Linux commands. In fact this is not the most suitable approach; it was only what met the needs of the job at the time. I am currently switching to methods from the Python third-party module psutil for server performance collection, which makes the script fit the Python development model better.
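As a taste of that direction, here is a minimal, hypothetical sketch of one psutil-based sampling pass. The function collect_once and its field names are illustrative only, not the author's actual replacement code, and it requires the psutil package to be installed.

```python
import psutil

def collect_once(proc_name):
    """One sampling pass with psutil instead of sar/netstat: system CPU and
    memory, plus RSS and established TCP connections for a named process."""
    sample = {
        'cpu_percent': psutil.cpu_percent(interval=1),        # system-wide CPU %
        'mem_used_mb': psutil.virtual_memory().used // (1024 * 1024),
        'rss_kb': 0,
        'tcp_established': 0,
    }
    for p in psutil.process_iter(['name']):
        if p.info['name'] != proc_name:
            continue
        try:
            sample['rss_kb'] += p.memory_info().rss // 1024
            sample['tcp_established'] += sum(
                1 for c in p.connections(kind='tcp')
                if c.status == psutil.CONN_ESTABLISHED)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass  # process exited or belongs to another user
    return sample
```

Everything the sar/netstat/proc pipelines above extract with shell commands comes back here as plain Python values, which removes the locale-dependent config files entirely.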
