Implementing a Simple HTTP Server in Python
The requirement is this: call a server-side program to collect information about a GPU server and return it to the front end for display. So we need to write a server-side program that gathers the data and then returns it in JSON format.
The result looks like this:
The page shows no content because the server program has not been started yet. Now let's write the server program:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import json
from bottle import route, run, response

@route('/')
@route('/xiandao')
def func():
    # Invoke deviceQuery and nvidia-smi, redirecting both outputs into gpu_info
    os.system("./deviceQuery > gpu_info")
    os.system("nvidia-smi >> gpu_info")
    gpu_info_dict = {}  # rebuilt on every request so stale values never linger
    with open('gpu_info', 'r') as fs:
        for i, line in enumerate(fs, start=1):
            a = line.strip().split(":")
            if i == 7:        # Device 0: "Tesla K40m"
                gpu_info_dict['device'] = a[-1].strip().strip('"')
            elif i == 9:      # CUDA Capability Major/Minor version number
                gpu_info_dict['cuda_version_number'] = a[-1].strip()
            elif i == 10:     # Total amount of global memory
                gpu_info_dict['global_memory'] = a[-1].strip()
            elif i == 11:     # (15) Multiprocessors, (192) CUDA Cores/MP
                gpu_info_dict['total_cores'] = a[-1].strip()
            elif i == 12:     # GPU Clock rate
                gpu_info_dict['gpu_clock_rate'] = a[-1].strip()
            elif i == 13:     # Memory Clock rate
                gpu_info_dict['mem_clock_rate'] = a[-1].strip()
            elif i == 14:     # Memory Bus Width
                gpu_info_dict['mem_bus_width'] = a[-1].strip()
            elif i == 19:     # Total amount of constant memory
                gpu_info_dict['constant_mem'] = a[-1].strip()
            elif i == 20:     # Total amount of shared memory per block
                gpu_info_dict['shared_mem'] = a[-1].strip()
            elif i == 21:     # Total number of registers available per block
                gpu_info_dict['registers_available'] = a[-1].strip()
            elif i == 50:     # nvidia-smi status row: power and memory usage
                l1 = line[19:32].strip().split("/")
                gpu_info_dict['power_used'] = l1[0].strip()
                gpu_info_dict['power_capacity'] = l1[1].strip()
                l2 = line[34:55].strip().split("/")
                gpu_info_dict['mem_used'] = l2[0].strip()
                gpu_info_dict['mem_capacity'] = l2[1].strip()
    # Serialize the dict into a valid JSON string and return it
    response.content_type = 'application/json'
    return json.dumps(gpu_info_dict)

run(host='172.16.1.20', port=8088, debug=True)
1) bottle is a fast and simple micro framework for small web applications (http://yunpan.cn/cytIgzQXPjeaS (access code: 8e71)).
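For readers new to the framework, the canonical minimal bottle app looks like this (host and port here are arbitrary examples):

from bottle import route, run

@route('/hello')
def hello():
    return "Hello World!"

run(host='localhost', port=8080, debug=True)

Visiting http://localhost:8080/hello then returns the string from the handler, which is exactly the pattern the server above follows.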
2) The two os.system calls run the deviceQuery program and the nvidia-smi command, redirecting the GPU information into the file gpu_info. The resulting file looks like this:
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla K40m"
  CUDA Driver Version / Runtime Version          5.5 / 5.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 11520 MBytes (12079136768 bytes)
  (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
  GPU Clock rate:                                876 MHz (0.88 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           3 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 5.5, CUDA Runtime Version = 5.5, NumDevs = 1, Device0 = Tesla K40m
Result = PASS

Tue Jan 13 21:18:02 2015
+------------------------------------------------------+
| NVIDIA-SMI 5.319.37   Driver Version: 319.37         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K40m          Off  | 0000:03:00.0     Off |                    0 |
| N/A   27C    P0    61W / 235W |       69MB / 11519MB |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+
3) Now we need to extract the key pieces of information from this file and assemble them into a JSON string to return.
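The handler above collects each extracted field into gpu_info_dict and hands it to json.dumps. As a standalone illustration of that serialization step (field values taken from the sample output above, trimmed to a few keys):

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Standalone illustration of the serialization step: json.dumps turns the
# dict of extracted fields into the JSON string that the handler returns.
import json

gpu_info_dict = {
    'device': 'Tesla K40m',
    'gpu_clock_rate': '876 MHz (0.88 GHz)',
    'power_used': '61W',
    'mem_used': '69MB',
}
print json.dumps(gpu_info_dict)
# e.g. {"device": "Tesla K40m", "gpu_clock_rate": "876 MHz (0.88 GHz)", ...}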
Now run the program:
A quick look at the nohup command:
The purpose of nohup is to make the submitted command ignore the hangup (SIGHUP) signal, so the program keeps running in the background after the session ends. It is very easy to use: just prefix the command with nohup. By default, standard output and standard error are redirected to the file nohup.out. We usually also append "&" at the end to put the command in the background, and we can use ">filename 2>&1" to change the default redirection target.
[root@pvcent107 ~]# nohup ping www.ibm.com &
[1] 3059
nohup: appending output to `nohup.out'

[root@pvcent107 ~]# ps -ef | grep 3059
root      3059   984  0 21:06 pts/3    00:00:00 ping www.ibm.com
root      3067   984  0 21:06 pts/3    00:00:00 grep 3059
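By the same pattern, our server can be launched as nohup python run.py > server.log 2>&1 & so that it keeps running after the terminal session ends (run.py is the script name referenced in the next section; server.log is an arbitrary choice of log file).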
Port mapping:
Note that in run.py, ip=172.16.1.20 (serverA) and port=8088. This IP cannot be accessed directly; it is only reachable through a jump host (ip: http://172.21.7.224). We therefore need to set up a mapping from a port on the jump host to serverA, so that accessing that port on the jump host is equivalent to accessing the application listening on the corresponding port of serverA.
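As a minimal sketch of the idea, a tiny Python TCP relay running on the jump host could provide such a mapping (the listen port 8088 is an assumption for illustration; in practice a tool such as ssh -L port forwarding or iptables rules is the usual choice):

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Illustrative TCP relay: connections to LISTEN_PORT on the jump host
# are forwarded to serverA, byte for byte, in both directions.
import socket
import threading

LISTEN_PORT = 8088                # port exposed on the jump host (assumed)
TARGET = ('172.16.1.20', 8088)    # serverA address and port from run.py

def pipe(src, dst):
    # Copy bytes from one socket to the other until the stream closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def handle(client):
    # Open a connection to serverA and relay traffic in both directions.
    server = socket.create_connection(TARGET)
    threading.Thread(target=pipe, args=(client, server)).start()
    threading.Thread(target=pipe, args=(server, client)).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(('0.0.0.0', LISTEN_PORT))
listener.listen(5)
while True:
    conn, addr = listener.accept()
    handle(conn)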
With the server program running, the data can now be fetched from a browser via that IP:
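Or, as a quick check from another machine, a minimal Python client can fetch and decode the response (the URL below assumes the jump host forwards port 8088 to serverA:8088, as in the sketch above):

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Minimal client sketch: fetch and decode the GPU info JSON.
import json
import urllib2  # Python 2, matching the server script

resp = urllib2.urlopen("http://172.21.7.224:8088/xiandao")
info = json.loads(resp.read())
for key in sorted(info):
    print "%s: %s" % (key, info[key])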
The data comes back successfully :)
Author: 忆之独秀
Email: leaguenew@qq.com
Source: http://blog.csdn.net/lavorange/article/details/42684851