Looking for help: how do I write a Python socket server on Windows that uses SQLite under heavy multi-threaded concurrency (3000+ connections)? The environment is Python 2.7.

A multi-threaded socket server module written in Python
#!/usr/bin/env python
# -*- coding: utf8 -*-
# [SNIPPET_NAME: Threaded Server]
# [SNIPPET_CATEGORIES: Python Core, socket, threading]
# [SNIPPET_DESCRIPTION: Simple example of Python's socket and threading modules]
# [SNIPPET_DOCS: http://docs.python.org/library/socket.html, http://docs.python.org/library/threading.html]
# [SNIPPET_AUTHOR: Gonzalo]
# [SNIPPET_LICENSE: GPL]
import sys
import socket
import threading
import time

QUIT = False

class ClientThread( threading.Thread ):
    '''
    Class that implements the client threads in this server
    '''
    def __init__( self, client_sock ):
        '''
        Initialize the object, save the socket that this thread will use.
        '''
        threading.Thread.__init__( self )
        self.client = client_sock

    def run( self ):
        '''
        Thread's main loop. Once this function returns, the thread is finished.
        '''
        # Need to declare QUIT as global, since the method can change it
        global QUIT
        done = False
        cmd = self.readline()
        # Read data from the socket and process it
        while not done:
            if 'quit' == cmd:
                self.writeline( 'Ok, bye' )
                QUIT = True
                done = True
            elif 'bye' == cmd:
                self.writeline( 'Ok, bye' )
                done = True
            elif '' == cmd:
                # An empty string means the client disconnected
                done = True
            else:
                self.writeline( self.name )
                cmd = self.readline()
        # Make sure the socket is closed once we're done with it
        self.client.close()

    def readline( self ):
        '''
        Helper function, reads up to 1024 chars from the socket, and returns
        them as a string, all letters in lowercase, and without any end of line
        markers.
        '''
        result = self.client.recv( 1024 )
        if result:
            result = result.strip().lower()
        return result

    def writeline( self, text ):
        '''
        Helper function, writes the given string to the socket, with an end of
        line marker appended at the end.
        '''
        self.client.send( text.strip() + '\n' )

class Server:
    '''
    Server class. Opens up a socket and listens for incoming connections.
    Every time a new connection arrives, it creates a new ClientThread
    object and defers the processing of the connection to it.
    '''
    def __init__( self ):
        self.sock = None
        self.thread_list = []

    def run( self ):
        '''
        Server main loop.
        Creates the server (incoming) socket, and listens on it for incoming
        connections. Once an incoming connection is detected, creates a
        ClientThread to handle it, and goes back to listening mode.
        '''
        all_good = False
        try_count = 0
        # Attempt to open the socket
        while not all_good:
            if 3 < try_count:
                # Tried more than 3 times, without success... Maybe the port
                # is in use by another program
                sys.exit( 1 )
            try:
                # Create the socket
                self.sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
                # Bind it to the interface and port we want to listen on
                self.sock.bind( ( '127.0.0.1', 5050 ) )
                # Listen for incoming connections. This server can handle up to
                # 5 simultaneous connections
                self.sock.listen( 5 )
                all_good = True
            except socket.error, err:
                # Could not bind on the interface and port, wait for 10 seconds
                print 'Socket connection error... Waiting 10 seconds to retry.'
                del self.sock
                time.sleep( 10 )
                try_count += 1

        print "Server is listening for incoming connections."
        print "Try to connect through the command line, with:"
        print "telnet localhost 5050"
        print "and then type whatever you want."
        print "typing 'bye' finishes the thread, but not the server ",
        print "(eg. you can quit telnet, run it again and get a different ",
        print "thread name"
        print "typing 'quit' finishes the server"

        try:
            # NOTE - No need to declare QUIT as global, since the method never
            # changes its value
            while not QUIT:
                try:
                    # Wait for half a second for incoming connections
                    self.sock.settimeout( 0.500 )
                    client = self.sock.accept()[0]
                except socket.timeout:
                    # No connection detected, sleep for one second, then check
                    # if the global QUIT flag has been set
                    time.sleep( 1 )
                    if QUIT:
                        print "Received quit command. Shutting down..."
                        break
                    continue
                # Create the ClientThread object and let it handle the incoming
                # connection
                new_thread = ClientThread( client )
                print 'Incoming Connection. Started thread ',
                print new_thread.getName()
                self.thread_list.append( new_thread )
                new_thread.start()
                # Go over the list of threads, remove those that have finished
                # (their run method has finished running) and wait for them
                # to fully finish
                for thread in self.thread_list:
                    if not thread.isAlive():
                        self.thread_list.remove( thread )
                        thread.join()
        except KeyboardInterrupt:
            print 'Ctrl+C pressed... Shutting Down'
        except Exception, err:
            print 'Exception caught: %s\nClosing...' % err

        # Clear the list of threads, giving each thread 1 second to finish.
        # NOTE: There is no guarantee that the thread has finished in the
        # given time. You should always check if the thread isAlive() after
        # calling join() with a timeout parameter to detect if the thread
        # did finish in the requested time.
        for thread in self.thread_list:
            thread.join( 1.0 )
        # Close the socket once we're done with it
        self.sock.close()

if "__main__" == __name__:
    server = Server()
    server.run()
    print "Terminated"
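The question at the top also asks about SQLite under thousands of threads. An SQLite connection should not be shared across threads; a common workaround is to give one dedicated thread ownership of the connection and feed it work through a queue. The sketch below (Python 3 syntax; the names `sqlite_writer` and `jobs` are mine, not from the snippet above) illustrates the pattern:

```python
import os
import queue
import sqlite3
import tempfile
import threading

def sqlite_writer(db_path, jobs):
    # The only thread that ever touches the SQLite connection.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
    while True:
        item = jobs.get()
        if item is None:          # sentinel: shut down cleanly
            break
        conn.execute("INSERT INTO log (msg) VALUES (?)", (item,))
        conn.commit()
    conn.close()

fd, db_path = tempfile.mkstemp(suffix=".db")
os.close(fd)
jobs = queue.Queue()
writer = threading.Thread(target=sqlite_writer, args=(db_path, jobs))
writer.start()

# Any number of client threads can enqueue work concurrently;
# the writes themselves stay serialized in the owner thread.
for i in range(100):
    jobs.put("message %d" % i)
jobs.put(None)                    # ask the writer to stop
writer.join()

count = sqlite3.connect(db_path).execute("SELECT COUNT(*) FROM log").fetchone()[0]
os.remove(db_path)
print(count)  # 100
```

Client threads never open the database themselves; they only enqueue work, which also keeps SQLite's single-writer locking out of the hot path.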
# This code snippet comes from: /codes/python/2356
5.1  ring_no_io_stackless.py

#!/Library/Frameworks/Python.framework/Versions/2.5/bin/python
# encoding: utf-8
import sys
import stackless as SL

def run_benchmark(n, m):
    # print(">> Python 2.5.1, stackless 3.1b3 here (N=%d, M=%d)!\n" % (n, m))
    firstP = cin = SL.channel()
    seqn = 1
    for s in xrange(1, n):
        cout = SL.channel()
        # print("*> s = %d" % (seqn, ))
        t = SL.tasklet(loop)(seqn, cin, cout)
        cin = cout
        seqn = s + 1
    # print("$> s = %d" % (seqn, ))
    t = SL.tasklet(mloop)(seqn, cin)
    for r in xrange(m-1, -1, -1):
        # print("+ sending Msg# %d" % (r, ))
        firstP.send(r)
    SL.schedule()

def loop(s, cin, cout):
    while True:
        r = cin.receive()
        cout.send(r)
        if r > 0:
            # print(": Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), s, r))
            continue
        # print("* Proc: <%s>, Seq#: %s, Msg#: terminate!" % (pid(), s))
        break

def mloop(s, cin):
    while True:
        r = cin.receive()
        if r > 0:
            # print("< Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), s, r))
            continue
        # print("@ Proc: <%s>, Seq#: %s, ring terminated." % (pid(), s))
        break

def pid():
    return repr(SL.getcurrent()).split()[-1][2:-1]

if __name__ == '__main__':
    run_benchmark(int(sys.argv[1]), int(sys.argv[2]))
5.2  ring_no_io_thread.py

#!/Library/Frameworks/Python.framework/Versions/2.5/bin/python
# encoding: utf-8
import sys, time
import thread

SLEEP_TIME = 0.0001

def run_benchmark(n, m):
    # print(">> Python 2.5.1, stackless 3.1b3 here (N=%d, M=%d)!\n" % (n, m))
    locks = [thread.allocate_lock() for i in xrange(n)]
    firstP = cin = []
    cin_lock_id = 0
    seqn = 1
    for s in xrange(1, n):
        cout = []
        cout_lock_id = s
        # print("*> s = %d" % (seqn, ))
        thread.start_new_thread(loop, (seqn, locks, cin, cin_lock_id, cout, cout_lock_id))
        cin = cout
        cin_lock_id = cout_lock_id
        seqn = s + 1
    # print("$> s = %d" % (seqn, ))
    thread.start_new_thread(mloop, (seqn, locks, cin, cin_lock_id))
    for r in xrange(m-1, -1, -1):
        # print("+ sending Msg# %d" % (r, ))
        lock = locks[0]
        lock.acquire()
        firstP.append(r)
        lock.release()
        time.sleep(SLEEP_TIME)
    while True:
        # wait until mloop calls thread.interrupt_main()
        time.sleep(SLEEP_TIME)

def loop(s, locks, cin, cin_lock_id, cout, cout_lock_id):
    while True:
        lock = locks[cin_lock_id]
        lock.acquire()
        if len(cin) > 0:
            r = cin.pop(0)
            lock.release()
        else:
            lock.release()
            time.sleep(SLEEP_TIME)
            continue
        lock = locks[cout_lock_id]
        lock.acquire()
        cout.append(r)
        lock.release()
        if r > 0:
            # print(": Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), s, r))
            continue
        # print("* Proc: <%s>, Seq#: %s, Msg#: terminate!" % (pid(), s))
        break

def mloop(s, locks, cin, cin_lock_id):
    while True:
        lock = locks[cin_lock_id]
        lock.acquire()
        if len(cin) > 0:
            r = cin.pop(0)
            lock.release()
        else:
            lock.release()
            time.sleep(SLEEP_TIME)
            continue
        if r > 0:
            # print("< Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), s, r))
            continue
        # print("@ Proc: <%s>, Seq#: %s, ring terminated." % (pid(), s))
        break
    thread.interrupt_main()

def pid():
    return thread.get_ident()

if __name__ == '__main__':
    run_benchmark(int(sys.argv[1]), int(sys.argv[2]))
5.3  ring_no_io_queue.py

#!/Library/Frameworks/Python.framework/Versions/2.5/bin/python
# encoding: utf-8
import sys
import threading, Queue

def run_benchmark(n, m):
    # print(">> Python 2.5.1, stackless 3.1b3 here (N=%d, M=%d)!\n" % (n, m))
    firstP = cin = Queue.Queue()
    seqn = 1
    for s in xrange(1, n):
        cout = Queue.Queue()
        # print("*> s = %d" % (seqn, ))
        t = Loop(seqn, cin, cout)
        t.setDaemon(False)
        t.start()
        cin = cout
        seqn = s + 1
    # print("$> s = %d" % (seqn, ))
    t = MLoop(seqn, cin)
    t.setDaemon(False)
    t.start()
    for r in xrange(m-1, -1, -1):
        # print("+ sending Msg# %d" % (r, ))
        firstP.put(r)

class Loop(threading.Thread):
    def __init__(self, s, cin, cout):
        threading.Thread.__init__(self)
        self.cin = cin
        self.cout = cout
        self.s = s
    def run(self):
        while True:
            r = self.cin.get()
            self.cout.put(r)
            if r > 0:
                # print(": Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), self.s, r))
                continue
            # print("* Proc: <%s>, Seq#: %s, Msg#: terminate!" % (pid(), self.s))
            break

class MLoop(threading.Thread):
    def __init__(self, s, cin):
        threading.Thread.__init__(self)
        self.cin = cin
        self.s = s
    def run(self):
        while True:
            r = self.cin.get()
            if r > 0:
                # print("< Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), self.s, r))
                continue
            # print("@ Proc: <%s>, Seq#: %s, ring terminated." % (pid(), self.s))
            break

def pid():
    return threading.currentThread()

if __name__ == '__main__':
    run_benchmark(int(sys.argv[1]), int(sys.argv[2]))
5.4  ring_no_io_proc.py

#!/Library/Frameworks/Python.framework/Versions/2.5/bin/python
# encoding: utf-8
import sys
import processing, Queue

def run_benchmark(n, m):
    # print(">> Python 2.5.1, stackless 3.1b3 here (N=%d, M=%d)!\n" % (n, m))
    firstP = cin = processing.Queue()
    seqn = 1
    for s in xrange(1, n):
        cout = processing.Queue()
        # print("*> s = %d" % (seqn, ))
        p = processing.Process(target=loop, args=[seqn, cin, cout])
        p.start()
        cin = cout
        seqn = s + 1
    # print("$> s = %d" % (seqn, ))
    p = processing.Process(target=mloop, args=[seqn, cin])
    p.start()
    for r in xrange(m-1, -1, -1):
        # print("+ sending Msg# %d" % (r, ))
        firstP.put(r)
    p.join()

def loop(s, cin, cout):
    while True:
        r = cin.get()
        cout.put(r)
        if r > 0:
            # print(": Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), s, r))
            continue
        # print("* Proc: <%s>, Seq#: %s, Msg#: terminate!" % (pid(), s))
        break

def mloop(s, cin):
    while True:
        r = cin.get()
        if r > 0:
            # print("< Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), s, r))
            continue
        # print("@ Proc: <%s>, Seq#: %s, ring terminated." % (pid(), s))
        break

def pid():
    return processing.currentProcess()

if __name__ == '__main__':
    run_benchmark(int(sys.argv[1]), int(sys.argv[2]))
5.5  ring_no_io_greenlet.py

#!/Library/Frameworks/Python.framework/Versions/2.5/bin/python
# encoding: utf-8
import sys
from py.magic import greenlet

def run_benchmark(n, m):
    # print(">> Python 2.5.1, stackless 3.1b3 here (N=%d, M=%d)!\n" % (n, m))
    glets = [greenlet.getcurrent()]
    seqn = 1
    for s in xrange(1, n):
        glets.append(greenlet(loop))
        # print("*> s = %d" % (seqn, ))
        seqn = s + 1
    glets.append(greenlet(mloop))
    # print("$> s = %d" % (seqn, ))
    glets[-1].switch(seqn, glets)
    for r in xrange(m-1, -1, -1):
        # print("+ sending Msg# %d" % (r, ))
        glets[1].switch(r)

def loop(s, glets):
    previous = glets[s - 1]
    next = glets[s + 1]
    r = previous.switch(s - 1, glets)   # bootstrap the previous greenlet in the chain
    r = previous.switch()               # then wait for the first message
    while True:
        if r > 0:
            # print(": Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid("loop", s), s, r))
            next.switch(r)
            r = previous.switch()
        else:
            # print("* Proc: <%s>, Seq#: %s, Msg#: terminate!" % (pid("loop", s), s))
            next.switch(r)
            break

def mloop(s, glets):
    previous = glets[s - 1]
    r = previous.switch(s - 1, glets)   # bootstrap the whole chain
    while True:
        if r > 0:
            # print("< Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid("mloop", s), s, r))
            r = previous.switch()
        else:
            # print("@ Proc: <%s>, Seq#: %s, ring terminated." % (pid("mloop", s), s))
            break

def pid(func, s):
    return "<%s(Greenlet-%d, started)>" % (func, s)

if __name__ == '__main__':
    run_benchmark(int(sys.argv[1]), int(sys.argv[2]))
5.6  ring_no_io_eventlet.py

#!/Library/Frameworks/Python.framework/Versions/2.5/bin/python
# encoding: utf-8
import sys
import eventlet

def run_benchmark(n, m):
    # print(">> Python 2.5.1, stackless 3.1b3 here (N=%d, M=%d)!\n" % (n, m))
    firstP = cin = eventlet.Queue()
    seqn = 1
    for s in xrange(1, n):
        cout = eventlet.Queue()
        # print("*> s = %d" % (seqn, ))
        eventlet.spawn_n(loop, seqn, cin, cout)
        cin = cout
        seqn = s + 1
    # print("$> s = %d" % (seqn, ))
    for r in xrange(m-1, -1, -1):
        # print("+ sending Msg# %d" % (r, ))
        firstP.put(r)
    mloop(seqn, cin)

def loop(s, cin, cout):
    while True:
        r = cin.get()
        cout.put(r)
        if r > 0:
            # print(": Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), s, r))
            continue
        # print("* Proc: <%s>, Seq#: %s, Msg#: terminate!" % (pid(), s))
        break

def mloop(s, cin):
    while True:
        r = cin.get()
        if r > 0:
            # print("< Proc: <%s>, Seq#: %s, Msg#: %s .." % (pid(), s, r))
            continue
        # print("@ Proc: <%s>, Seq#: %s, ring terminated." % (pid(), s))
        break

def pid():
    return eventlet.greenthread.getcurrent()

if __name__ == '__main__':
    run_benchmark(int(sys.argv[1]), int(sys.argv[2]))
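The queue-based ring (5.3) translates almost line for line to modern Python, where the modules are named `queue` and `threading`. Below is a hedged Python 3 re-implementation for readers who want to run the idea today; it is not part of the original benchmark suite, and the main thread plays the `mloop` role:

```python
import queue
import threading

def forward(cin, cout):
    # Relay each token to the next node; stop after forwarding token 0.
    while True:
        r = cin.get()
        cout.put(r)
        if r == 0:
            break

def ring(n, m):
    """Send tokens m-1 .. 0 through a ring of n-1 relay threads;
    return how many tokens reach the last node."""
    first = cin = queue.Queue()
    threads = []
    for _ in range(n - 1):
        cout = queue.Queue()
        t = threading.Thread(target=forward, args=(cin, cout))
        t.start()
        threads.append(t)
        cin = cout
    for r in range(m - 1, -1, -1):
        first.put(r)
    received = 0
    while True:                 # the main thread acts as mloop
        r = cin.get()
        received += 1
        if r == 0:
            break
    for t in threads:
        t.join()
    return received

print(ring(10, 5))  # 5
```

Timing `ring(100, 1000)` against the Python 2 variants above gives a rough feel for how much of the old benchmark's cost was scheduling overhead rather than queue traffic.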
Python back-end development: high-concurrency async with uwsgi + web.py + gevent

Why web.py? There are plenty of Python web frameworks (webpy, flask, bottle and so on), so why did we choose webpy? I thought about it for a long time without reaching a conclusion. If forced to explain, I can think of two reasons: first, a sibling project team was using webpy, so our team simply took it over and used it; second, I probably did not know other frameworks existed at the time, since I had just started working and my knowledge was limited. In any case, webpy is pleasant to use: all API URLs and handlers are mapped in a single file, so it is easy to find which handler serves which API. (One of webpy's authors was Aaron Swartz, a brilliant young man who died far too young.)

wsgi is a term you run into constantly in Python development. (I had always just used it without thinking; writing this post is a good chance to study the details.)

The wsgi protocol

The official WSGI document is PEP 3333, http://www.python.org/dev/peps/pep-3333/. WSGI is the Python Web Server Gateway Interface, and its job is to translate between the two sides of a web stack. WSGI is a bridge: on one side sits the web server, on the other the user's application (the wsgi app). The bridge itself is very simple, and sometimes other bridges (wsgi middleware) help with the processing. (The original post has a diagram here illustrating these wsgi bridging relationships.)
A simple WSGI application:

HELLO_WORLD = 'Hello world!\n'   # the response body

def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [HELLO_WORLD]
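Because a WSGI app is just a callable, you can exercise it without any server at all. The sketch below is my own test harness, not from the original post: it builds a minimal environ with the stdlib's `wsgiref.util.setup_testing_defaults` and passes a fake `start_response` to capture what the app reports. The app is repeated here in Python 3 form (bytes body) so the block is self-contained:

```python
from wsgiref.util import setup_testing_defaults

HELLO_WORLD = b"Hello world!\n"

def simple_app(environ, start_response):
    """Simplest possible application object"""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [HELLO_WORLD]

captured = {}

def fake_start_response(status, headers, exc_info=None):
    # A stand-in for the server's start_response callable:
    # it just records what the app reported.
    captured['status'] = status
    captured['headers'] = headers

environ = {}
setup_testing_defaults(environ)   # fills in a minimal valid WSGI environ
body = b"".join(simple_app(environ, fake_start_response))
print(captured['status'], body)   # 200 OK b'Hello world!\n'
```

This is essentially what every WSGI server does on each request: build environ, call the app, and combine the reported status/headers with the iterated body.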
This is the simplest possible WSGI app. So what are the two parameters, environ and start_response?

environ is a dict of environment variables describing the HTTP request; see https://www.python.org/dev/peps/pep-3333/#id24. (To make this concrete, here is an example: the values that uWSGI passes to a wsgi app.)

{
    'wsgi.multiprocess': True,
    'SCRIPT_NAME': '',
    'REQUEST_METHOD': 'GET',
    'UWSGI_ROUTER': 'http',
    'SERVER_PROTOCOL': 'HTTP/1.1',
    'QUERY_STRING': '',
    'x-wsgiorg.fdevent.readable': <built-in function uwsgi_eventfd_read>,
    'HTTP_USER_AGENT': 'curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.7.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2',
    'SERVER_NAME': 'localhost.localdomain',
    'REMOTE_ADDR': '127.0.0.1',
    'wsgi.url_scheme': 'http',
    'SERVER_PORT': '7012',
    'uwsgi.node': 'localhost.localdomain',
    'uwsgi.core': 1023,
    'x-wsgiorg.fdevent.timeout': None,
    'wsgi.input': <uwsgi._Input object at 0x7f287dc81e88>,
    'HTTP_HOST': '127.0.0.1:7012',
    'wsgi.multithread': False,
    'REQUEST_URI': '/index.html',
    'HTTP_ACCEPT': '*/*',
    'wsgi.version': (1, 0),
    'x-wsgiorg.fdevent.writable': <built-in function uwsgi_eventfd_write>,
    'wsgi.run_once': False,
    'wsgi.errors': <open file 'wsgi_errors', mode 'w' at 0x2fd46f0>,
    'REMOTE_PORT': '56294',
    'uwsgi.version': '1.9.10',
    'wsgi.file_wrapper': <built-in function uwsgi_sendfile>,
    'PATH_INFO': '/index.html'
}
start_response is a callable (see https://www.python.org/dev/peps/pep-3333/#id26) with the signature start_response(status, response_headers, exc_info=None). Its job is to set the HTTP status code (e.g. 200 OK) and the response headers.

The WSGI server

A wsgi server exists to provide the environment a WSGI app runs in: passing in the right arguments, collecting the return value correctly, and finally sending the result back to the client. The rough workflow is: take the client's request information, package it into the environ parameter, call the app, and return the app's result to the client.

web.py ships with a built-in CherryPy server. At runtime CherryPy uses a multi-threaded model: the main thread accepts client requests and puts them into a request queue, and N worker threads take requests off the queue, process them, and send the results back to the client. The built-in server looks intended for development and debugging, though, because webpy does not expose its tuning parameters (such as the number of threads), so for efficiently hosting a WSGI app this default container is not recommended. That is probably also why we use uWSGI: several APIs in our production system receive heavy traffic, currently about 2.5 million requests/day. (For services with modest load, the default CherryPy server may well be enough.)
As we know, Python has a big lock, the GIL, which degrades multiple threads into serial execution, so a multi-threaded Python process cannot fully use multi-core CPU resources. For Python, then, a multi-process deployment is usually the better way to exploit multiple cores.
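The multi-process point is easy to demonstrate with the stdlib `multiprocessing` module (a Python 3 sketch of mine, not from the original post): each worker process has its own interpreter and its own GIL, so CPU-bound work can genuinely run in parallel:

```python
import multiprocessing

def square(x):
    # CPU-bound work; each call may run in a different worker process
    return x * x

def run_pool():
    # Four worker processes, each with its own interpreter and GIL
    with multiprocessing.Pool(4) as pool:
        return pool.map(square, range(10))

if __name__ == '__main__':
    print(run_pool())  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

uWSGI's worker model applies the same idea to WSGI apps, with the master process handing requests to the workers instead of a `Pool.map` call.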
uWSGI is exactly such a project. It runs WSGI apps in multiple processes, in a "1 master process + N worker processes (N*m threads)" model: the master accepts client requests and forwards them to the workers, so the workers ultimately handle the requests. This makes it straightforward to deploy a WSGI app as multiple processes. (The original post includes a flow chart here of how uwsgi serves a client request.)

Note that when the master receives a client request it dispatches it to the workers round-robin, so the load across the worker processes stays fairly balanced. (This matches what I saw in testing; some day I should dig into the uwsgi source to confirm it. TODO) The details of using uWSGI are not the focus here, so I won't go into them.

(Seeing uWSGI's multi-process model reminded me of Nginx, which is also a multi-process design, which is interesting in its own right; it led me to the thundering-herd problem. But I digress.)
Consider a scenario: a client sends an HTTP request to serverM (uwsgi); to handle it, serverM must call another server, serverN, and only returns a result to the client once serverN has responded, i.e. the wsgi app is synchronous. If the call to serverN is slow and client requests are numerous, all (N*m) threads get tied up, so large-scale concurrency is capped at (N*m).

If you hit this situation, what can you do?

(1) Increase N, the number of workers. Processes cost memory, and if there are too many active processes, switching between them costs system resources, so a bigger N is not always better. A common rule of thumb is to set the process count to about twice the number of CPUs.

(2) Increase m, the number of threads per worker. How large can that go? Thread stacks consume memory, so the thread count is bounded by system settings (virtual memory and stack size). Is a very large thread count harmful? (It certainly is, but I can't give a full answer. TODO)

From these high-concurrency requirements I learned about the C10K problem, and from there about I/O multiplexing with epoll, which avoids blocking on any single socket. libevent, for example, wraps each platform's efficient I/O multiplexing mechanism and uses epoll on Linux. But rather than discuss epoll or libevent usage here, let's introduce the gevent module.
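The stdlib `selectors` module wraps exactly this kind of mechanism (epoll on Linux, kqueue on BSD/macOS). Here is a minimal sketch of readiness-based I/O, my own example rather than anything from the post, using a `socketpair` as a stand-in for a real client connection:

```python
import selectors
import socket

sel = selectors.DefaultSelector()   # epoll on Linux, kqueue on macOS/BSD
a, b = socket.socketpair()          # stand-in for a connected client/server pair
b.setblocking(False)
sel.register(b, selectors.EVENT_READ)

a.send(b"ping")
events = sel.select(timeout=1)      # wakes only for sockets that have data
data = None
for key, mask in events:
    data = key.fileobj.recv(1024)   # guaranteed not to block here
print(data)  # b'ping'

sel.close()
a.close()
b.close()
```

One thread can register thousands of sockets this way and only ever touch the ones that are ready, which is the core trick behind both libevent and gevent.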
gevent coroutines

gevent's API looks much like the thread API, but threads are scheduled by the operating system, whereas gevent provides user-space "threads", also called coroutines. The benefit is that you never sit idle on I/O: when an I/O call happens, gevent switches to another greenlet, so the CPU stays busy while socket data is pending.

Internally, gevent works like this:

1. For coroutine switching, gevent uses the greenlet project. A greenlet is essentially a function plus its saved context (i.e. its stack). Greenlet switches are controlled by the application itself, which makes them ideal for I/O-bound programs: switch whenever I/O happens, and the CPU stays fully used.

2. To monitor socket events, gevent uses libevent, essentially a higher-level epoll.

3. Python has the notion of a monkey patch: in a Python process, functions are objects that live in the process's global dictionaries, so a developer can swap those objects out to change the behavior of standard-library functions without modifying existing application code. gevent ships such a monkey patch: gevent.monkey.patch_all() replaces the blocking modules in the standard library, so an application can enjoy gevent's advantages without being modified. (Very convenient.)
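Monkey patching itself is nothing more than rebinding a module attribute. A toy version of the idea, requiring no gevent (the name `instrumented_sleep` is mine, purely illustrative):

```python
import time

calls = []
_original_sleep = time.sleep

def instrumented_sleep(seconds):
    # The patched version records the call, then delegates to the original.
    calls.append(seconds)
    _original_sleep(seconds)

time.sleep = instrumented_sleep   # rebind the name in the time module
time.sleep(0.01)                  # existing code calls it unchanged
time.sleep = _original_sleep      # restore afterwards
print(calls)  # [0.01]
```

gevent.monkey.patch_all() does the same kind of rebinding, but swaps in cooperative implementations of socket, time.sleep and friends instead of an instrumented wrapper.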
Pitfalls to watch for when using gevent

1. Accidentally pulling in blocking modules

gevent's monkey patch replaces the blocking modules in the standard library, but sometimes we "accidentally" import modules that block anyway, such as MySQL-Python or pylibmc. Both are C extensions that do their socket I/O through the underlying C socket API, outside gevent's reach, so calls to mysql or memcached through them degrade into blocking calls.

Consider this scenario: in one gevent process, you build a pool of 10 mysql connections with MySQL-Python and, under high concurrency, expect those 10 connections to serve 10 mysql requests simultaneously. Does it work that way? It does not: conn.query() blocks the whole process, which means there can never be two concurrent mysql accesses, so the concurrency disappears. If mysql responds slowly, the whole process is effectively hung (at that point a multi-threaded deployment would actually be better, since with threads, one thread hanging still leaves the others a chance to run).

The fix: in scenarios where concurrency efficiency matters, avoid blocking modules and prefer pure-Python implementations, e.g. MySQL-Python -> pymysql, pylibmc -> python-memcached (that pure-Python memcached client may be incomplete; for example, consistent hashing is still unimplemented).

2. gevent switches greenlets on I/O. If one greenlet does heavy computation (in the extreme, an infinite loop), the other greenlets may never get a chance to run; if one gevent process runs several tasks and one of them computes too much, the rest suffer. For example, I once had a process with a producer task (computing statistics into memory) and a consumer task (writing the results to disk). With large data sets the producer hogged the CPU and the consumer could not flush results to disk in time: production outran consumption, memory use kept growing, at one point reaching 2 GB. So divide tasks across processes sensibly, according to whether they are I/O-bound or CPU-bound.
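For the producer/consumer problem just described, one standard fix is a bounded queue: once the buffer is full, put() blocks and the producer is forced to wait for the consumer (backpressure), which caps memory use. A stdlib threading sketch of the idea (not the original gevent code):

```python
import queue
import threading

results = []
q = queue.Queue(maxsize=10)   # bounded: the producer blocks when the consumer lags

def producer():
    for i in range(100):
        q.put(i)              # blocks whenever 10 items are already pending
    q.put(None)               # sentinel: no more data

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item)  # stand-in for the slow "write to disk" step

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
t1.join()
t2.join()
print(len(results))  # 100
```

gevent's own gevent.queue.Queue accepts the same kind of maxsize bound, so the pattern carries over directly to the greenlet case.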
https://www.python.org/dev/peps/pep-3333/
http://www.oschina.net/question/12_26400
/articles/2aIZZb
The above is a summary of my work experience. Only while writing it up did I realize that some of these topics I understand only superficially, so this article will need further refinement.