python - Better solution for setting max threads to hold the main thread?


I have a web server connecting to one of many serverlets. The web server might queue up 40 jobs, each of which can take anywhere from 20 minutes to 30 hours to run.

The web server connects to a serverlet using sockets, and the serverlet runs each job it is sent in a thread.

I want to put a cap on the number of threads (jobs) that can run at once, say 3, and once that limit is reached, hold the main thread. When one of the threads ends, the main thread continues and picks up the next job.

# wait for the thread count to reduce before continuing
while threading.active_count() >= self.max_threads:
    pass

I'm using this loop to make the main thread wait until a free thread is available. It works, but it feels like a quick and dirty solution. I wonder if there might be a better way to do it?

server.py

import socket
import sys
import urllib, urllib2
import threading
import cPickle

from supply import supply


class supply_thread(threading.Thread):

    def __init__(self, _sock):
        threading.Thread.__init__(self)
        self.__socket = _sock

    def run(self):
        data = self.readline()
        self.__socket.close()
        new_supply = supply.supply(data)
        new_supply.run()

    def readline(self):
        """ read the data sent by the webserver and decode it """
        data = self.__socket.recv(1024)
        if data:
            data = cPickle.loads(data)
            return data


class server:

    def __init__(self):
        ## socket vars
        self.__socket = None
        self.host = ''
        self.port = 50007
        self.name = socket.gethostname()

        self.max_jobs = 3

    def listen(self):
        """ listen for a connection from the webserver """
        self.__socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # allows a quick reconnect to the same address
        self.__socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

        self.__socket.bind((self.host, self.port))
        return self.__socket.listen(1)

    def connect(self):
        webserver = self.__socket.accept()[0]
        print 'Connected by', webserver

        new_thread = supply_thread(webserver)
        print 'Starting thread', new_thread.getName()

        new_thread.start()

    def close(self):
        return self.__socket.close()

    def run(self):
        import time

        while True:
            print(sys.version)

            # wait for a connection from the webserver
            self.listen()

            time.sleep(3)

            # let the webserver know I'm available
            self.status(status='available')

            print 'Waiting for a connection...'
            self.connect()

            print 'Threads:', threading.enumerate()
            print 'Thread count:', threading.active_count()

            # wait for the thread count to reduce before continuing
            while threading.active_count() >= self.max_jobs:
                pass

    def status(self, status='available'):
        computer_name = socket.gethostname()
        svcurl = "http://localhost:8000/init/default/server"
        params = {
            'computer_name': computer_name,
            'status': status,
            'max_jobs': self.max_jobs
        }
        svchandle = urllib2.urlopen(svcurl, urllib.urlencode(params))

This sounds like a use case for Celery.

Basically, you create a Celery task in a tasks.py file and call taskname.delay(). That dispatches the task to a Celery worker, which starts working on it as soon as it is ready to accept a task. Here's an example from the docs, and a sketch below of what it might look like here.
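A minimal sketch, assuming a broker running on localhost; the app name, broker URL and task name run_supply are illustrative (not from the question), and the task body just reuses the supply module from the question:

# tasks.py -- illustrative sketch, not the poster's actual code
from celery import Celery

from supply import supply

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def run_supply(data):
    # do the same work that supply_thread.run() does in the question
    new_supply = supply.supply(data)
    new_supply.run()

The socket handler would then call run_supply.delay(data) instead of starting a thread, and return immediately.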

By default, Celery creates a worker whose concurrency is equal to the number of cores in the machine, according to the documentation. You can change that if you need to.
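To get the 3-job cap from the question, a sketch of how the worker might be limited, assuming the tasks.py module above (the old-style setting name CELERYD_CONCURRENCY is an assumption based on the Celery docs of this era):

# started from the shell with: celery -A tasks worker --concurrency=3
# or capped in the app configuration:
app.conf.CELERYD_CONCURRENCY = 3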

Alternatively, you could use queues. Here's an example of how that might look.
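A minimal sketch of the queue approach using only the standard library, assuming Python 2's Queue module and reusing supply from the question; job_queue, worker and NUM_WORKERS are illustrative names:

import threading
import Queue

from supply import supply

NUM_WORKERS = 3            # same cap as max_jobs in the question
job_queue = Queue.Queue()

def worker():
    # each worker blocks on get(), which replaces the busy-wait loop
    while True:
        data = job_queue.get()
        try:
            new_supply = supply.supply(data)
            new_supply.run()
        finally:
            job_queue.task_done()

# start a fixed pool of worker threads once, at startup
for _ in range(NUM_WORKERS):
    t = threading.Thread(target=worker)
    t.setDaemon(True)
    t.start()

The accept loop then only decodes the job and calls job_queue.put(data). If the main thread should block while all workers are busy, create the queue with Queue.Queue(maxsize=NUM_WORKERS) and put() will wait until a slot frees up.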

