I have a large object (a playing field) built from many SQLAlchemy objects in my Python application.
For each simple request (for example: moving players) I must rebuild this object, which is resource-intensive. I can cache a ready object in a global variable.
But that only works if I use a single uWSGI thread.
cache = {}

def application(field_id):
    global cache
    if field_id not in cache:
        cache[field_id] = get_field(field_id)
    field = cache[field_id]
    field.move_player()
I cannot use uwsgi.cache because my object is bigger than 64 KB, and I am not sure it works well for objects that hold database connections.
Another problem also exists: if two requests for the same field arrive at the same time in different processes, I can get a collision.
Thus I want to map requests for the same field to the same worker. In nginx I can use hash $arg_field_id;
But then I would need to create many uWSGI processes, each with its own socket. IMHO that is a bad idea.
Can uWSGI implement this logic? Or
maybe someone knows how I can share objects between processes?
uWSGI caches can be as big as you want, just tune them as documented here:
http://uwsgi-docs.readthedocs.org/en/latest/Caching.html#cache2-options
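For example, a cache sized for items larger than 64 KB could be declared with the cache2 option; the cache name and sizes below are illustrative, not from the original answer:

```ini
; uwsgi.ini (hypothetical values)
; "fields" is an illustrative cache name;
; blocksize raises the per-item limit to 1 MB
cache2 = name=fields,items=100,blocksize=1048576
```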
Btw, it is normal that your approach does not work with multiple processes: a process, by definition, does not share its address space with others. You could use threads (with proper locking in place when you populate the cache), but really, uWSGI caches are way easier (and are shared by processes automatically), and they have a property useful for you:
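The thread-based variant mentioned above could look like this minimal sketch; `get_field` and the dict it returns are stand-ins for the application's own expensive SQLAlchemy build, not part of the original answer:

```python
import threading

cache = {}
cache_lock = threading.Lock()

def get_field(field_id):
    # stand-in for the expensive SQLAlchemy object construction
    return {"id": field_id, "players": []}

def get_cached_field(field_id):
    # check without the lock first, then re-check under the lock,
    # so only one thread ever builds a given field
    field = cache.get(field_id)
    if field is None:
        with cache_lock:
            field = cache.get(field_id)
            if field is None:
                field = get_field(field_id)
                cache[field_id] = field
    return field
```

This only helps within one process (threads = N, processes = 1); separate processes still each build their own copy.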
(a potential race condition case)
thread1: uwsgi.cache_set('foo', big_data)
thread2: uwsgi.cache_set('foo', big_data)
thread3: uwsgi.cache_set('foo', big_data)
Only the first one to acquire the automatic cache lock will create the new object; the other two calls will be no-ops.
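Note that uWSGI cache values are raw byte strings, so a Python object has to be serialized before `uwsgi.cache_set` and deserialized after `uwsgi.cache_get`. A hedged sketch using pickle; since the `uwsgi` module only exists inside a uWSGI worker, a plain dict stands in for the cache here:

```python
import pickle

# stand-in for the real uwsgi.cache_get/cache_set,
# which are only importable inside a uWSGI worker
_fake_cache = {}

def cache_set(key, value):
    _fake_cache[key] = value

def cache_get(key):
    return _fake_cache.get(key)

def store_field(field_id, field):
    # uWSGI cache values must be bytes, so pickle first;
    # SQLAlchemy instances should be detached from their
    # session (e.g. expunged) before pickling
    cache_set(str(field_id), pickle.dumps(field))

def load_field(field_id):
    data = cache_get(str(field_id))
    return pickle.loads(data) if data is not None else None
```

Keep in mind that unpickled SQLAlchemy objects come back detached, so they must be merged into a session again before lazy-loading relationships.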