Technically speaking, it is not possible to set global variables the way you are thinking with multiprocessing, because each process is completely independent. Each process basically makes its own copy of __main__ and so has its own copy of the global variables. Thus, counterintuitive as it is, when any process updates a global variable it is only updating its personal copy, and it does not affect any other process's globals. I have run into the same problem often and have four solutions that have worked for me; I will label these Great, Good, Bad, Ugly:
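To see this behavior for yourself, here is a minimal sketch (the variable and function names are my own, purely for illustration): the child process increments its own copy of a global, and the parent's copy never changes.

```python
# Demonstrates that a child process only updates its OWN copy of a global.
import multiprocessing

counter = 0  # a module-level global


def increment():
    global counter
    counter += 1  # this modifies the CHILD's copy only
    print(f"child sees counter = {counter}")


if __name__ == "__main__":
    p = multiprocessing.Process(target=increment)
    p.start()
    p.join()
    # The parent's global is untouched by the child's update.
    print(f"parent still sees counter = {counter}")  # parent still sees 0
```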
1. The Great: use multithreading not multiprocessing:
Processes are all independent from one another and cannot share anything with each other in a nice "direct" way as you are attempting here. Threads, on the other hand, do not make their own copies of __main__ and therefore share all globals. While there are many use cases where the difference between processes and threads really matters for technical reasons, I cannot see anything in your example code that necessitates one over the other.
edit: OP asked a good question in the comments that made me realize I forgot to elaborate on how to do this; the rest of this section ("1. The Great") elaborates on processes vs threads:
In order to do multithreading, you will want to use 'threading' from the standard library. For examples of how to use it, see the docs here: docs.python.org/3/library/threading.html. You often do not need any kind of Pool object, since you just create and start as many threads as you want. You can learn what a process and a thread are here: superfastpython.com/thread-vs-process. As an aside, if you are doing a lot of large computational work, the superfastpython site also has a lot of other interesting resources :)
As an example of how to do multithreading, here is a direct copy of code from the standard library threading page I just linked:
    # The code below shows how to run multiple threads and is
    # copied from docs.python.org/3/library/threading.html
    # in the python 3.13 version
    import threading
    import time

    def crawl(link, delay=3):
        print(f"crawl started for {link}")
        time.sleep(delay)  # Blocking I/O (simulating a network request)
        print(f"crawl ended for {link}")

    links = [
        "https://python.org",
        "https://docs.python.org",
        "https://peps.python.org",
    ]

    # Start threads for each link
    threads = []
    for link in links:
        # Using `args` to pass positional arguments and `kwargs` for keyword arguments
        t = threading.Thread(target=crawl, args=(link,), kwargs={"delay": 2})
        threads.append(t)

    # Start each thread
    for t in threads:
        t.start()

    # Wait for all threads to finish
    for t in threads:
        t.join()
2. The Good: Multiprocessing Manager:
A multiprocessing Manager is basically an overseer process that holds objects on behalf of all your worker processes; the workers read and write those objects through proxies. You can read more about Manager objects in the docs:
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Manager
3. The Bad: Queues and Pipes:
To handle this very common issue, the multiprocessing API offers some solutions, the two most common of which are Queues and Pipes. You can read more about these and their variations (there are a handful of flavors) in the docs. These structures work a bit differently than Managers:
https://docs.python.org/3/library/multiprocessing.html#pipes-and-queues
4. The Ugly: Off site dump:
Let me be clear: Managers, Queues, and Pipes are the recommended/pythonic way to pass info between processes. However, at least in my experience, they can be pretty annoying and cumbersome to deal with for small amounts of data. While I called this solution 'The Ugly', I did save the best for last! In this case you just use some space in memory or storage that is not part of the __main__ process, normally a file or a small volatile sqlite table. This one is ugly because it is susceptible to race conditions, whereas Managers, Queues, and Pipes generally are not. However, reading/writing to a file or table is a very agnostic and repeatable way to handle this issue that is extremely easy to reuse. The race-condition issue is pretty minor on modern processors and for a small number of processes and a small amount of shared data; if it is a concern, you can also use sqlite tables with WAL journaling.
Normally, if I need to share data like this I just use multithreading instead. But since I have a large amount of code from doing this before, spinning up a sqlite db is considerably fast (you basically only need to build it out once).
Hope this helps! I am happy to edit this answer with any follow up questions you may have :)