Programmer's Python: Async – Asyncio


Asyncio is just one way to do async in Python, but it's an important way. Getting started with asyncio is difficult because of the profusion of coroutines, tasks and futures, and working out how they differ and how each is used.

Find out more in this excerpt from my new book, Programmer's Python: Async.

Programmer's Python:
Async
Threads, processes, asyncio and more

Now available as a printed book: Amazon

Contents

1) A sneak peek into Python

Python Origins, Basic Python, Data Structures, Control Structures – Loops, Whitespace Issues, Conditionals and Indentation, Pattern Matching, Everything Is an Object – References, Functions, Objects and Classes, Inheritance, Basics and Modules, IDEs for Python, Pythonic – The Meta Philosophy, Where Next, Summary.

2) Asynchronous Explained

A Single Thread, Processes, I/O-Bound and CPU-Bound, Threads, Locks, Deadlock, Multi-Threaded Processes, Single-Threaded Asynchronous, Events, Events or Threads, Callback Hell, More Than One CPU – Concurrency, Summary.

3) Process-based parallelism

The Process Class, Daemon Processes, Waiting for Processes, Waiting for the First to Complete, Computing Pi, Fork v Spawn, Forkserver, Controlling the Start Method, Summary.

4) Threads

The Thread Class, Threads and the GIL, Thread Tools, Daemon Threads, Waiting for Threads, Local Variables, Thread-Local Storage, Computing Pi with Multiple Threads, I/O-Bound Threads, Sleep(0), The Timer Object, Summary.

5) Locks and pitfalls

Race Conditions, Hardware Problem or Heisenbug, Locks, Locks and Processes, Deadlock, Context-Managed Locks, Recursive Locks, Signaling, Atomic Operations, Atomic Code, Lock-Free Code, Computing Pi with Locks, Summary.

6) Synchronization

Join, First to Finish, Events, Barrier, The Condition Object, A Global Condition Object, Summary.

7) Data sharing

Queues, Pipes, Thread Queues, Shared Memory, Shared Types, Raw Shared Memory, The Shared Memory Manager, Computing Pi, Summary.

8) Process pool

Waiting for Completion, Computing Pi Using AsyncResult, map_async, starmap_async, Immediate Results – imap, MapReduce, Sharing and Locking, Summary.

9) Process Managers

SyncManager, How Proxies Work, Locks, Computing Pi with a Manager, Custom Managers, Custom Data Types, BaseProxy, Property Proxies, Remote Managers, RPC, Final Thoughts, Summary.

10) Sub-processes

Running a Program, I/O, Popen, Interacting, Non-Blocking Pipe Reads, Using a Child Process, Summary.

11) Futures

Futures, Executors, An I/O-Bound Example, Waiting for Futures, Future Callbacks, Handling Exceptions, Locking and Sharing Data, Process Locking and Sharing, Using an Initializer to Create Shared Globals, Using a Process Manager to Share Resources, Sharing Futures, Futures and Deadlock, Computing Pi with Futures, Process Pools or Synchronous Futures, Summary.

12) Asyncio Essentials (this excerpt is extract 1)

Callbacks, Futures and Awaitables, Coroutines, Await, A Sleeping Wait, Tasks, Execution Order, Tasks and Futures, Running Coroutines, Sequential and Concurrent, Cancelling Tasks, Exception Handling, Shared Variables and Locks, Context Variables, Queues, Summary.

13) Using Asyncio

Streams, Downloading a Web Page, A Server, A Web Server, An SSL Server, Using Streams, Converting Blocking to Non-Blocking, Running in Threads, Why Not Just Use Threads?, CPU-Bound Tasks, Asyncio-Based Modules, Working with Other Event Loops – Tkinter, Sub-Processes, Summary.

14) The Low-Level API

The Event Loop, Using the Loop, Running Tasks in Processes, Computing Pi with Async, Network Functions, Transports and Protocols, A UDP Server, A UDP Client, UDP Broadcast, Sockets, Running the Event Loop, What Makes a Good Asynchronous Operation, Summary.

Appendix I: Python in Visual Studio Code

So far we've looked at processes as a way to speed up CPU-bound programs and threads as a way to speed up I/O-bound programs. In the following chapters the focus changes to using a single thread to speed up I/O-bound operations, using an event queue or some other form of asynchronous programming based on cooperative scheduling. The basic idea is that you can use a single thread more efficiently if you arrange for it to do something else instead of just waiting for I/O to complete. That is, if you have a set of I/O-bound tasks, one thread can manage all of them if you allow it to run other tasks while some are waiting for their I/O to complete.

Some argue that the alternative of allocating n threads, one for each I/O-bound task, is actually slower than sharing a single thread between them all. This is certainly true for Python, where the GIL restricts execution to one thread at a time per interpreter. Even if the GIL is removed in the future, it is likely that one thread handling all of the I/O-bound tasks will still be faster than one thread per task. There are examples of Python asyncio programs coping with thousands of network connections without problems, but the limits in any particular case are obviously task- and machine-dependent.

The key to keeping the thread busy is the event queue. This is a queue of jobs waiting to run, and the scheduler selects one of them to run on the thread. The job uses the thread until it has to wait for something, at which point it releases the thread back to the event queue and another job is selected to run. The job that had to wait is added back to the event queue and gets another chance to run when its wait is over. This way a single thread always has a job to keep it occupied. Only if the queue is empty does the thread have nothing to do, and that is the only time it waits.
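The following toy sketch illustrates this scheduling idea using nothing but plain generators and a deque as the event queue; it is just a model of the mechanism, not how asyncio is implemented or used:

from collections import deque

# each "job" is a generator that yields whenever it would have to wait,
# releasing the single thread back to the scheduler
def job(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                       # "I have to wait", give the thread back

queue = deque([job("A", 2), job("B", 3)])  # jobs waiting to run
while queue:                               # the scheduler
    current = queue.popleft()              # pick a job to run on the thread
    try:
        next(current)                      # run it until it has to wait
        queue.append(current)              # back on the queue for later
    except StopIteration:
        pass                               # the job has finished

Running this interleaves the steps of A and B on the one thread, with each job getting the thread back only after the other has had a turn.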

In the rest of this chapter the focus is on using the asyncio module. This single-threaded multitasking approach requires a different mindset from the earlier approaches using multiple threads or processes, and it not only introduces new techniques but also poses new problems. It is also worth realizing that asyncio focuses on network operations rather than being a general-purpose single-threaded asynchronous facility. In particular, it is not an event-handling system of the kind you might find as part of a typical GUI framework such as Tkinter or Qt. This does not preclude its use as a general-purpose approach to asynchrony, but the main application its creators had in mind is dealing with network connections.

In this introductory account of asyncio we use only the high-level API. This is the part that programmers who use asyncio, rather than extend it, are supposed to restrict themselves to. The deeper low-level API, which is the topic of Chapter 14, should only be needed to build async-based frameworks. Note that many accounts of asyncio were written before the high-level API was complete and therefore tend to use low-level features. Even worse, many examples and tutorials mix high- and low-level functions simply because they haven't caught up with best practice.

In the book but not included in this excerpt:

  • Callbacks, futures and awaitables

Coroutines

The main idea in sharing a single thread is the event loop, which is a basic cooperative scheduler. This is just a queue of tasks that are run on the thread whenever possible. It depends, however, on the idea that a function can be suspended where it has to wait and later restarted from that point. In a multi-threaded environment this is nothing special because the operating system can suspend and restart a thread. In a single-threaded environment the thread has to save the current state of the job, start or resume work on another job, and restore the saved state when it returns to the first job. A function that can be suspended and restarted in this way is generally called a coroutine.

Python originally supported coroutines via generators, yield and yield from. However, support for this approach has been removed as of Python 3.10 and trying to understand coroutines via generators is no longer particularly useful. For the rest of this chapter generator-based coroutines are ignored.

Modern Python coroutines are created using the async keyword:

async def myCo():
    print("Hello Coroutine World")
    return 42

If you call myCo, it does not execute its code; instead it returns a coroutine object that can execute the code. This is very similar to the way a generator function returns a generator object, but you cannot run a coroutine object directly. You have to run it in conjunction with an event loop. To do this you can use low-level functions to create a loop and then submit the coroutine to it, but it is much easier to use the asyncio.run function, which creates and runs the event loop without you having to know anything about it:

import asyncio
async def myCo():
    print("Hello Coroutine World")
    return 42

myCoObject = myCo()
result = asyncio.run(myCoObject)
print(result)

This launches the coroutine object and displays:

Hello Coroutine World
42
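Incidentally, what asyncio.run does for you might, in rough outline, look like the following sketch using the low-level API that is the topic of Chapter 14; it is shown only for comparison and is not needed in what follows:

import asyncio
async def myCo():
    print("Hello Coroutine World")
    return 42

loop = asyncio.new_event_loop()               # create an event loop by hand
try:
    result = loop.run_until_complete(myCo())  # run the coroutine to completion
finally:
    loop.close()                              # the loop must be closed explicitly
print(result)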

Instead of creating a named coroutine object, the asyncio.run call is usually written as a single expression:

result= asyncio.run(myCo())

Also note that you can pass parameters to a coroutine:

import asyncio
async def myCo(myValue):
    print("Hello Coroutine World")
    return myValue
result= asyncio.run(myCo(42))
print(result)

It is also important to realize that asyncio.run runs myCo once and does not return until myCo completes. While myCo is running an event loop is in operation, and if myCo releases the thread the loop runs any queued jobs before returning to myCo. In this sense the asyncio.run call is where the asynchronous part of your program starts, and you can think of it as starting the main asynchronous program.
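As a final illustration, here is a minimal sketch of two coroutines sharing the single thread; it uses await, asyncio.sleep and asyncio.gather, all of which are introduced later in the chapter:

import asyncio

async def worker(name, delay):
    await asyncio.sleep(delay)    # releases the thread back to the event loop
    return name

async def main():
    # while one worker is waiting the other gets the thread,
    # so both complete in about one second rather than two
    return await asyncio.gather(worker("first", 1), worker("second", 1))

print(asyncio.run(main()))        # the asynchronous program starts here

The call to asyncio.run starts main as the main asynchronous program, and the two workers finish in roughly the time of the longest single wait, which is the whole point of sharing the thread.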
