Asynchronous programming has become a common paradigm. Whether or not your software runs on the web, you want it to scale well, to multiple cores or even multiple servers. Asynchronous programming is an important part of building scalable, efficient software.
A fundamental principle of asynchronous programming is that your code doesn't block; at a basic level, that means your code can perform one task while waiting for another task, such as I/O, to complete.
Here’s an example. Suppose you perform a write to a database. Without asynchronous programming, you might just write code that does the write, and the following line executes only after the write is complete, like so:
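Here is a sketch of that synchronous pattern. The `db` object and its `write` method are hypothetical stand-ins for a real database client, stubbed out here so the example runs on its own:

```python
import time

class FakeDatabase:
    """Stand-in for a real database client; write() blocks until done."""
    def write(self, key, value):
        time.sleep(0.1)  # simulate slow I/O
        return "OK"

db = FakeDatabase()

db.write("user:1", {"name": "Alice"})  # blocks until the write completes
print("Write finished")                # runs only after the write is done
```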
With synchronous programming, the print line doesn’t run until db.write is finished. That means that when the second line runs, you can expect that the data has been written to the database, and any response from the database is available. But what happens while the write is actually taking place? On an old single-core, single-threaded machine, the processor might well have been fully occupied by the write. But on today’s multicore computers, which often support two hardware threads per core, many cores may well sit idle, doing nothing while the write takes place.
Why not take advantage of that time? With a language such as C, you would need to spawn a thread to do so. But with the help of modern languages and libraries, you don’t need to worry about threads, as that’s handled for you; you just write the code.
Python supports such asynchronous code. In an asynchronous approach, the second line in the above code will execute immediately after the write is initiated, and possibly (even likely) before the write is complete.
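The following is a sketch of the asynchronous counterpart using asyncio from the standard library. The database write is again simulated, this time with asyncio.sleep standing in for the I/O, so the example is self-contained:

```python
import asyncio

async def db_write(key, value):
    """Simulated non-blocking database write."""
    await asyncio.sleep(0.1)  # stands in for slow I/O
    return "OK"

async def main():
    # Schedule the write, but don't wait for it to finish.
    task = asyncio.create_task(db_write("user:1", {"name": "Alice"}))
    print("This line runs before the write completes")
    await task  # eventually we do wait, to collect the result
    print("Write finished")

asyncio.run(main())
```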
Of course, going down this route requires some care on your part. For example, if you’re doing a read from a database, with synchronous programming, you can do the read on one line and, in the following line, assume that either the data has been read or an error occurred. But with asynchronous programming, you can’t make that assumption. Instead, you have to provide a separate function that runs after the database read completes. Meanwhile, the lines following the database read may run before the read is complete.
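One way to express “run this function after the read finishes” with asyncio is to attach a done-callback to the task. The read below is simulated with asyncio.sleep; in real code it would be a call into an async database driver:

```python
import asyncio

async def db_read(key):
    """Simulated non-blocking database read."""
    await asyncio.sleep(0.1)
    return {"name": "Alice"}

def on_read_done(task):
    # Runs only once the read has completed (or failed).
    print("Read result:", task.result())

async def main():
    task = asyncio.create_task(db_read("user:1"))
    task.add_done_callback(on_read_done)
    print("This may print before the read completes")
    await task

asyncio.run(main())
```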
In other words, you’re now dealing with two separate sequences of code, and you can’t be sure which one runs first. Instead, you have to write your code without assuming that one runs before the other.
Recent versions of Python provide native support for such programming techniques. But even then, it can get complex. That’s where libraries and frameworks can help. This is especially true in the case of web development.
When a web request comes in, you might have to do a database lookup to authenticate a user, and then a database lookup to get the user’s data. While such work is taking place, you could have another request come in. With asynchronous programming, you can handle the next request before the first request is finished.
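As a toy illustration of that overlap (not any particular web framework’s API), the two simulated request handlers below run concurrently; each “database lookup” is just an asyncio.sleep, so the total time is roughly that of the slower request, not the sum of both:

```python
import asyncio

async def lookup(what, delay):
    await asyncio.sleep(delay)  # pretend this is a database query
    return f"{what} done"

async def handle_request(name, delay):
    auth = await lookup(f"{name} auth", delay)   # authenticate the user
    data = await lookup(f"{name} data", delay)   # fetch the user's data
    return [auth, data]

async def main():
    # Both requests are in flight at once.
    results = await asyncio.gather(
        handle_request("request-1", 0.1),
        handle_request("request-2", 0.1),
    )
    print(results)

asyncio.run(main())
```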
The key to asynchronous programming and frameworks is understanding that the framework includes an underlying mechanism whereby chunks of execution get queued up. At the heart of the framework is what’s called an event loop. When you provide code that should run in response to an I/O event, the framework registers that code, but runs it only after the I/O operation completes; so it isn’t a strict first-in, first-out queue. Meanwhile, other chunks of code can run.
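A small sketch of that interleaving: each of the two coroutines below hands control back to the event loop at every await, so their steps get scheduled and run alternately rather than one coroutine running to completion first:

```python
import asyncio

order = []

async def worker(name):
    for step in range(2):
        order.append(f"{name}-{step}")
        await asyncio.sleep(0)  # yield control back to the event loop

async def main():
    await asyncio.gather(worker("A"), worker("B"))
    print(order)  # the two workers' steps alternate

asyncio.run(main())
```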
As of version 3.4, Python’s standard library includes an asynchronous framework called asyncio. Some web frameworks also support such asynchronous approaches; others do not. We often call the former non-blocking frameworks, and the latter blocking frameworks. If you’re doing web development in Python, Tornado is one of the most popular frameworks that supports asynchronous programming.
Next: What About Non-Web?