dispatch_async(3) BSD Library Functions Manual dispatch_async(3)
NAME
dispatch_async, dispatch_sync -- schedule blocks for execution
SYNOPSIS
#include <dispatch/dispatch.h>
void
dispatch_async(dispatch_queue_t queue, void (^block)(void));
void
dispatch_sync(dispatch_queue_t queue, void (^block)(void));
void
dispatch_async_f(dispatch_queue_t queue, void *context,
        void (*function)(void *));
void
dispatch_sync_f(dispatch_queue_t queue, void *context,
        void (*function)(void *));
DESCRIPTION
The dispatch_async() and dispatch_sync() functions schedule blocks for
concurrent execution within the dispatch(3) framework. Blocks are submit-
ted to a queue which dictates the policy for their execution. See
dispatch_queue_create(3) for more information about creating dispatch
queues.
These functions support efficient temporal synchronization, background
concurrency and data-level concurrency. These same functions can also be
used for efficient notification of the completion of asynchronous blocks
(a.k.a. callbacks).
TEMPORAL SYNCHRONIZATION
Synchronization is often required when multiple threads of execution
access shared data concurrently. The simplest form of synchronization is
mutual-exclusion (a lock), whereby different subsystems execute concur-
rently until a shared critical section is entered. In the pthread(3) fam-
ily of procedures, temporal synchronization is accomplished like so:
int r = pthread_mutex_lock(&my_lock);
assert(r == 0);
// critical section
r = pthread_mutex_unlock(&my_lock);
assert(r == 0);
The dispatch_sync() function may be used with a serial queue to accom-
plish the same style of synchronization. For example:
dispatch_sync(my_queue, ^{
        // critical section
});
In addition to providing a more concise expression of synchronization,
this approach is less error prone as the critical section cannot be acci-
dentally left without restoring the queue to a reentrant state.
The dispatch_async() function may be used to implement deferred critical
sections when the result of the block is not needed locally. Deferred
critical sections have the same synchronization properties as the above
code, but are non-blocking and therefore more efficient to perform. For
example:
dispatch_async(my_queue, ^{
        // critical section
});
BACKGROUND CONCURRENCY
The dispatch_async() function may be used to execute trivial background
tasks on a global concurrent queue. For example:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // background operation
});
This approach is an efficient replacement for pthread_create(3).
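For comparison, a minimal sketch of the same pattern using pthread_create(3) follows; the background_operation() function and the thread handling shown here are illustrative only:
static void *
background_operation(void *context)
{
        // background operation
        return NULL;
}

pthread_t tid;
int r = pthread_create(&tid, NULL, background_operation, NULL);
assert(r == 0);
r = pthread_detach(tid);
assert(r == 0);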
COMPLETION CALLBACKS
Completion callbacks can be accomplished via nested calls to the
dispatch_async() function. It is important to remember to retain the des-
tination queue before the first call to dispatch_async(), and to release
that queue at the end of the completion callback to ensure the destina-
tion queue is not deallocated while the completion callback is pending.
For example:
void
async_read(object_t obj,
        void *where, size_t bytes,
        dispatch_queue_t destination_queue,
        void (^reply_block)(ssize_t r, int err))
{
        // There are better ways of doing async I/O.
        // This is just an example of nested blocks.
        dispatch_retain(destination_queue);
        dispatch_async(obj->queue, ^{
                ssize_t r = read(obj->fd, where, bytes);
                int err = errno;
                dispatch_async(destination_queue, ^{
                        reply_block(r, err);
                });
                dispatch_release(destination_queue);
        });
}
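A hypothetical call site for async_read() might look like the following; the buf allocation, the bytes_wanted count, and the process_bytes() consumer are illustrative only:
char *buf = malloc(bytes_wanted);
assert(buf);
async_read(obj, buf, bytes_wanted, dispatch_get_main_queue(),
        ^(ssize_t r, int err) {
                if (r >= 0) {
                        process_bytes(buf, r); // hypothetical consumer of the data
                } else {
                        fprintf(stderr, "read failed: %s\n", strerror(err));
                }
                free(buf);
        });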
RECURSIVE LOCKS
While dispatch_sync() can replace a lock, it cannot replace a recursive
lock. Unlike locks, queues support both asynchronous and synchronous
operations, and those operations are ordered by definition. A recursive
call to dispatch_sync() causes a simple deadlock as the currently execut-
ing block waits for the next block to complete, but the next block will
not start until the currently running block completes.
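For example, the following nested call never returns when my_queue is a serial queue:
dispatch_sync(my_queue, ^{
        // The outer block now occupies my_queue...
        dispatch_sync(my_queue, ^{
                // ...and waits here for a block that can never start.
        });
});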
While designing the dispatch framework, we studied recursive locks. We
found that the vast majority of recursive locks are deployed retroac-
tively when ill-defined lock hierarchies are discovered. As a conse-
quence, the adoption of recursive locks often mutates obvious bugs into
obscure ones. This study also revealed an insight: if reentrancy is
unavoidable, then reader/writer locks are preferable to recursive locks.
Disciplined use of reader/writer locks enables reentrancy only when reen-
trancy is safe (the "read" side of the lock).
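For example, assuming a pthread_rwlock_t named my_rwlock that has already been initialized, the read side may be held by many readers at once while the write side remains exclusive:
// Read side: shared; safe for reentrant readers.
int r = pthread_rwlock_rdlock(&my_rwlock);
assert(r == 0);
// read shared data
r = pthread_rwlock_unlock(&my_rwlock);
assert(r == 0);

// Write side: exclusive of all other readers and writers.
r = pthread_rwlock_wrlock(&my_rwlock);
assert(r == 0);
// modify shared data
r = pthread_rwlock_unlock(&my_rwlock);
assert(r == 0);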
Nevertheless, if it is absolutely necessary, what follows is an imperfect
way of implementing recursive locks using the dispatch framework:
void
sloppy_lock(object_t object, void (^block)(void))
{
        if (object->owner == pthread_self()) {
                return block();
        }
        dispatch_sync(object->queue, ^{
                object->owner = pthread_self();
                block();
                object->owner = NULL;
        });
}
The above example does not solve the case where queue A runs on thread X
which calls dispatch_sync() against queue B which runs on thread Y which
recursively calls dispatch_sync() against queue A, which deadlocks both
examples. This is bug-for-bug compatible with nontrivial pthread usage.
In fact, nontrivial reentrancy is impossible to support in recursive
locks once the ultimate level of reentrancy is deployed (IPC or RPC).
IMPLIED REFERENCES
Synchronous functions within the dispatch framework hold an implied ref-
erence on the target queue. In other words, the synchronous function bor-
rows the reference of the calling function (this is valid because the
calling function is blocked waiting for the result of the synchronous
function, and therefore cannot modify the reference count of the target
queue until after the synchronous function has returned). For example:
queue = dispatch_queue_create("com.example.queue", NULL);
assert(queue);
dispatch_sync(queue, ^{
        do_something();
        //dispatch_release(queue); // NOT SAFE -- dispatch_sync() is still using 'queue'
});
dispatch_release(queue); // SAFELY balanced outside of the block provided to dispatch_sync()
This is in contrast to asynchronous functions which must retain both the
block and target queue for the duration of the asynchronous operation (as
the calling function may immediately release its interest in these
objects).
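For example, mirroring the snippet above, a reference held by the caller may be released immediately after an asynchronous call:
queue = dispatch_queue_create("com.example.queue", NULL);
assert(queue);
dispatch_async(queue, ^{
        do_something();
});
dispatch_release(queue); // SAFE -- dispatch_async() retains 'queue' until the block has run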
FUNDAMENTALS
Conceptually, dispatch_sync() is a convenient wrapper around
dispatch_async() with the addition of a semaphore to wait for completion
of the block, and a wrapper around the block to signal its completion.
See dispatch_semaphore_create(3) for more information about dispatch sem-
aphores. The actual implementation of the dispatch_sync() function may be
optimized and differ from the above description.
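The following is a minimal sketch of that conceptual model, not the actual implementation; the conceptual_dispatch_sync() name is illustrative only:
void
conceptual_dispatch_sync(dispatch_queue_t queue, void (^block)(void))
{
        dispatch_semaphore_t sema = dispatch_semaphore_create(0);
        assert(sema);
        dispatch_async(queue, ^{
                block();
                dispatch_semaphore_signal(sema); // signal completion of the block
        });
        dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER); // wait for completion
        dispatch_release(sema);
}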
The dispatch_async() function is a wrapper around dispatch_async_f().
The application-defined context parameter is passed to the function when
it is invoked on the target queue.
The dispatch_sync() function is a wrapper around dispatch_sync_f(). The
application-defined context parameter is passed to the function when it
is invoked on the target queue.
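For illustration, a sketch of the function-based variant follows; the work_context structure and do_work() function are hypothetical, and malloc(3)/free(3) are used to carry the context across queues:
struct work_context {
        long value;
};

static void
do_work(void *ctx)
{
        struct work_context *context = ctx;
        // perform the work on the target queue using context->value
        free(context); // the function owns the context once it is invoked
}

struct work_context *context = malloc(sizeof(*context));
assert(context);
context->value = 42;
dispatch_async_f(my_queue, context, do_work);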
SEE ALSO
dispatch(3), dispatch_apply(3), dispatch_once(3),
dispatch_queue_create(3), dispatch_semaphore_create(3)
Darwin May 1, 2009 Darwin