- the lambda posted inside asio::round_robin will be shared by different
threads running io_service::run()
- the lambda must remain in, and be executed by, the thread that posted it
- re-introduction of run_svc() method
Given the necessity for fibers::asio::round_robin to share its ready queue
among all threads calling io_service::run() on the same io_service instance,
the capability to permit (or forbid) hopping between threads in the
fibers::asio::yield mechanism is redundant.
If the async operation invoked by the asio async function calls
yield_handler_base::operator() immediately, before control even reaches
async_result_base::get() (the call that would suspend the calling fiber), the
context* bound by yield_handler_base's constructor is still the active()
context. Such a context must not be passed to context::migrate(), and probably
should not be passed to context::set_ready() either.
The whole yield / yield_hop dichotomy becomes much easier to read and explain
if we stick to a single yield_t class. Since the intention is for a consumer
to pass canonical instances rather than manipulating that class in any other
way, we can instantiate it however we want.
This gets rid of lots of ugly redundant boost::asio::handler_type<>
specializations.
Introduce yield_base with subclasses yield_t and yield_hop_t, each with a
canonical instance: yield and yield_hop. yield_base adds an allow_hop_ bool to
communicate the distinction to yield_handler: yield_t sets it false,
yield_hop_t sets it true.
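A minimal sketch of that hierarchy; the allow_hop_ member, the error_code* and
the canonical instances come from the description above, while the constructor
shape and everything else is illustrative:

    #include <boost/system/error_code.hpp>

    namespace boost { namespace fibers { namespace asio {

    class yield_base {
    public:
        explicit yield_base( bool allow_hop) :
            allow_hop_( allow_hop) {
        }

        // error_code bound by the caller; yield_handler_base captures it
        boost::system::error_code   *   ec_{ nullptr };
        // communicates the yield / yield_hop distinction to yield_handler
        bool                            allow_hop_;
    };

    // yield_t forbids migrating the resumed fiber to another thread ...
    class yield_t : public yield_base {
    public:
        yield_t() : yield_base( false) {}
    };

    // ... while yield_hop_t permits it.
    class yield_hop_t : public yield_base {
    public:
        yield_hop_t() : yield_base( true) {}
    };

    // the canonical instances a consumer passes as completion tokens
    static yield_t      yield{};
    static yield_hop_t  yield_hop{};

    }}} // namespace boost::fibers::asio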
Extract common base class yield_handler_base from yield_handler<T> and
yield_handler<void>. In fact yield_handler_base is almost identical to
yield_handler<void>; yield_handler<T> adds value processing.
Instead of capturing just the error_code* from the passed yield_base instance,
capture the whole yield_base: both its error_code* and the new allow_hop_ bool.
yield_handler_base provides an operator()(error_code) method. This operator()
sets a new completed_ bool so that the calling fiber need NOT suspend if the
async operation completes immediately. That bool must be protected by a mutex.
This operator() also avoids migrating a pinned_context, and avoids migrating
when the caller passes plain yield instead of yield_hop.
A new wait() method suspends the calling fiber only if (! completed_).
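A sketch of that contract, using the yield_base sketched earlier; the
std::mutex, the shared_state indirection (asio may copy handlers) and the
elided suspend/wake calls are assumptions, while the captured yield_base, the
completed_ flag and the wait() behavior follow the description above:

    #include <memory>
    #include <mutex>
    #include <boost/fiber/all.hpp>
    #include <boost/system/error_code.hpp>

    namespace boost { namespace fibers { namespace asio { namespace detail {

    class yield_handler_base {
    private:
        // completed_ must be defended with a mutex; kept behind a shared_ptr
        // here because asio may copy the handler (illustrative choice)
        struct shared_state {
            std::mutex  mtx_{};
            bool        completed_{ false };
        };

    public:
        explicit yield_handler_base( yield_base const& yb) :
            ctx_( boost::fibers::context::active() ),  // fiber issuing the async call
            yb_( yb),                                  // whole yield_base: ec_ and allow_hop_
            state_( std::make_shared< shared_state >() ) {
        }

        // Completion callback: the async operation may invoke this from any
        // thread, possibly before control ever reaches async_result_base::get().
        void operator()( boost::system::error_code const& ec) {
            std::unique_lock< std::mutex > lk( state_->mtx_);
            state_->completed_ = true;     // a subsequent wait() need not suspend
            if ( yb_.ec_) {
                * yb_.ec_ = ec;            // report the error through the bound error_code
            }
            lk.unlock();
            // If ctx_ has already suspended in wait(), resume it here; never
            // migrate a pinned_context, and never migrate when the caller
            // passed plain yield rather than yield_hop (yb_.allow_hop_ is false).
        }

        // Reached from async_result_base::get(): suspend the calling fiber
        // only if the async operation has not already completed.
        void wait() {
            std::unique_lock< std::mutex > lk( state_->mtx_);
            if ( ! state_->completed_) {
                // suspend ctx_ until operator() wakes it (mechanism elided)
            }
        }

        boost::fibers::context          *   ctx_;
        yield_base                          yb_;
        std::shared_ptr< shared_state >     state_;
    };

    }}}} // namespace boost::fibers::asio::detail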
Extract common base class async_result_base from async_result<T> and
async_result<void>. In fact async_result_base is almost identical to
async_result<void>; async_result<T> adds value processing.
Add handler_type<> specializations for new yield_base and yield_hop_t
completion token types.
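For illustration, the added specializations and the extracted base might look
roughly like this; only the plain error_code signature is shown, and the
boost::fibers::asio::detail declarations (yield_handler<>, yield_handler_base)
are assumed to be in scope:

    #include <boost/asio.hpp>

    namespace boost { namespace asio {

    // map the new completion-token types onto the existing handler types;
    // the real set also covers the value-carrying signatures
    template< typename ReturnType >
    struct handler_type< fibers::asio::yield_base,
                         ReturnType( boost::system::error_code) > {
        typedef fibers::asio::detail::yield_handler< void >    type;
    };

    template< typename ReturnType >
    struct handler_type< fibers::asio::yield_hop_t,
                         ReturnType( boost::system::error_code) > {
        typedef fibers::asio::detail::yield_handler< void >    type;
    };

    }} // namespace boost::asio

    namespace boost { namespace fibers { namespace asio { namespace detail {

    // common base: async_result<T> adds only value handling on top of this
    class async_result_base {
    public:
        explicit async_result_base( yield_handler_base & h) :
            h_( h) {
        }

        // suspend the calling fiber only if the operation has not yet completed
        void get() {
            h_.wait();
        }

    private:
        yield_handler_base  &   h_;
    };

    }}}} // namespace boost::fibers::asio::detail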
Now that rqueue_ is an STL-compatible container,
priority_scheduler::awakened() can use std::find_if() to search for a context
with a lower priority.
Now that rqueue_ is an intrusive_list, priority_scheduler::property_change()
need not search it: it can simply test with context::ready_is_linked(). Now
that it's a doubly-linked list, we can use context::ready_unlink() to unlink.
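In the shape of the documented priority_scheduler example, the two methods now
read roughly as follows; the surrounding class, rqueue_ and the properties()
accessor inherited from the *_with_properties base are assumed:

    #include <algorithm>
    #include <boost/fiber/all.hpp>

    void priority_scheduler::awakened( boost::fibers::context * ctx,
                                       priority_props & props) noexcept {
        int ctx_priority = props.get_priority();
        // search for the first ready context with LOWER priority than ctx ...
        rqueue_t::iterator i( std::find_if( rqueue_.begin(), rqueue_.end(),
            [ctx_priority,this]( boost::fibers::context & c) {
                return properties( & c).get_priority() < ctx_priority;
            }));
        // ... and insert ctx in front of it; if none is found, i is end() and
        // ctx goes to the back, so an empty queue is not a special case
        rqueue_.insert( i, * ctx);
    }

    void priority_scheduler::property_change( boost::fibers::context * ctx,
                                              priority_props & props) noexcept {
        // no search needed: the intrusive hook records whether ctx is linked
        if ( ! ctx->ready_is_linked() ) {
            return;
        }
        // unlink from the doubly-linked ready queue, then re-insert according
        // to the new priority
        ctx->ready_unlink();
        awakened( ctx, props);
    }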
Now that method parameters have been renamed from 'f' to 'ctx', change all
references in comments accordingly.
Highlight the predicate overload of condition_variable::wait() in the
condition_variable front matter.
Rewrite the explanation of wait()'s Precondition.
Add a condition_variables subsection about no spurious condition_variable
wakeups. Remove "or spuriously" from wakeup conditions in wait*() methods.
First pass through "spurious wakeup" section in Rationale.
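A small usage sketch of the predicate overload (names here are illustrative,
not taken from the library's examples):

    #include <mutex>
    #include <boost/fiber/all.hpp>

    boost::fibers::mutex                mtx;
    boost::fibers::condition_variable   cnd;
    bool                                data_ready = false;

    void consumer() {
        std::unique_lock< boost::fibers::mutex > lk( mtx);
        // equivalent to:  while ( ! data_ready) cnd.wait( lk);
        cnd.wait( lk, [](){ return data_ready; });
        // lk is held again here and data_ready is true
    }

    void producer() {
        {
            std::unique_lock< boost::fibers::mutex > lk( mtx);
            data_ready = true;
        }
        cnd.notify_one();
    }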
First pass through migration.qbk. Use lock_t throughout work_sharing.cpp,
instead of a mix of lock_t, lock_count and explicit
std::unique_lock<std::mutex> declarations. Unify the treatment of the main and
dispatcher fibers.
Clarify thread-safety requirements on sched_algorithm::notify() and
suspend_until().
Clarify disable_interruption when rethrowing fiber_interrupted.
Consolidate future<T>::get(): returns T whether T is R, R& or void.
Mention nesting of disable_interruption (which matters) versus nesting of
restore_interruption (which doesn't). Mention that a disable_interruption
constructed within the scope of another disable_interruption is a no-op, both
itself and when passed to restore_interruption.
When packaged_task::operator()() stores a value or an exception, document it
as happening "as if" by promise::set_value() or set_exception(): the shared
state is made ready. Similarly for ~packaged_task() and ~promise() setting
broken_promise.
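A small illustration of that wording, assuming the std-like API
(packaged_task, promise, future_error reporting broken_promise):

    #include <cassert>
    #include <boost/fiber/all.hpp>

    void example() {
        // invoking the packaged_task makes the shared state ready, "as if" by
        // promise::set_value()
        boost::fibers::packaged_task< int() > pt( [](){ return 42; });
        boost::fibers::future< int > f( pt.get_future() );
        pt();
        assert( 42 == f.get() );

        // destroying a promise that never received a value or exception makes
        // the shared state ready with a broken_promise error
        boost::fibers::future< int > g;
        {
            boost::fibers::promise< int > p;
            g = p.get_future();
        }   // ~promise() without set_value()/set_exception()
        try {
            g.get();
        } catch ( boost::fibers::future_error const& e) {
            // e.code() reports broken_promise
        }
    }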
Sprinkle links to the Allocator concept, std::allocator and
std::allocator_arg_t where referenced. Similarly for StackAllocator.
Add more cross-reference links where Fiber classes and methods are mentioned.
Also things like std::unique_lock and std::mutex.
Clarify the error condition for value_pop() when the channel has been
close()d.
Since fiber_specific_ptr::release() does not invoke cleanup, it should not
throw an exception raised during cleanup.
Note effect of BOOST_USE_SEGMENTED_STACKS if StackAllocator is not explicitly
passed.
Introduce function_heading_for QuickBook template to allow separate
descriptions of swap(fiber), swap(packaged_task) and swap(promise).
Document async() using C++14 std::result_of_t and std::decay_t, aligning with
std::async() documentation.
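For reference, roughly the documented signature shape (policy and allocator
overloads omitted):

    #include <type_traits>
    #include <boost/fiber/all.hpp>

    template< typename Fn, typename ... Args >
    boost::fibers::future<
        std::result_of_t< std::decay_t< Fn >( std::decay_t< Args > ... ) >
    >
    async( Fn && fn, Args && ... args);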
Rework when_any / when_all examples to use unbounded_channel throughout, since
we always close() the channel after the first value anyway. bounded_channel
doesn't really add much value here.
Make wait_first_outcome_impl() infer its channel pointer type. That way we can
reuse that function instead of coding a separate wait_all_until_error_impl(),
which differs only in using the nchannel facade instead of directly pushing to
unbounded_channel.
Explain use of std::bind() to bind a lambda.
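A sketch of the combined shape; the names are illustrative and the
exception-capturing wrapper of the real example is elided. Templating on the
channel pointer type lets the same helper accept either the unbounded_channel
or the nchannel facade, and std::bind() is used because it can move the
(possibly move-only) callable into the bound object, which a C++11 lambda
capture cannot:

    #include <functional>
    #include <memory>
    #include <type_traits>
    #include <utility>
    #include <boost/fiber/all.hpp>

    template< typename ChannelPtr, typename Fn >
    void wait_first_outcome_impl( ChannelPtr chan, Fn && function) {
        boost::fibers::fiber(
            // std::bind() rather than lambda capture: bind() moves the
            // callable into the bound object and then passes it to the
            // lambda as an ordinary parameter
            std::bind(
                []( ChannelPtr chan, typename std::decay< Fn >::type & function) {
                    // the real example wraps this call so an exception is
                    // delivered through the channel as well
                    chan->push( function() );
                },
                chan,
                std::forward< Fn >( function)
            )
        ).detach();
    }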
Use a more nuanced discussion of promise lifetime in write_ec() example
function.
Use condition_variable::wait(lock, predicate) in a couple places in
work_sharing.cpp example.
- instead of using scheduling_algorithm::has_ready_fibers(), use an atomic
counter as the termination indication (see the sketch below)
- scheduling_algorithm::has_ready_fibers() returns true even if only the
main context and the dispatcher context are ready, which gives a false
indication for the work-sharing example
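A sketch of that termination scheme, shaped after work_sharing.cpp but
abbreviated; the fiber bodies and thread setup are elided, and
condition_variable_any is used here because the lock is a
std::unique_lock<std::mutex>:

    #include <atomic>
    #include <mutex>
    #include <boost/fiber/all.hpp>

    typedef std::unique_lock< std::mutex >  lock_t;

    static std::atomic< int >                       fiber_count{ 0 };
    static std::mutex                               mtx_count{};
    static boost::fibers::condition_variable_any    cnd_count{};

    void worker() {
        // ... the fiber's real work goes here ...
        lock_t lk( mtx_count);
        if ( 0 == --fiber_count) {      // last worker to finish
            lk.unlock();
            cnd_count.notify_all();     // wake the waiting main fiber
        }
    }

    void launch_and_wait() {
        for ( int i = 0; i < 10; ++i) {
            ++fiber_count;              // count each fiber before launching it
            boost::fibers::fiber( worker).detach();
        }
        lock_t lk( mtx_count);
        // suspend until every worker has finished; the predicate overload
        // replaces an explicit while loop around wait()
        cnd_count.wait( lk, [](){ return 0 == fiber_count; });
    }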
There was a bug when the ready queue wasn't empty but contained no
lower-priority fiber: in that case the fiber being awakened wasn't inserted at
all. We want the loop only to advance the iterator, and to perform the insert
regardless of where the iterator ends up; with this logic, empty() is no
longer a special case. (See the sketch below.)
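The corrected insertion logic, sketched in the same shape (and under the same
assumptions) as the priority_scheduler example above, and equivalent to the
std::find_if() form: the loop only advances the iterator, and the insert
happens unconditionally afterwards:

    void priority_scheduler::awakened( boost::fibers::context * ctx,
                                       priority_props & props) noexcept {
        int ctx_priority = props.get_priority();
        rqueue_t::iterator i( rqueue_.begin()), e( rqueue_.end());
        for ( ; i != e; ++i) {
            // stop at the first ready context with lower priority than ctx
            if ( properties( & * i).get_priority() < ctx_priority) {
                break;
            }
        }
        // insert before that context, or at end() if none was found; an empty
        // queue is no longer a special case
        rqueue_.insert( i, * ctx);
    }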
Restore the ~Verbose() message.