While reading the asio source code, I became curious about how asio synchronizes data between threads, even when an implicit strand is formed. Here is the relevant code in asio:
io_service::run
mutex::scoped_lock lock(mutex_);

std::size_t n = 0;
for (; do_run_one(lock, this_thread, ec); lock.lock())
  if (n != (std::numeric_limits<std::size_t>::max)())
    ++n;
return n;
io_service::do_run_one
while (!stopped_)
{
  if (!op_queue_.empty())
  {
    // Prepare to execute first handler from queue.
    operation* o = op_queue_.front();
    op_queue_.pop();
    bool more_handlers = (!op_queue_.empty());

    if (o == &task_operation_)
    {
      task_interrupted_ = more_handlers;

      if (more_handlers && !one_thread_)
      {
        if (!wake_one_idle_thread_and_unlock(lock))
          lock.unlock();
      }
      else
        lock.unlock();

      task_cleanup on_exit = { this, &lock, &this_thread };
      (void)on_exit;

      // Run the task. May throw an exception. Only block if the operation
      // queue is empty and we're not polling, otherwise we want to return
      // as soon as possible.
      task_->run(!more_handlers, this_thread.private_op_queue);
    }
    else
    {
      std::size_t task_result = o->task_result_;

      if (more_handlers && !one_thread_)
        wake_one_thread_and_unlock(lock);
      else
        lock.unlock();

      // Ensure the count of outstanding work is decremented on block exit.
      work_cleanup on_exit = { this, &lock, &this_thread };
      (void)on_exit;

      // Complete the operation. May throw an exception. Deletes the object.
      o->complete(*this, ec, task_result);

      return 1;
    }
  }
In do_run_one, the mutex is always unlocked before a handler executes. With an implicit strand the handlers never run concurrently, but here is the problem: thread A runs a handler that modifies some data, and thread B runs the next handler, which reads the data thread A just modified. Without the protection of the mutex, how does thread B see the changes thread A made? Unlocking the mutex before the handler executes does not establish a happens-before relationship between the threads that access the data touched by the handlers.
Digging one level deeper, I found that handler execution uses something called fenced_block:
completion_handler* h(static_cast<completion_handler*>(base));
ptr p = { boost::addressof(h->handler_), h, h };

BOOST_ASIO_HANDLER_COMPLETION((h));

// Make a copy of the handler so that the memory can be deallocated before
// the upcall is made. Even if we're not about to make an upcall, a
// sub-object of the handler may be the true owner of the memory associated
// with the handler. Consequently, a local copy of the handler is required
// to ensure that any owning sub-object remains valid until after we have
// deallocated the memory here.
Handler handler(BOOST_ASIO_MOVE_CAST(Handler)(h->handler_));
p.h = boost::addressof(handler);
p.reset();

// Make the upcall if required.
if (owner)
{
  fenced_block b(fenced_block::half);
  BOOST_ASIO_HANDLER_INVOCATION_BEGIN(());
  boost_asio_handler_invoke_helpers::invoke(handler, handler);
  BOOST_ASIO_HANDLER_INVOCATION_END;
}
What is this? I know that a fence is a synchronization primitive supported by C++11, but this fence is written entirely by asio itself. Does this fenced_block do the work of synchronizing the data?
UPDATED
After googling and reading this and this, asio does indeed use memory-fence primitives to synchronize data between threads, which is faster than holding the lock until handler execution completes (there is a measurable speed difference on x86). Incidentally, Java's volatile keyword is implemented the same way: a memory barrier is inserted after each write to the variable and before each read, which is what establishes its happens-before relationship.
I will accept an answer that briefly describes asio's memory-fence implementation, or that adds whatever I have missed or misunderstood.