How do I take ownership of an abandoned boost::interprocess::interprocess_mutex?

Problem description

My scenario: one server and some clients (though not many). The server can only respond to one client at a time, so they must be queued up. I'm using a mutex (boost::interprocess::interprocess_mutex) to do this, wrapped in a boost::interprocess::scoped_lock.
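
For reference, here is a minimal sketch of the kind of setup being described, assuming the mutex lives in a named managed shared-memory segment; the segment and object names ("server_shm", "request_mutex") are placeholders, not from the question:

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/sync/interprocess_mutex.hpp>
    #include <boost/interprocess/sync/scoped_lock.hpp>

    namespace bip = boost::interprocess;

    int main()
    {
        // Open (or create) a shared-memory segment and a mutex inside it.
        // "server_shm" and "request_mutex" are placeholder names.
        bip::managed_shared_memory shm(bip::open_or_create, "server_shm", 65536);
        bip::interprocess_mutex* mtx =
            shm.find_or_construct<bip::interprocess_mutex>("request_mutex")();

        {
            // RAII lock: released when the scope ends, but only if the
            // destructor actually runs, which is the crux of the question.
            bip::scoped_lock<bip::interprocess_mutex> lock(*mtx);
            // ... talk to the server while holding the lock ...
        }
        return 0;
    }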

The thing is, if one client dies unexpectedly (i.e. no destructor runs) while holding the mutex, the other clients are in trouble, because they are waiting on that mutex. I've considered using a timed wait, so if a client waits for, say, 20 seconds and doesn't get the mutex, it goes ahead and talks to the server anyway.
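
The timed-wait workaround would look roughly like the sketch below; the 20-second deadline is the one from the question, while the function name and surrounding structure are illustrative:

    #include <boost/interprocess/sync/interprocess_mutex.hpp>
    #include <boost/interprocess/sync/scoped_lock.hpp>
    #include <boost/date_time/posix_time/posix_time.hpp>

    namespace bip = boost::interprocess;

    bool talk_to_server(bip::interprocess_mutex& mtx)
    {
        boost::posix_time::ptime deadline =
            boost::posix_time::microsec_clock::universal_time() +
            boost::posix_time::seconds(20);

        // Try to acquire the mutex, giving up after 20 seconds.
        bip::scoped_lock<bip::interprocess_mutex> lock(mtx, deadline);
        if (!lock.owns()) {
            // Timed out: the holder may have died, but we cannot tell,
            // which leads to the problems described next.
            return false;
        }
        // ... talk to the server while holding the lock ...
        return true;
    }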

Problems with this approach: 1) it does this every time. If it's in a loop, talking constantly to the server, it needs to wait for the timeout every single time. 2) If there are three clients and one of them dies while holding the mutex, the other two will just wait 20 seconds and then talk to the server at the same time - exactly what I was trying to avoid.

So, how do I say to a client: "hey, this mutex seems to have been abandoned, go ahead and take ownership of it"?

Recommended answer

Unfortunately, this isn't supported by the boost::interprocess API as-is. There are a few ways you could implement it, however:

If you are on a POSIX platform with support for pthread_mutexattr_setrobust_np, edit boost/interprocess/sync/posix/thread_helpers.hpp and boost/interprocess/sync/posix/interprocess_mutex.hpp to use robust mutexes, and to somehow handle the EOWNERDEAD return from pthread_mutex_lock.
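
For illustration, here is a minimal sketch of a robust, process-shared POSIX mutex outside of Boost, using the standardized names (pthread_mutexattr_setrobust, pthread_mutex_consistent) rather than the older _np variants; mapping the mutex into memory shared by all processes is assumed to happen elsewhere:

    #include <pthread.h>
    #include <errno.h>

    // Initialise a process-shared, robust mutex. The caller is assumed to
    // have placed 'm' in memory mapped into every participating process.
    void init_robust_mutex(pthread_mutex_t* m)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
        pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    // Lock, taking ownership if the previous owner died while holding it.
    int lock_robust(pthread_mutex_t* m)
    {
        int rc = pthread_mutex_lock(m);
        if (rc == EOWNERDEAD) {
            // We own the mutex now, but the protected data may be in an
            // inconsistent state: repair it, then mark the mutex usable.
            pthread_mutex_consistent(m);
            rc = 0;
        }
        return rc;
    }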

If you are on some other platform, you could edit boost/interprocess/sync/emulation/interprocess_mutex.hpp to use a generation counter, with the locked flag in the lower bit. Then you can create a reclaim protocol that sets a flag in the lock word to indicate a pending reclaim, then does a compare-and-swap after a timeout to check that the same generation is still in the lock word, and if so replaces it with a locked next-generation value.
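
A rough sketch of that lock-word idea, written with std::atomic purely to illustrate the protocol (the real change would live inside the Boost emulation header; this version omits the pending-reclaim flag and assumes the atomic sits in shared memory):

    #include <atomic>
    #include <chrono>
    #include <cstdint>
    #include <thread>

    // Lock word layout (illustrative): bit 0 = locked flag,
    // remaining bits = generation counter.
    struct gen_lock
    {
        std::atomic<std::uint32_t> word{0};
        static constexpr std::uint32_t locked_bit = 1u;

        // Normal acquire: CAS the current unlocked word to its locked form.
        bool try_lock()
        {
            std::uint32_t w = word.load();
            if (w & locked_bit) return false;
            return word.compare_exchange_strong(w, w | locked_bit);
        }

        // Release: clear the flag and advance the generation, so a stale
        // reclaim attempt based on the old word will fail its CAS.
        void unlock()
        {
            std::uint32_t w = word.load();
            word.store(((w >> 1) + 1) << 1);
        }

        // Reclaim: if exactly the same locked word is still there after the
        // timeout, assume the holder died and install a locked value with
        // the next generation number.
        bool reclaim_after(std::chrono::milliseconds timeout)
        {
            std::uint32_t observed = word.load();
            if (!(observed & locked_bit)) return false;  // nothing to reclaim
            std::this_thread::sleep_for(timeout);
            std::uint32_t next = (((observed >> 1) + 1) << 1) | locked_bit;
            return word.compare_exchange_strong(observed, next);
        }
    };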

If you're on Windows, another good option would be to use native mutex objects; they'll likely be more efficient than busy-waiting anyway.
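
This works because WaitForSingleObject on a Windows mutex object returns WAIT_ABANDONED when the previous owner exited without releasing it, and the waiter then acquires ownership - exactly the "take over an abandoned mutex" behaviour being asked for. A minimal sketch, with a placeholder mutex name:

    #include <windows.h>

    int main()
    {
        // Named mutex shared between processes; the name is a placeholder.
        HANDLE h = CreateMutexW(nullptr, FALSE, L"Global\\server_request_mutex");
        if (h == nullptr) return 1;

        DWORD rc = WaitForSingleObject(h, INFINITE);
        if (rc == WAIT_OBJECT_0 || rc == WAIT_ABANDONED) {
            // WAIT_ABANDONED: the previous owner died while holding the
            // mutex; we now own it and can recover, then continue.
            // ... talk to the server ...
            ReleaseMutex(h);
        }
        CloseHandle(h);
        return 0;
    }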

You may also want to reconsider the use of a shared-memory protocol - why not use a network protocol instead?
