I want to free my code from the five std::mutex::unlock calls per function in favor of std::lock_guard. But I have the problem that I have to keep the mutex locked when entering asynchronous callbacks.

Take this code for example:

std::map<std::size_t, std::set<std::size_t>> my_map;

size_t bar1 = ...;
size_t bar2 = ...;

std::lock_guard guard(my_map); // lock the map
my_map[p].insert(10);
foo(bar1, [&my_map](const auto& p){
    my_map[p].insert(10);

    // here we can unlock
});

foo is computing something and then asynchronously calling the given lambda function, passing a parameter p to it. I need my_map to be locked the whole time. Keep in mind that this is just a code example which might not match the real problem, so please don't optimize my given code.

corristo@programming.dev 6 points 1 year ago (last edited 1 year ago)

std::lock_guard and its superior alternative std::scoped_lock both deliberately cannot be moved, so you won't be able to make this work with those types. But assuming foo causes the lambda to be destroyed once it is done executing, you can use the more flexible std::unique_lock and move that into the lambda:

std::unique_lock lock(my_map); // as in your sketch; a real unique_lock takes a std::mutex, not the map
my_map[p].insert(10);
foo(bar1, [&my_map, l = std::move(lock)](const auto& p) {
    my_map[p].insert(10);
}); // the lock is released when foo destroys the lambda (and the unique_lock moved into it)

If the lambda does not capture by value anything whose destructor could call code that tries to lock this same mutex, and foo destroys the lambda immediately after it has finished executing, then this does exactly what you want.
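
For reference, a minimal compilable sketch of this idea, assuming a separate std::mutex protecting the map and a hypothetical run_async() standing in for foo(). One caveat: std::mutex must be unlocked by the thread that locked it, so moving the lock into the handler is only valid if foo runs and destroys the handler on that same thread.

#include <cstddef>
#include <map>
#include <mutex>
#include <set>

std::map<std::size_t, std::set<std::size_t>> my_map;
std::mutex my_map_mutex; // protects my_map

// Hypothetical stand-in for foo(): "computes" a key, then invokes the handler with it.
template <class Handler>
void run_async(std::size_t bar, Handler handler) {
    handler(bar * 2); // in real code this would happen later, from an event loop on this thread
}   // handler, and the unique_lock moved into it, is destroyed here, unlocking the mutex

void example(std::size_t bar1) {
    std::unique_lock lock(my_map_mutex); // locked here...
    my_map[1].insert(10);
    run_async(bar1, [l = std::move(lock)](std::size_t p) {
        my_map[p].insert(10);            // ...and still locked inside the completion handler
    });
    // lock has been moved from at this point; nothing is unlocked here
}

int main() { example(21); }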

__mk__@lemmy.blahaj.zone 4 points 1 year ago (last edited 1 year ago)

A few points:

  1. you should (strictly) prefer std::scoped_lock over std::lock_guard.
  2. your scoped lock takes a std::mutex, not a map (i.e. in its constructor)
  3. the lambda passed to foo is called a completion handler; one way to thread a bunch of (related) handlers without needing explicit locks is to use so-called strands. As long as all the operations that have to be performed serially are coroutines spawned within the strand in question, you can actually have a whole thread pool of executors running, and asio will take care of all the locking complexity for you (see the sketch after this list).
  4. you're using p both in the outer block and inside the completion handler, so be aware that the outer p has to be well-defined, and that the inner one (in the lambda) shadows it. (I'm a fan of shadowing, btw; the company I used to work at had lint settings which yelled whenever shadowing happened, but for me it's one of the features I want, because it leads to more concise, uniform, clear names -- shadowing allows them to be reused in a specific context... anyway)
  5. modern C++ tends to favour async-style code. Instead of passing a completion handler to foo, you make foo an awaitable functor which produces an index (p, above), one which we can co_await, as in:
    // note: the code below has to run inside a coroutine
    ...
    my_map[q].insert(10); // outer p renamed to q, since async style leaves it in the same scope as the p below
    const auto& p = co_await foo(bar1);
    // use p
    
  6. If you want to do 5 on existing code, follow the guidelines here to wrap legacy callback code into something that works with C++20 awaitables.
  7. If I may be so bold, you're describing something intrinsically async, so you may want to consider using boost::asio; then you get access to all of this.
  8. and nowadays all of this is dead easy to install using Conan
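
To make points 3 and 5 concrete, a minimal sketch, assuming a reasonably recent Boost.Asio (1.73+) compiled as C++20; process() and the timer-based foo() are just stand-ins for the real work. Every coroutine that touches my_map is spawned on one strand, so the mutations are serialized even though io_context::run() is driven by a thread pool and no mutex appears anywhere:

#include <boost/asio.hpp>
#include <chrono>
#include <cstddef>
#include <map>
#include <set>
#include <thread>
#include <vector>

namespace asio = boost::asio;

std::map<std::size_t, std::set<std::size_t>> my_map; // only ever touched from the strand

// Stand-in for the asynchronous foo(): waits a bit, then "produces" a key.
asio::awaitable<std::size_t> foo(std::size_t bar) {
    asio::steady_timer timer(co_await asio::this_coro::executor);
    timer.expires_after(std::chrono::milliseconds(10));
    co_await timer.async_wait(asio::use_awaitable);
    co_return bar * 2;
}

asio::awaitable<void> process(std::size_t bar) {
    my_map[0].insert(10);             // safe: we are running on the strand
    const auto p = co_await foo(bar); // suspends; the strand is free for other work meanwhile
    my_map[p].insert(10);             // still serialized by the strand, no mutex needed
}

int main() {
    asio::io_context io;
    auto strand = asio::make_strand(io); // serializes everything spawned on it

    for (std::size_t bar = 0; bar < 4; ++bar)
        asio::co_spawn(strand, process(bar), asio::detached);

    // A pool of threads drives the io_context; the strand still guarantees that
    // the coroutines above never run concurrently with one another.
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&io] { io.run(); });
    for (auto& t : pool)
        t.join();
}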