Object lifetime and threading

Last time we talked about object lifetime and ownership. Naturally, scopes and objects form a tree hierarchy. The root of the tree is the scope where the program starts executing. Beyond the tree structure, we can pass information between scopes with the help of dynamic lifetime. Dynamic lifetime is hard to manage and is also the #1 source of bugs ("use after free") in C programs. The concept of ownership can simplify many useful cases of dynamic lifetime.

Threads

Threads make lifetime more complex. We now have several starting points from which threads execute independently. No assumption can be made about the progress of each thread: at any given point in thread A, there is usually no guarantee whether an object in thread B has been initialized yet or has already been destroyed. That makes data sharing between threads extremely difficult. As it turns out, making sure the data stays alive while being shared is another hard problem to solve. When we get it wrong, our C program crashes and throws "core dumps" at us. There are many clever ways to guarantee liveness, but we are more interested in the foolproof ways that take advantage of single ownership.

Copy instead of share

Sharing implies more than one owner, and multiple owners are hard to coordinate. Instead of sharing, we make copies of the data, one for each of the interested parties. Those copies have independent lifetimes, each owned by one thread. That simplifies the situation because each copy has a single owner.

In the case of sharing between two parties, the actual copying can be saved if the source (initiator) of sharing does not need the object anymore. Then sharing is reduced to a simple "transfer ownership" operation.

There are other ways of describing this strategy. We can think of it as sending a message: the source of sharing makes a copy of the shared data and sends it to the target as a message. A message, by definition, is out of the source's control after being sent, and the target owns it after receiving it. An RPC request from the source to the target achieves the same goal.

The famous paper Communicating Sequential Processes proposed a similar strategy. Shared data is considered an output of the source and an input of the target. An output cannot be modified after the source emits it. An input is solely owned by the target thread. To some extent, the input/output metaphor is similar to the messaging metaphor.

By avoiding sharing, we avoid the difficulties of managing shared lifetimes. The drawbacks are more memory usage and more CPU time to copy data.

Reference counting

Reference counting is widely adopted as a native feature in many programming languages. For each piece of data, we keep a count of how many outstanding references there are. The data dies when there are no more references out there.

The advantage of reference counting is that it can be fully automatic: the programmer no longer needs to manage lifetime manually. The drawback is that it is usually unpredictable when the underlying object dies, or which thread ends up cleaning it up. This can be a problem in languages that use RAII extensively, like C++. Sometimes it is important to not run certain destructors on certain threads.

Speaking of C++, shared_ptr is the tool that implements the reference counting strategy. unique_ptr is often listed side by side with shared_ptr. They happen to correspond to the two strategies we talked about: unique_ptr is about message passing, while shared_ptr is about data sharing.

Carrier

The Carrier pattern improves upon reference counting and addresses the destruction problem. In this pattern, there is a Carrier<T> that owns an instance of T. The carrier distributes references to the owned instance. References can be passed around and may be used in other threads, and they are guaranteed to be valid. When the shutdown procedure starts, the carrier stops producing references and waits for all the references it gave out to be returned. Gradually, the other parties drop their references after receiving the shutdown signal. Once all references are dropped, the instance of T is no longer shared but solely owned by the carrier. We can then drop the instance or run cleanups that require an owned instance.

Careful planning

There are really clever ways to manage object lifetime by planning very carefully. For example, for each function, we can be very clear about who is responsible for cleaning up the objects involved in the call. While it can be done, I would not recommend implementing that kind of cleverness regularly. Try to fit your use cases into one of the regular strategies first. If nothing works, maybe you should roll your own.

Thread Safety

Note that none of these strategies helps with the thread safety of the object being shared. Thread safety is about:

  • If one thread reads the shared data, could it see partially updated / invalid data? Could the data change while the thread is executing?
  • If one thread writes the shared data, could its writes be observed partially by other threads? Could its writes be partially overwritten by other threads?

Even the copying strategy is vulnerable to objects that are not thread safe. That is because objects can have references to other objects that are not deeply copied. The inner objects are still shared by all threads, although each of the outer objects has only one owner.

Generally speaking, an object that does not have mutable internal state is safe to share between threads. Any immutable reference to such objects is safe to send to another thread. If you are familiar with Rust, I believe those are the definitions of Sync and Send.

Conclusion

Managing lifetimes across threads is hard. There are clever ways to coordinate between threads. By preferring single ownership, we found three simple but powerful strategies for special use cases.
