Introduction
In a concurrent application, it is normal that multiple threads read or write the same data or have access to the same file or database connection.
The solution to these problems comes with the concept of the critical section.
A critical section is a block of code that accesses a shared resource and can't be executed by more than one thread at the same time.
When a thread wants to access a critical section, it uses one of these synchronization mechanisms to find out whether any other thread is executing the critical section. If there is, the thread is suspended by the synchronization mechanism until the thread that is executing the critical section ends it.
Synchronizing a method
The most basic synchronization mechanism in Java is the use of the synchronized keyword to control concurrent access to a method.
Only one execution thread will access a method of an object declared with the synchronized keyword. If another thread tries to access any method declared with the synchronized keyword on the same object, it will be suspended until the first thread finishes the execution of the method.
Static methods have a different behavior. Only one execution thread will access one of the static methods declared with the synchronized keyword (they lock on the Class object rather than on an instance), but another thread can access other non-static synchronized methods of an object of that class.
Two threads can access two different synchronized methods if one is static and the other one is not.
Only one thread can access the methods of an object that use the synchronized keyword in their declaration. If a thread (A) is executing a synchronized method and another thread (B) wants to execute another synchronized method of the same object, it will be blocked until thread (A) ends. But if thread (B) has access to a different object of the same class, neither thread will be blocked.
The synchronized keyword penalizes the performance of the application, so you must only use it on methods that modify shared data in a concurrent environment.
If you have multiple threads calling a synchronized method, only one will execute it at a time while the others wait. If the operation doesn't use the synchronized keyword, all the threads can execute it at the same time, reducing the total execution time.
If you know that a method will not be called by more than one thread, don't use the synchronized keyword.
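As a sketch of the behavior described above (the class name and loop counts are illustrative, not from the text), a counter whose methods are declared with the synchronized keyword stays consistent even when several threads modify it at once:

```java
// A counter protected by synchronized methods: only one thread at a
// time can execute increment() or get() on a given instance.
public class SynchronizedCounter {
    private int value = 0;

    public synchronized void increment() { value++; }
    public synchronized int get() { return value; }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter counter = new SynchronizedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // With synchronized, the result is always 2000; without it,
        // lost updates could make it smaller.
        System.out.println(counter.get());
    }
}
```

Removing the synchronized keyword from increment() would allow the two threads to interleave the read-modify-write, producing a result lower than 2000 on some runs.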
We can use the synchronized keyword to protect access to a block of code instead of an entire method. We should use the synchronized keyword in this way to protect access to the shared data, leaving the rest of the operations out of this block and obtaining better performance. The objective is to have the critical section be as short as possible.
Normally, we will use the this keyword to reference the object that is executing the method:
synchronized (this) {
// Java code
}
Arranging independent attributes in synchronized classes
When you use the synchronized keyword to protect a block of code, you must pass an object reference as a parameter. Normally, you will use the this keyword to reference the object that executes the method, but you can use other object references.
For example, if you have two independent attributes in a class shared by multiple threads, you must synchronize the access to each variable, but there is no problem if one thread accesses one of the attributes and another thread accesses the other at the same time.
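A minimal sketch of this idea, with illustrative class and attribute names: each independent attribute is protected by its own lock object, so updating one doesn't block updates to the other:

```java
// Two independent attributes guarded by two different lock objects.
// A thread selling tickets for cinema 1 only blocks other threads
// touching cinema 1; cinema 2 can be updated concurrently.
public class Cinema {
    private long vacanciesCinema1 = 20;
    private long vacanciesCinema2 = 20;

    private final Object controlCinema1 = new Object();
    private final Object controlCinema2 = new Object();

    public void sellTicketsCinema1(int number) {
        synchronized (controlCinema1) {
            vacanciesCinema1 -= number;
        }
    }

    public void sellTicketsCinema2(int number) {
        synchronized (controlCinema2) {
            vacanciesCinema2 -= number;
        }
    }

    public long getVacanciesCinema1() {
        synchronized (controlCinema1) {
            return vacanciesCinema1;
        }
    }

    public long getVacanciesCinema2() {
        synchronized (controlCinema2) {
            return vacanciesCinema2;
        }
    }
}
```

Using synchronized (this) in both methods instead would serialize all of them, needlessly coupling the two attributes.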
Using conditions in synchronized code
A classic problem in concurrent programming is the producer-consumer problem. We have a data buffer, one or more producers of data that save it in the buffer and one or more consumers of data that take it from the buffer.
As the buffer is a shared data structure, we have to control the access to it using a synchronization mechanism such as the synchronized keyword, but we have more limitations. A producer can't save data in the buffer if it's full and the consumer can't take data from the buffer if it's empty.
For these types of situations, Java provides the wait(), notify() and notifyAll() methods implemented in the Object class.
When a thread calls the wait() method, the JVM puts the thread to sleep and releases the object that controls the synchronized block of code it's executing, allowing the other threads to execute other blocks of synchronized code protected by that object. To wake up the thread, you must call the notify() or notifyAll() method inside a block of code protected by the same object.
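The producer-consumer limits described above can be sketched with wait() and notifyAll(); the class name and buffer capacity are illustrative:

```java
import java.util.LinkedList;
import java.util.Queue;

// A bounded buffer: producers block while it's full, consumers block
// while it's empty. wait() releases the monitor; notifyAll() wakes
// the waiting threads so they can recheck their condition.
public class EventStorage {
    private final int maxSize = 10;
    private final Queue<Integer> storage = new LinkedList<>();

    public synchronized void set(int value) throws InterruptedException {
        // The condition is rechecked in a loop because notifyAll()
        // doesn't guarantee that it's now true.
        while (storage.size() == maxSize) {
            wait();
        }
        storage.offer(value);
        notifyAll();
    }

    public synchronized int get() throws InterruptedException {
        while (storage.isEmpty()) {
            wait();
        }
        int value = storage.poll();
        notifyAll();
        return value;
    }
}
```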
Synchronizing a block of code with a Lock
Java provides another mechanism for the synchronization of blocks of code, based on the Lock interface and the classes that implement it (such as ReentrantLock). This mechanism presents some advantages, which are as follows:
- It allows the structuring of synchronized blocks in a more flexible way. With the synchronized keyword, you have to get and free the control over a synchronized block of code in a structured way. The Lock interfaces allow you to create more complex structures to implement your critical section.
- The Lock interfaces provide additional functionality over the synchronized keyword. One of the new features is implemented by the tryLock() method. With the synchronized keyword, when thread A tries to execute a synchronized block of code, if there is another thread B executing it, thread A is suspended until thread B finishes the execution of the synchronized block. With locks, you can execute the tryLock() method instead. This method returns a boolean value indicating whether the thread got the lock; it returns false if another thread is running the code protected by the lock.
- The Lock interfaces allow a separation of read and write operations having multiple readers and only one modifier.
- The Lock interfaces offer better performance than the synchronized keyword.
When we want to implement a critical section using locks and guarantee that only one execution thread runs a block of code, we have to create a ReentrantLock object.
At the beginning of the critical section, we have to get control of the lock using the lock() method. When thread A calls this method, if no other thread has control of the lock, the method gives thread A control of the lock and returns immediately, permitting this thread to execute the critical section.
Otherwise, if there is another thread B executing the critical section controlled by this lock, the lock() method puts the thread A to sleep until the thread B finishes the execution of the critical section.
At the end of the critical section, we have to use the unlock() method to free control of the lock and allow the other threads to run the critical section. If you don't call the unlock() method at the end of the critical section, the other threads that are waiting for that block will wait forever, causing a deadlock situation. If you use try-catch blocks in your critical section, don't forget to put the statement containing the unlock() method inside the finally section.
The Lock interface includes another method to get control of the lock: the tryLock() method. The biggest difference from the lock() method is that, if the thread that calls it can't get control of the lock, tryLock() returns immediately and doesn't put the thread to sleep.
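Putting lock(), unlock(), and tryLock() together, a critical section protected by a ReentrantLock might be sketched as follows (the class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// A critical section guarded by a ReentrantLock, with unlock() in a
// finally block so the lock is always released, even on an exception.
public class PrintQueue {
    private final ReentrantLock queueLock = new ReentrantLock();

    public void printJob(String document) {
        queueLock.lock();
        try {
            // Critical section: only one thread at a time reaches here.
            System.out.println("Printing: " + document);
        } finally {
            queueLock.unlock();
        }
    }

    public boolean tryPrintJob(String document) {
        // tryLock() returns immediately: true if the lock was acquired,
        // false if another thread holds it.
        if (queueLock.tryLock()) {
            try {
                System.out.println("Printing: " + document);
                return true;
            } finally {
                queueLock.unlock();
            }
        }
        return false;
    }
}
```

Note that a thread calling tryLock() is responsible for checking the returned value: if it's false, the thread must not enter the critical section.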
Synchronizing data access with read/write locks
ReentrantReadWriteLock has two locks, one for read operations and one for write operations. There can be more than one thread using read operations simultaneously, but only one thread can be using write operations. When a thread is doing a write operation, there can't be any thread doing read operations.
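A minimal sketch with illustrative names, where read operations take the read lock and write operations take the exclusive write lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A shared value guarded by a read/write lock: many threads may hold
// the read lock at the same time, but the write lock is exclusive.
public class PricesInfo {
    private double price = 1.0;
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public double getPrice() {
        lock.readLock().lock();
        try {
            return price;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void setPrice(double price) {
        lock.writeLock().lock();
        try {
            this.price = price;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

This split pays off when reads greatly outnumber writes, because readers never block each other.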
Modifying Lock fairness
The constructor of the ReentrantLock and ReentrantReadWriteLock classes admits a boolean parameter named fair that allows you to control the behavior of both classes.
The false value is the default and selects the non-fair mode. When there are several threads waiting for a lock and the lock has to select one of them to get access to the critical section, it selects one without following any criterion.
The true value is called the fair mode. When there are some threads waiting for a lock and the lock has to select one to get access to a critical section, it selects the thread that has been waiting for the most time.
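As a small illustration (the variable names are mine), fairness is simply a constructor argument, and both classes expose an isFair() method to query it:

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairnessExample {
    public static void main(String[] args) {
        // Fair mode: pass true to the constructor.
        ReentrantLock fairLock = new ReentrantLock(true);
        // Non-fair mode is the default.
        ReentrantLock nonFairLock = new ReentrantLock();
        ReentrantReadWriteLock fairRwLock = new ReentrantReadWriteLock(true);

        System.out.println(fairLock.isFair());      // true
        System.out.println(nonFairLock.isFair());   // false
        System.out.println(fairRwLock.isFair());    // true
    }
}
```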
As the tryLock() method doesn't put the thread to sleep when the lock is in use, the fair attribute doesn't affect its functionality.
While Thread 0 is running the first block of code protected by the lock, we have nine threads waiting to execute that block of code. When Thread 0 releases the lock, it immediately requests the lock again, so we have 10 threads trying to get the lock. As the fair mode is enabled, the Lock interface will choose Thread 1, as it's the thread that has been waiting the longest for the lock.
Using multiple conditions in a Lock
A lock may be associated with one or more conditions. The purpose of these conditions is to allow threads to have control of a lock and check whether a condition is true or not and, if it's false, be suspended until another thread wakes them up.
All the Condition objects are associated with a lock and created using the newCondition() method declared in the Lock interface. Before you can do any operation with a condition, you have to have control of the lock associated with the condition, so the operations with conditions must be in a block of code that begins with a call to the lock() method of a Lock object and ends with the unlock() method of the same Lock object.
When a thread calls the await() method of a condition, it automatically frees control of the lock, so that another thread can get it and begin the execution of the same critical section, or another one protected by that lock.
When a thread calls the signal() or signalAll() method of a condition, one or all of the threads that were waiting for that condition are woken up, but this doesn't guarantee that the condition that made them sleep is now true, so you must put the await() calls inside a while loop. You can't leave that loop until the condition is true; while it is false, you must call await() again.
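The pattern above can be sketched with a bounded buffer that uses one lock and two conditions; the names and capacity are illustrative:

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A bounded buffer with two conditions on one lock: producers wait on
// notFull, consumers wait on notEmpty.
public class Buffer {
    private final Queue<String> buffer = new LinkedList<>();
    private final int maxSize = 10;
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public void insert(String line) throws InterruptedException {
        lock.lock();
        try {
            // await() inside a while loop: the condition must be
            // rechecked after waking up, because signalAll() doesn't
            // guarantee that it's now true.
            while (buffer.size() == maxSize) {
                notFull.await();
            }
            buffer.offer(line);
            notEmpty.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public String get() throws InterruptedException {
        lock.lock();
        try {
            while (buffer.isEmpty()) {
                notEmpty.await();
            }
            String line = buffer.poll();
            notFull.signalAll();
            return line;
        } finally {
            lock.unlock();
        }
    }
}
```

Note that await() releases the lock while the thread sleeps and reacquires it before returning, which is why the whole body sits between lock() and unlock().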