Let's say my software must run a sequence of consecutive SELECTs (on completely different tables) to gather various bits of data from the database, and we need none of these tables to change while we're collecting the data.
With PostgreSQL one can use the "repeatable read" or "serializable" isolation levels; however, if another transaction commits changes to a table we haven't referenced yet, we'll see those changes even if our transaction had already started, as the following sequence of actions shows:
T1: BEGIN ISOLATION LEVEL SERIALIZABLE;  -- imagine table t has 10 rows here
T2: INSERT INTO t VALUES(1, 2, 3);
T1: SELECT COUNT(*) FROM t;              -- we'll see 11 rows for the rest of the transaction
However, if T1 had accessed t before T2 did the insert, it would see 10 rows for the duration of the whole transaction:
T1: BEGIN ISOLATION LEVEL SERIALIZABLE;  -- imagine table t has 10 rows here
T1: SELECT COUNT(*) FROM t;              -- we'll see 10 rows for the rest of the transaction
T2: INSERT INTO t VALUES(1, 2, 3);
T1: SELECT COUNT(*) FROM t;              -- still sees 10 rows, etc.
With the above behavior, if we need to access several tables during the transaction, many of them may change in the interval between the start of the transaction and the moment we first access them, and we'll see those changes, which we don't want to see.
I understand this is the way isolation levels are supposed to work, so no explanation of that is needed here.
But then, is there a way to get some kind of "snapshot" starting at a given point in time? Are explicit locks needed in this case?
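One workaround I'm considering, though I'm not sure it's correct: since the snapshot appears to be taken at the first query of the transaction rather than at BEGIN, issuing a trivial query immediately might be enough to freeze the view of all tables at that moment (table names here are just placeholders):

T1: BEGIN ISOLATION LEVEL REPEATABLE READ;
T1: SELECT 1;                         -- assumption: this takes the snapshot now,
                                      -- before any other transaction can commit
T1: SELECT COUNT(*) FROM t;           -- would these all see the state
T1: SELECT COUNT(*) FROM other_table; -- as of the SELECT 1 above?
T1: COMMIT;

Is this reliable, or is there a proper mechanism for pinning a snapshot at a given instant?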