Friday, March 30, 2018

Are uncommitted transactions written to datafiles?

Uncommitted changes are written to disk constantly, in all circumstances. It is perfectly normal.


Well, just to clarify: the log writer (LGWR) is writing committed and uncommitted transactions from the redo log buffer to the online redo log files more or less continuously - not just on commit. A redo write is triggered when the log buffer contains 1MB of redo, when it is 1/3 full, every 3 seconds, or on every commit - whichever comes first.
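A quick way to see this for yourself (assuming you can query the v$ performance views) is to compare the redo write count with the commit count. The two counters move independently: LGWR writes are triggered by the fill and timeout rules above even when nothing commits and, conversely, one write can cover several commits.

-- LGWR write activity vs. commit activity (statistic names as in V$SYSSTAT)
select name, value
  from v$sysstat
 where name in ('redo writes', 'user commits', 'redo size');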

And DBWR is more or less continuously checkpointing these days - with FAST_START_MTTR_TARGET and the other checkpoint-related parameters, DBWR is flushing dirty blocks to disk more or less "all of the time". There is still a major event called a checkpoint, but it is more of a marker than anything else, since the work is ongoing.
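You can watch this incremental checkpointing through V$INSTANCE_RECOVERY, which exposes the MTTR target, the current estimate, and how much checkpoint writing DBWR has done. A rough sketch (exact column availability varies a little by version):

-- Incremental checkpoint / crash recovery estimates
select target_mttr,        -- FAST_START_MTTR_TARGET in effect (seconds)
       estimated_mttr,     -- current estimated crash recovery time (seconds)
       actual_redo_blks,   -- redo blocks that would need to be applied right now
       target_redo_blks,   -- target the incremental checkpoint is working toward
       ckpt_block_writes   -- blocks written due to checkpointing
  from v$instance_recovery;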

1) Because you can do transactions that are much larger than available memory - there isn't room to buffer it all. And if you saved it all up, a commit could take a very, very long time - we want to get ahead of the curve.

You can have transactions that are much larger than available memory.

You can have transactions that are much larger than available online redo log (which leads to #2 in a second)

You want commit times (we optimize for commit, we do not plan on rolling back - rolling back is the EXCEPTION, not the rule, commits are the rule) to be flat - even - regardless of transaction size: it takes about the same amount of time to commit a 1-row update as it does a 1,000,000-row update (the SQL*Plus sketch after this list illustrates this).


2) DBWR will write out dirty blocks onto disk in order to

a) allow an online redo log file to be reused. If DBWR did not put the dirty blocks safely onto disk, we could not reuse the redo log file that contains the redo needed to redo those changes. If we cannot reuse the redo log file - we would STOP DEAD. Hence, since stopping dead in our tracks is a bad thing, we put the dirty blocks onto disk so we can reuse the redo information (the V$LOG query after this list shows which log groups are still waiting for this).

b) make more space in the buffer cache. You can have modifications that exceed the available SGA by many times, and we are multi-user, so many people can be doing things at once - we cannot be constrained by memory limitations.

c) limit the time it takes to recover from a crash. Say we were doing a big update and updated a ton of information. We commit. We crash immediately after the commit. If DBWR had left all of the dirty blocks in the cache, we'd have to wait during startup for that transaction to be replayed - and that could take a long time. In order to reduce the mean time to repair the database after a crash, we have things like FAST_START_MTTR_TARGET, which limits the amount of dirty blocks in the cache that would need recovery after a failure.
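To see the "flat commit time" point from 1) in practice, you can time a tiny transaction and a huge one in SQL*Plus. The table and columns here (big_table, id, last_seen) are purely hypothetical placeholders; the point is only that the elapsed time of the COMMIT itself stays roughly constant while the UPDATE times differ wildly.

set timing on

-- hypothetical table: big_table(id number, last_seen date)
update big_table set last_seen = sysdate where id = 1;           -- touches 1 row
commit;                                                          -- commit time: small and flat

update big_table set last_seen = sysdate where id <= 1000000;    -- touches ~1,000,000 rows
commit;                                                          -- commit time: about the same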
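And to see the redo log reuse dependency from 2a), V$LOG shows the status of each online redo log group - a group stays ACTIVE (not yet reusable) until DBWR has checkpointed the dirty blocks it protects, and only then becomes INACTIVE:

-- Online redo log groups and whether they can be reused yet
select group#, sequence#, status, archived
  from v$log
 order by sequence#;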
