Monday, March 7, 2011

Don't COMMIT Too Often

Log Writer (LGWR) is the Oracle instance background process responsible for reading from the redo log buffer and writing to the online redo log files. LGWR writes to the redo log file:
1) Every 3 seconds
2) When the redo log buffer is one-third full
3) Before DBWR (Database Writer) process writes the modified buffers to the disk.
4) When COMMIT is issued

Here my focus is point 4. When a COMMIT is issued by a user, LGWR puts a commit record in the log buffer and then writes the changes made by the transaction to the CURRENT redo log file (to all members of the redo log group). The session issuing the COMMIT has to wait for a small (though usually negligible) amount of time until LGWR confirms to the corresponding server process that the changes have been written to the redo log file, which guarantees that the transaction is recoverable in case of a failure (instance or media). Remember that the actual changed data blocks (buffers) may still be in memory (the buffer cache) and may be chosen to be written to the datafiles at some later point in time (checkpoint, incremental checkpoint, etc.).
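As a rough way to see this activity on your own system, you can compare the cumulative commit and redo-write counters in the standard dynamic performance view V$SYSSTAT (a sketch; the statistic names below are the standard ones, but the exact figures will of course vary by system and version):

select name, value
  from v$sysstat
 where name in ('user commits', 'redo writes', 'redo size');

If 'user commits' is climbing at a rate close to 'redo writes', a large share of your redo I/O is being driven by commit frequency rather than by the buffer filling up.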

The point to note here is that there is a physical write to disk (the redo log file) at each COMMIT, which means more physical writes and more burden on the storage subsystem. Each single COMMIT by a session puts a very small, negligible load on the storage, but taken as a whole, if thousands of COMMITs are being issued per minute, the latency of the storage may certainly increase. This is why Oracle does not recommend using too many commits in your application code: you should try to COMMIT as infrequently as your business logic allows. As explained above, committing causes the contents of the log buffer to be written to the redo log files, which requires physical writes, and a "log file sync" wait event occurs every time a session waits for this write to the current redo log group. Although the write itself is performed by a background process, "log file sync" is experienced by the foreground session; if you find it in your top 5 wait events, you might need to look at your application code and discuss with your development team the COMMIT statements embedded in it. More commits mean more "log file sync" waits, which is eventually poisonous for the scalability of your system.

Let's take an example where we insert 100000 rows into a table, first committing after every insertion and then committing once after all 100000 insertions, and see the time difference. This difference also shows how each commit further slows down the writes.

Enter user-name: scott@usgumsd
Enter password:

Connected to:
Oracle Database 11g Release 11.1.0.7.0 - 64bit Production

USGUMSD$SCOTT> create table test (id number);

Table created.

USGUMSD$SCOTT> set timing on

USGUMSD$SCOTT> begin
  for i in 1..100000
  loop
    insert into test values(1);
    commit;
  end loop;
end;
/

PL/SQL procedure successfully completed.

Elapsed: 00:00:16.73

USGUMSD$SCOTT> truncate table test;

Table truncated.

Elapsed: 00:00:00.09
USGUMSD$SCOTT> begin
  for i in 1..100000
  loop
    insert into test values(1);
  end loop;
  commit;
end;
/

PL/SQL procedure successfully completed.

Elapsed: 00:00:12.12
USGUMSD$SCOTT>
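If eliminating the intermediate commits entirely is not an option (for example, to keep undo usage bounded during a very large load), a common compromise is to commit in batches every N rows. This is a sketch, not part of the test above; the batch size of 10000 is an arbitrary illustration and should be tuned for your own workload:

begin
  for i in 1..100000
  loop
    insert into test values(1);
    -- commit every 10000 rows instead of after every single row
    if mod(i, 10000) = 0 then
      commit;
    end if;
  end loop;
  commit; -- final commit for any remaining uncommitted rows
end;
/

This keeps the number of "log file sync" waits to a handful instead of 100000, while still releasing undo and locks periodically.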

Conclusion:
There is a difference of more than 4 seconds. If you have thousands of DMLs running simultaneously without taking care of too-frequent commits, you can imagine how much burden this puts on your I/O subsystem and how much it increases the "database time" in AWR.
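Before opening that discussion with the development team, it is worth checking how significant commit-time waits actually are on your system. A sketch against the standard V$SYSTEM_EVENT view (AWR and Statspack report the same event; TIME_WAITED is in centiseconds):

select event, total_waits, time_waited, average_wait
  from v$system_event
 where event = 'log file sync';

A high TOTAL_WAITS with a low AVERAGE_WAIT points at commit frequency; a high AVERAGE_WAIT points more toward slow redo log storage.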



