A high commit frequency also increases the overhead that is associated with starting and ending transactions. At the beginning of each transaction, Oracle assigns a rollback segment (called a rollback segment binding) and updates the transaction table in the rollback segment header. The transaction table must also be updated at the end of each transaction, followed by a commit cleanout activity. Updates to the rollback segment headers must also be recorded in the log buffer because the blocks are dirtied. Therefore, make the necessary adjustments in the application so that it commits only at the end of a unit of work. A commit statement that is inside a loop may need to be moved out of the loop so that the job commits only once instead of once in every loop iteration. If the session that spends a lot of time on the log file sync event is a persistent connection from a middle-tier layer, then this is a tough case because it services many front-end users. You have to trace the session with event 10046 and observe the application behavior over time. Look for the log file sync event in the trace file; it may give you an idea of the commit frequency. Alternatively, you can mine the redo log files with Oracle LogMiner. This will show you the systemwide commit behavior. In OLTP databases, you normally notice a high log file sync wait time at the system level (V$SYSTEM_EVENT) but not at the session level. The high system-level wait time may be driven by many short transactions from OLTP sessions that actively log in and out of the database. If this is your scenario, then about the only thing you can do in the database is to ensure a smooth I/O path for the LGWR process. This includes using asynchronous I/O and putting your log files on raw devices or an equivalent, such as Veritas Quick I/O, serviced by dedicated I/O controllers or, better yet, using solid-state disks for the log files.
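The diagnostic steps above can be sketched in SQL. The V$ views and the event 10046 syntax shown are standard Oracle features; treat this as a starting sketch rather than a complete procedure, since the sessions worth tracing will vary by system:

```sql
-- Systemwide picture: is log file sync significant overall?
-- (TIME_WAITED and AVERAGE_WAIT are reported in centiseconds.)
SELECT event, total_waits, time_waited, average_wait
  FROM v$system_event
 WHERE event = 'log file sync';

-- Per-session picture: in OLTP systems this often shows little,
-- because the waits are spread across many short-lived sessions.
SELECT sid, total_waits, time_waited
  FROM v$session_event
 WHERE event = 'log file sync'
 ORDER BY time_waited DESC;

-- Trace the current session with event 10046 at level 8
-- (level 8 includes wait events in the trace file).
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- ... run the workload, then turn tracing off:
ALTER SESSION SET EVENTS '10046 trace name context off';
```

To trace another session (such as a middle-tier persistent connection), the same event can be set from outside with the unsupported DBMS_SYSTEM.SET_EV procedure, given the target's SID and serial number.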
High commit frequency is the number one cause of foreground log file sync waits. Find out if the session that spends a lot of time on the log file sync event belongs to a batch or OLTP process, or if it is a middle-tier (Tuxedo, WebLogic, etc.) persistent connection. If the session belongs to a batch process, it may be committing each database change inside a loop. Discover the module name and ask the developer to review the code to see if the number of commits can be reduced. This is an application issue, and the solution is simply to eliminate unnecessary commits and reduce the overall commit frequency. Some application developers have learned that if they commit infrequently, jobs may fail because rollback segments run out of space, and they get calls in the middle of the night. Those who have been bitten by deadlocks may have been told to commit more frequently. Naturally, they become commit-happy people. The right thing to do is to properly define what a transaction is and commit at the end of each transaction. A unit of work should either succeed or fail in its entirety. The proper place for a commit or rollback is at the end of each unit of work. Do not introduce additional commits for the sake of rollback segment space or deadlocks; that is treating the symptom and not the problem. If none of the existing rollback segments can handle the unit of work, then you as the DBA should provide one that will. (If you are using Automatic Undo Management, introduced in Oracle9i Database, allocate enough space to the undo tablespace and set an appropriate undo retention time.) Introducing additional commits can create other problems, among them the infamous ORA-01555: snapshot too old error, because the rollback (or undo) data can be overwritten.
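The commit-in-loop anti-pattern and its fix can be sketched in PL/SQL. This is a minimal illustration; the table and column names (staging_orders, orders, status) are hypothetical:

```sql
-- Anti-pattern: committing inside the loop forces LGWR to flush
-- the log buffer (one log file sync wait) on every iteration.
BEGIN
  FOR r IN (SELECT order_id FROM staging_orders) LOOP
    UPDATE orders SET status = 'PROCESSED' WHERE order_id = r.order_id;
    COMMIT;  -- one log file sync per row
  END LOOP;
END;
/

-- Better: treat the whole batch as one unit of work and commit once.
BEGIN
  FOR r IN (SELECT order_id FROM staging_orders) LOOP
    UPDATE orders SET status = 'PROCESSED' WHERE order_id = r.order_id;
  END LOOP;
  COMMIT;  -- a single log file sync for the entire job
END;
/
```

Note that committing once also makes the batch atomic, so a failure partway through can be rolled back cleanly, at the cost of needing enough undo space to hold the whole unit of work.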