Oracle hang: large numbers of "log file sync" and "buffer exterminate" waits
Processes:
select a.EVENT,count(*) from v$session a group by a.EVENT
1 log file sync 449
2 SQL*Net message from client 203
3 enq: TX - row lock contention 1
4 SQL*Net break/reset to client 1
5 row cache lock 1
6 rdbms ipc message 4
7 pmon timer 1
8 latch free 1
9 cleanup of aborted process 1
10 latch: redo writing 2
11 buffer exterminate 48
12 enq: PR - contention 2
13 enq: SQ - contention 35
14 Streams AQ: waiting for time management or cleanup tasks 5
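For readability, a hedged variant of the query above that sorts the events by session count (the WAIT_CLASS column assumes 10g or later):
{code}
-- Assumption: 10g+ so v$session exposes EVENT and WAIT_CLASS directly.
select s.event, s.wait_class, count(*) as session_count
from   v$session s
group  by s.event, s.wait_class
order  by session_count desc;
{code}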
Edited by: user10731836
State of nodes
([nodenum]/cnode/sid/sess_srno/session/ospid/state/start/finish/[adjlist]/predecessor):
[0]/0/1/9438/0xdc7548b0/22169/NLEAF/1/4/[993]/none
[1]/0/2/56592/0xdf743dd0/23484/IGN_DMP/5/14/[1126]/none
[2]/0/3/42105/0xdc755ce8/24207/IGN/15/16/[1126]/none
[4]/0/5/4813/0xdc757120/25677/IGN_DMP/17/20/[429]/none
[5]/0/6/33840/0xdf746640/25783/IGN/21/22/[429]/none
[7]/0/8/33711/0xdf747a78/22599/IGN/23/24//none
[9]/0/10/3298/0xdf748eb0/22241/IGN/25/26//none
[12]/0/13/7575/0xdc75c200/25541/IGN/27/28/[429]/none
[13]/0/14/44594/0xdf74b720/25643/IGN/29/30/[429]/none
[14]/0/15/5377/0xdc75d638/23566/IGN/31/32/[1126]/none
[18]/0/19/36215/0xdc75fea8/25305/IGN/33/34/[429]/none
[19]/0/20/291/0xdf74f3c8/25339/IGN/35/36/[429]/none
[20]/0/21/62414/0xdc7612e0/22397/IGN/37/38//none
[21]/0/22/10832/0xdf750800/22637/IGN/39/40/[1126]/none
[23]/0/24/22403/0xdf751c38/25707/IGN/41/42/[429]/none
[25]/0/26/49869/0xdf753070/25418/IGN/43/44/[429]/none
[27]/0/28/43401/0xdf7544a8/22289/IGN/45/46//none
[29]/0/30/63602/0xdf7558e0/25509/IGN/47/48/[429]/none
[30]/0/31/44102/0xdc7677f8/22215/IGN/49/50//none
[31]/0/32/59135/0xdf756d18/23837/IGN/51/52/[1126]/none
[32]/0/33/13666/0xdc768c30/25333/IGN/53/54/[429]/none
[36]/0/37/23992/0xdc76b4a0/22209/IGN/55/56//none
[39]/0/40/11000/0xdf75bdf8/23891/IGN/57/58/[1126]/none
[40]/0/41/27214/0xdc76dd10/23975/IGN/59/60/[1126]/none
[42]/0/43/53314/0xdc76f148/25709/IGN/61/62/[429]/none
[45]/0/46/49651/0xdf75faa0/25737/IGN/63/64/[429]/none
[46]/0/47/42472/0xdc7719b8/25797/IGN/65/66/[429]/none
[49]/0/50/44503/0xdf762310/25087/IGN/67/68//none
[50]/0/51/19119/0xdc774228/22641/IGN_DMP/69/72/[549]/none
[51]/0/52/10012/0xdf763748/25813/IGN/73/74/[429]/none
[52]/0/53/26163/0xdc775660/23851/IGN/75/76//none
[53]/0/54/23519/0xdf764b80/25352/IGN/77/78/[429]/none
[54]/0/55/8911/0xdc776a98/22307/IGN/79/80//none
[58]/0/59/16077/0xdc779308/25565/IGN/81/82/[429]/none
[59]/0/60/47926/0xdf768828/25627/IGN/83/84/[429]/none
[60]/0/61/24262/0xdc77a740/25675/IGN/85/86/[429]/none
[64]/0/65/33671/0xdc77cfb0/22185/IGN/87/88/[1126]/none
[67]/0/68/28365/0xdf76d908/22601/IGN/89/90//none
[68]/0/69/13003/0xdc77f820/22549/IGN/91/92//none
[70]/0/71/5318/0xdc780c58/25639/IGN/93/94/[429]/none
[71]/0/72/26583/0xdf770178/25416/IGN/95/96/[429]/none
[72]/0/73/17584/0xdc782090/23362/IGN/97/98/[1126]/none
[76]/0/77/4123/0xdc784900/22325/IGN/99/100/[1126]/none
[77]/0/78/1857/0xdf773e20/23769/IGN/101/102//none
[79]/0/80/64588/0xdf775258/25711/IGN/103/104/[429]/none
[81]/0/82/373/0xdf776690/22249/IGN/105/106//none
[82]/0/83/20000/0xdc7885a8/22287/IGN/107/108/[549]/none
[84]/0/85/35408/0xdc7899e0/24505/IGN/109/110/[1126]/none
[85]/0/86/63617/0xdf778f00/22273/IGN/111/112//none
[86]/0/87/41142/0xdc78ae18/25285/SINGLE_NODE/113/114//none
[87]/0/88/1314/0xdf77a338/25803/IGN/115/116/[429]/none
[88]/0/89/22658/0xdc78c250/25095/IGN/117/118/[1126]/none
[89]/0/90/18817/0xdf77b770/24115/IGN/119/120/[1126]/none
[90]/0/91/37733/0xdc78d688/25681/IGN/121/122/[429]/none
[91]/0/92/8288/0xdf77cba8/25327/IGN/123/124/[429]/none
[93]/0/94/22259/0xdf77dfe0/23151/IGN/125/126/[1126]/none
[94]/0/95/43404/0xdc78fef8/25460/IGN/127/128/[429]/none
[95]/0/96/24277/0xdf77f418/22335/IGN/129/130/[1126]/none
[97]/0/98/5276/0xdf780850/25715/IGN/131/132/[429]/none
[98]/0/99/36896/0xdc792768/21873/IGN/133/134/[1126]/none
[100]/0/101/44196/0xdc793ba0/25619/IGN/135/136/[429]/none
[102]/0/103/60298/0xdc794fd8/25438/IGN/137/138/[429]/none
[103]/0/104/59238/0xdf7844f8/22225/IGN/139/140//none
[106]/0/107/44741/0xdc797848/22315/IGN/141/142//none
[107]/0/108/25523/0xdf786d68/23841/IGN/143/144/[1126]/none
[109]/0/110/30811/0xdf7881a0/25649/IGN/145/146/[429]/none
[111]/0/112/38952/0xdf7895d8/23561/IGN/147/148/[1126]/none
[112]/0/113/36048/0xdc79b4f0/25717/IGN/149/150/[429]/none
[113]/0/114/17864/0xdf78aa10/25380/IGN/151/152/[429]/none
[116]/0/117/63454/0xdc79dd60/22445/IGN/153/154//none
[117]/0/118/50482/0xdf78d280/25759/IGN/155/156/[429]/none
[119]/0/120/4493/0xdf78e6b8/25513/IGN/157/158/[429]/none
[120]/0/121/58635/0xdc7a05d0/24047/IGN/159/160/[1126]/none
[123]/0/124/3631/0xdf790f28/24213/IGN/161/162/[1126]/none
[127]/0/128/1031/0xdf793798/25749/IGN/163/164/[429]/none
[128]/0/129/60958/0xdc7a56b0/25821/IGN/165/166/[429]/none
[129]/0/130/54429/0xdf794bd0/22295/IGN/167/168//none
[130]/0/131/34304/0xdc7a6ae8/25713/IGN/169/170/[429]/none
[132]/0/133/55106/0xdc7a7f20/23596/IGN/171/172/[1126]/none
[133]/0/134/64234/0xdf797440/25362/IGN/173/174/[429]/none
[134]/0/135/4308/0xdc7a9358/23887/IGN/175/176//none
[135]/0/136/47850/0xdf798878/25398/IGN/177/178/[429]/none
[136]/0/137/9393/0xdc7aa790/25154/IGN/179/180/[1126]/none
[137]/0/138/46337/0xdf799cb0/22984/SINGLE_NODE/181/182//none
[140]/0/141/44709/0xdc7ad000/22117/IGN/183/184/[1126]/none
[141]/0/142/53618/0xdf79c520/25767/IGN/185/186/[429]/none
[142]/0/143/52925/0xdc7ae438/22275/IGN/187/188//none
[143]/0/144/49358/0xdf79d958/25659/IGN/189/190/[429]/none
[144]/0/145/20368/0xdc7af870/25468/IGN/191/192/[429]/none
[146]/0/147/17036/0xdc7b0ca8/22057/IGN/193/194/[1126]/none
[148]/0/149/57049/0xdc7b20e0/25297/IGN/195/196/[1126]/none
[149]/0/150/38466/0xdf7a1600/13729/SINGLE_NODE/197/198//none
[150]/0/151/48009/0xdc7b3518/23349/IGN/199/200/[1126]/none
[151]/0/152/7666/0xdf7a2a38/25563/IGN/201/202/[429]/none
[154]/0/155/7991/0xdc7b5d88/22199/IGN/203/204//none
[155]/0/156/37925/0xdf7a52a8/25313/IGN/205/206/[429]/none
[156]/0/157/864/0xdc7b71c0/24995/LEAF/207/208//163
[157]/0/158/13628/0xdf7a66e0/23883/IGN/209/210/[1126]/none
[158]/0/159/14300/0xdc7b85f8/25571/IGN/211/212/[429]/none
[159]/0/160/27016/0xdf7a7b18/25657/IGN/213/214/[429]/none
[161]/0/162/63060/0xdf7a8f50/22235/NLEAF/215/216/[993]/none
[163]/0/164/61480/0xdf7aa388/25089/NLEAF/217/218/[156]/none
[164]/0/165/10545/0xdc7bc2a0/23382/IGN/219/220/[1126]/none
[165]/0/166/55572/0xdf7ab7c0/24409/IGN/221/222/[1126]/none
[166]/0/167/22862/0xdc7bd6d8/22409/IGN/223/224//none
[168]/0/169/62001/0xdc7beb10/22976/IGN/225/226/[1126]/none
[170]/0/171/32561/0xdc7bff48/25583/IGN/227/228/[429]/none
[171]/0/172/14402/0xdf7af468/23775/IGN/229/230/[1126]/none
[172]/0/173/38158/0xdc7c1380/22447/IGN/231/232/[1126]/none
[173]/0/174/49829/0xdf7b08a0/24591/IGN/233/234/[1126]/none
[176]/0/177/27460/0xdc7c3bf0/25370/IGN/235/236/[429]/none
[177]/0/178/57900/0xdf7b3110/24527/IGN/237/238/[1126]/none
[178]/0/179/29624/0xdc7c5028/25501/IGN/239/240/[429]/none
[179]/0/180/52551/0xdf7b4548/22139/SINGLE_NODE/241/242//none
[180]/0/181/31071/0xdc7c6460/22265/NLEAF/243/244/[993]/none
[183]/0/184/5951/0xdf7b6db8/25390/IGN/245/246/[429]/none
[184]/0/185/31216/0xdc7c8cd0/24545/IGN/247/248/[1126]/none
[187]/0/188/52370/0xdf7b9628/25577/IGN/249/250/[429]/none
[190]/0/191/44844/0xdc7cc978/23903/IGN/251/252/[549]/none
[192]/0/193/6923/0xdc7cddb0/22297/IGN/253/254/[1126]/none
[193]/0/194/61128/0xdf7bd2d0/25481/IGN/255/256/[429]/none
[194]/0/195/62087/0xdc7cf1e8/23406/IGN/257/258//none
[195]/0/196/46671/0xdf7be708/25350/IGN/259/260/[429]/none
[196]/0/197/9913/0xdc7d0620/22137/IGN/261/262/[1126]/none
[198]/0/199/50992/0xdc7d1a58/23071/IGN/263/264//none
[204]/0/205/6204/0xdc7d5700/25609/IGN/265/266/[429]/none
[206]/0/207/25523/0xdc7d6b38/25755/IGN/267/268/[429]/none
[207]/0/208/26191/0xdf7c6058/22987/IGN/269/270//none
[208]/0/209/45813/0xdc7d7f70/22589/IGN/271/272/[549]/none
[209]/0/210/3985/0xdf7c7490/25366/IGN/273/274/[429]/none
[210]/0/211/39024/0xdc7d93a8/22189/IGN/275/276//none
[211]/0/212/58931/0xdf7c88c8/25479/IGN/277/278/[429]/none
[212]/0/213/31629/0xdc7da7e0/25505/IGN/279/280/[429]/none
[213]/0/214/36307/0xdf7c9d00/25691/IGN/281/282/[429]/none
[214]/0/215/54654/0xdc7dbc18/23454/IGN/283/284/[1126]/none
[215]/0/216/56991/0xdf7cb138/25651/IGN/285/286/[429]/none
[216]/0/217/13157/0xdc7dd050/25376/IGN/287/288/[429]/none
[217]/0/218/56210/0xdf7cc570/25456/IGN/289/290/[429]/none
[219]/0/220/16220/0xdf7cd9a8/24967/IGN/291/292/[1126]/none
[221]/0/222/5564/0xdf7cede0/25587/IGN/293/294/[429]/none
[222]/0/223/6252/0xdc7e0cf8/24005/IGN/295/296/[1126]/none
[223]/0/224/674/0xdf7d0218/24087/IGN/297/298/[1126]/none
[226]/0/227/34225/0xdc7e3568/25695/IGN/299/300/[429]/none
[227]/0/228/50250/0xdf7d2a88/22293/IGN/301/302/[1126]/none
[228]/0/229/2035/0xdc7e49a0/22631/NLEAF/303/306/[333]/none
[229]/0/230/42885/0xdf7d3ec0/24465/IGN/307/308/[1126]/none
[230]/0/231/56331/0xdc7e5dd8/22415/IGN/309/310//none
[231]/0/232/63376/0xdf7d52f8/23777/IGN/311/312/[1126]/none
[233]/0/234/6988/0xdf7d6730/22237/IGN/313/314/[549]/none
[234]/0/235/19888/0xdc7e8648/25331/IGN/315/316/[429]/none
[236]/0/237/11547/0xdc7e9a80/25761/IGN/317/318/[429]/none
[237]/0/238/29758/0xdf7d8fa0/22029/IGN/319/320/[1126]/none
[241]/0/242/2921/0xdf7db810/24993/IGN/321/322//none
[242]/0/243/3308/0xdc7ed728/23785/IGN/323/324/[1126]/none
[244]/0/245/41361/0xdc7eeb60/25739/IGN/325/326/[429]/none
[246]/0/247/39113/0xdc7eff98/25631/IGN/327/328/[429]/none
[248]/0/249/48404/0xdc7f13d0/23153/SINGLE_NODE/329/330//none
[249]/0/250/10325/0xdf7e08f0/25733/IGN/331/332/[429]/none
[251]/0/252/38421/0xdf7e1d28/23432/IGN/333/334/[1126]/none
[252]/0/253/26786/0xdc7f3c40/22365/IGN/335/336//none
[253]/0/254/25998/0xdf7e3160/23041/IGN/337/338/[1126]/none
[255]/0/256/30232/0xdf7e4598/22427/IGN/339/340//none
[257]/0/258/35051/0xdf7e59d0/22897/IGN/341/342/[1126]/none
[260]/0/261/62789/0xdc7f8d20/23580/IGN/343/344/[1126]/none
[262]/0/263/18420/0xdc7fa158/22603/NLEAF/345/346/[333]/none
[263]/0/264/49813/0xdf7e9678/25321/IGN/347/348/[429]/none
[264]/0/265/28824/0xdc7fb590/22607/IGN/349/350//none
[266]/0/267/22641/0xdc7fc9c8/22263/IGN/351/352//none
[267]/0/268/31020/0xdf7ebee8/25807/IGN/353/354/[429]/none
[268]/0/269/61295/0xdc7fde00/22281/IGN/355/356/[1126]/none
[269]/0/270/64886/0xdf7ed320/25743/IGN/357/358/[429]/none
[270]/0/271/30510/0xdc7ff238/25220/IGN/359/360/[1126]/none
[274]/0/275/46692/0xdc801aa8/25348/IGN/361/362/[429]/none
[278]/0/279/57449/0xdc804318/24963/IGN/363/364/[1126]/none
[279]/0/280/31255/0xdf7f3838/25613/IGN/365/366/[429]/none
[283]/0/284/63085/0xdf7f60a8/25791/IGN/367/368/[429]/none
[284]/0/285/52334/0xdc807fc0/25317/IGN/369/370/[429]/none
[285]/0/286/15414/0xdf7f74e0/22229/SINGLE_NODE/371/372//none
[286]/0/287/60364/0xdc8093f8/22609/IGN/373/374/[1126]/none
[287]/0/288/56766/0xdf7f8918/23781/IGN/375/376/[549]/none
[288]/0/289/10958/0xdc80a830/22255/IGN/377/378/[1126]/none
[293]/0/294/52270/0xdf7fc5c0/24259/IGN/379/380//none
[294]/0/295/25539/0xdc80e4d8/22269/IGN/381/382/[1126]/none
[295]/0/296/20563/0xdf7fd9f8/23965/IGN/383/384/[1126]/none
[296]/0/297/56200/0xdc80f910/22625/IGN/385/386//none
[301]/0/302/47342/0xdf8016a0/22207/NLEAF/387/388/[993]/none
[303]/0/304/12948/0xdf802ad8/22980/IGN/389/390/[1126]/none
[304]/0/305/6615/0xdc8149f0/22359/IGN/391/392//none
[306]/0/307/50369/0xdc815e28/25414/IGN/393/394/[429]/none
[307]/0/308/60233/0xdf805348/25579/IGN/395/396/[429]/none
[308]/0/309/47728/0xdc817260/22291/IGN/397/398//none
[310]/0/311/45365/0xdc818698/23843/IGN/399/400/[1126]/none
[313]/0/314/53023/0xdf808ff0/22309/IGN/401/402/[1126]/none
[314]/0/315/12306/0xdc81af08/25406/IGN/403/404/[429]/none
[315]/0/316/13758/0xdf80a428/22087/IGN/405/406/[1126]/none
[316]/0/317/42890/0xdc81c340/25647/IGN/407/408/[429]/none
[317]/0/318/967/0xdf80b860/22465/IGN/409/410//none
[319]/0/320/1484/0xdf80cc98/24113/IGN/411/412/[1126]/none
[323]/0/324/22398/0xdf80f508/22267/IGN/413/414//none
[324]/0/325/48541/0xdc821420/24391/IGN/415/416/[1126]/none
[327]/0/328/44022/0xdf811d78/24172/IGN/417/418/[1126]/none
[328]/0/329/29551/0xdc823c90/23626/IGN/419/420/[1126]/none
[331]/0/332/41718/0xdf8145e8/25763/IGN/421/422/[429]/none
[332]/0/333/46950/0xdc826500/24198/IGN/423/424/[1126]/none
[333]/0/334/12383/0xdf815a20/21947/LEAF/304/305//228
[335]/0/336/52495/0xdf816e58/25623/IGN/425/426/[429]/none
[336]/0/337/7686/0xdc828d70/25569/IGN/427/428/[429]/none
[337]/0/338/5517/0xdf818290/22972/SINGLE_NODE/429/430//none
[340]/0/341/63130/0xdc82b5e0/25703/IGN/431/432/[429]/none
[341]/0/342/13800/0xdf81ab00/22253/IGN/433/434/[1126]/none
[343]/0/344/40400/0xdf81bf38/25771/IGN/435/436/[429]/none
[345]/0/346/2766/0xdf81d370/22181/IGN/437/438//none
[346]/0/347/5979/0xdc82f288/25799/IGN/439/440/[429]/none
[349]/0/350/14553/0xdf81fbe0/22643/IGN/441/442//none
[351]/0/352/58307/0xdf821018/25021/IGN/443/444/[1126]/none
[354]/0/355/4873/0xdc834368/24652/IGN/445/446/[1126]/none
[355]/0/356/42316/0xdf823888/23271/IGN/447/448/[1126]/none
[356]/0/357/2005/0xdc8357a0/25811/IGN/449/450/[429]/none
[357]/0/358/30156/0xdf824cc0/25629/IGN/451/452/[429]/none
[359]/0/360/28770/0xdf8260f8/22387/IGN/453/454//none
[365]/0/366/55540/0xdf829da0/22223/SINGLE_NODE/455/456//none
[366]/0/367/9174/0xdc83bcb8/22647/IGN/457/458//none
[368]/0/369/9663/0xdc83d0f0/24929/IGN/459/460/[1126]/none
[370]/0/371/20903/0xdc83e528/25360/IGN/461/462/[429]/none
[371]/0/372/54285/0xdf82da48/25442/IGN/463/464/[429]/none
[372]/0/373/6710/0xdc83f960/23771/IGN/465/466/[1126]/none
[373]/0/374/37018/0xdf82ee80/25567/IGN/467/468/[429]/none
[375]/0/376/15229/0xdf8302b8/23456/IGN/469/470/[549]/none
[376]/0/377/58470/0xdc8421d0/25420/IGN/471/472/[429]/none
[377]/0/378/50505/0xdf8316f0/24135/IGN/473/474/[1126]/none
[380]/0/381/21465/0xdc844a40/25553/IGN/475/476/[429]/none
[383]/0/384/24601/0xdf835398/22617/IGN/477/478/[1126]/none
[384]/0/385/28104/0xdc8472b0/25777/IGN/479/480/[429]/none
[386]/0/387/64238/0xdc8486e8/25611/IGN/481/482/[429]/none
[387]/0/388/26680/0xdf837c08/22271/IGN/483/484//none
[390]/0/391/40342/0xdc84af58/25549/IGN/485/486/[429]/none
[392]/0/393/65412/0xdc84c390/25527/IGN/487/488/[429]/none
[393]/0/394/30187/0xdf83b8b0/24127/IGN/489/490/[1126]/none
[395]/0/396/42242/0xdf83cce8/22247/IGN/491/492//none
[397]/0/398/33574/0xdf83e120/23402/IGN/493/494/[549]/none
[398]/0/399/44546/0xdc850038/25729/IGN/495/496/[429]/none
[399]/0/400/33053/0xdf83f558/25372/IGN/497/498/[429]/none
[400]/0/401/62487/0xdc851470/23586/IGN/499/500/[549]/none
[402]/0/403/64374/0xdc8528a8/24585/IGN/501/502/[1126]/none
[404]/0/405/10583/0xdc853ce0/24188/IGN/503/504/[1126]/none
[406]/0/407/4624/0xdc855118/22333/IGN/505/506//none
[407]/0/408/51023/0xdf844638/25785/IGN/507/508/[429]/none
[410]/0/411/41112/0xdc857988/23670/IGN/509/510/[1126]/none
[413]/0/414/4731/0xdf8482e0/22699/LEAF/511/512//1030
[415]/0/416/42807/0xdf849718/23073/IGN/513/514/[1126]/none
[416]/0/417/6159/0xdc85b630/22417/IGN/515/516/[1126]/none
[417]/0/418/48258/0xdf84ab50/25625/IGN/517/518/[429]/none
[420]/0/421/60734/0xdc85dea0/22171/IGN/519/520/[549]/none
[422]/0/423/23571/0xdc85f2d8/22183/IGN/521/522//none
[423]/0/424/2576/0xdf84e7f8/25555/IGN/523/524/[429]/none
[425]/0/426/41714/0xdf84fc30/21979/LEAF/525/526//485
[427]/0/428/6860/0xdf851068/25621/IGN/527/528/[429]/none
[428]/0/429/4287/0xdc862f80/24245/IGN/529/530/[1126]/none
[429]/0/430/29293/0xdf8524a0/25301/LEAF/18/19//4
[430]/0/431/16152/0xdc8643b8/22085/IGN/531/532//none
[431]/0/432/62586/0xdf8538d8/23672/IGN/533/534/[549]/none
[432]/0/433/42224/0xdc8657f0/25561/IGN/535/536/[429]/none
[433]/0/434/3967/0xdf854d10/23358/IGN/537/538/[1126]/none
[434]/0/435/3206/0xdc866c28/25402/IGN/539/540/[429]/none
[436]/0/437/50814/0xdc868060/25450/IGN/541/542/[429]/none
[437]/0/438/41203/0xdf857580/22451/IGN/543/544/[549]/none
[438]/0/439/37196/0xdc869498/25341/IGN/545/546/[429]/none
[440]/0/441/24812/0xdc86a8d0/22329/IGN/547/548/[1126]/none
[441]/0/442/5053/0xdf859df0/25515/IGN/549/550/[429]/none
[442]/0/443/49748/0xdc86bd08/25511/IGN/551/552/[429]/none
[443]/0/444/9438/0xdf85b228/22319/IGN/553/554//none
[445]/0/446/34186/0xdf85c660/25557/IGN/555/556/[429]/none
[446]/0/447/46094/0xdc86e578/25585/SINGLE_NODE_NW/557/558//none
[447]/0/448/39677/0xdf85da98/25356/IGN/559/560/[429]/none
[448]/0/449/31278/0xdc86f9b0/24425/NLEAF/561/562/[1125]/none
[449]/0/450/6771/0xdf85eed0/22405/IGN/563/564/[549]/none
[450]/0/451/10200/0xdc870de8/24969/IGN/565/566/[1126]/none
[451]/0/452/43231/0xdf860308/22017/IGN/567/568//none
[453]/0/454/63527/0xdf861740/22493/IGN/569/570/[1126]/none
[455]/0/456/62090/0xdf862b78/25655/IGN/571/572/[429]/none
[456]/0/457/11522/0xdc874a90/23247/IGN/573/574/[1126]/none
[457]/0/458/40511/0xdf863fb0/25507/IGN/575/576/[429]/none
[459]/0/460/56836/0xdf8653e8/25428/IGN/577/578/[429]/none
[462]/0/463/24563/0xdc878738/23584/IGN/579/580//none
[463]/0/464/43142/0xdf867c58/23681/IGN/581/582/[1126]/none
[464]/0/465/10202/0xdc879b70/22217/IGN/583/584/[1126]/none
[465]/0/466/24277/0xdf869090/24322/IGN/585/586/[1126]/none
[466]/0/467/30867/0xdc87afa8/23464/IGN/587/588/[1126]/none
[467]/0/468/12185/0xdf86a4c8/22597/IGN/589/590//none
[468]/0/469/65261/0xdc87c3e0/23755/IGN/591/592/[1126]/none
[471]/0/472/22187/0xdf86cd38/31401/SINGLE_NODE/593/594//none
[473]/0/474/56402/0xdf86e170/23923/IGN/595/596/[1126]/none
[474]/0/475/6256/0xdc880088/23803/IGN/597/598/[1126]/none
[476]/0/477/1400/0xdc8814c0/25539/IGN/599/600/[429]/none
[477]/0/478/48776/0xdf8709e0/24117/IGN/601/602/[1126]/none
[478]/0/479/42209/0xdc8828f8/25495/IGN/603/604/[429]/none
[482]/0/483/54930/0xdc885168/25597/IGN/605/606/[429]/none
[483]/0/484/9924/0xdf874688/25765/IGN/607/608/[429]/none
[485]/0/486/38916/0xdf875ac0/25011/NLEAF/609/610/[425]/none
[486]/0/487/64182/0xdc8879d8/22285/NLEAF/611/612/[993]/none
[488]/0/489/37500/0xdc888e10/22455/IGN/613/614/[1126]/none
[490]/0/491/60366/0xdc88a248/26481/NLEAF/615/616/[1109]/none
[492]/0/493/12173/0xdc88b680/25523/IGN/617/618/[429]/none
[493]/0/494/18725/0xdf87aba0/23849/IGN/619/620//none
[494]/0/495/41411/0xdc88cab8/1501/SINGLE_NODE/621/622//none
[495]/0/496/14529/0xdf87bfd8/24961/IGN/623/624//none
[501]/0/502/34573/0xdf87fc80/22349/IGN/625/626/[1126]/none
[502]/0/503/43280/0xdc891b98/25757/IGN/627/628/[429]/none
[503]/0/504/17893/0xdf8810b8/24184/IGN/629/630//none
[504]/0/505/45648/0xdc892fd0/24704/IGN/631/632/[1126]/none
[505]/0/506/58393/0xdf8824f0/25545/IGN/633/634/[429]/none
[506]/0/507/57193/0xdc894408/25358/IGN/635/636/[429]/none
[507]/0/508/58469/0xdf883928/22431/IGN/637/638//none
[508]/0/509/37540/0xdc895840/25307/IGN/639/640/[429]/none
[509]/0/510/34695/0xdf884d60/25368/IGN/641/642/[429]/none
[511]/0/512/8793/0xdf886198/25667/IGN/643/644/[429]/none
[516]/0/517/19351/0xdc89a920/22655/IGN/645/646//none
[517]/0/518/65359/0xdf889e40/23480/IGN/647/648/[549]/none
[518]/0/519/47179/0xdc89bd58/23925/IGN/649/650/[1126]/none
[519]/0/520/7587/0xdf88b278/22195/IGN/651/652/[1126]/none
[520]/0/521/27465/0xdc89d190/25543/IGN/653/654/[429]/none
[522]/0/523/25387/0xdc89e5c8/25683/IGN/655/656/[429]/none
[525]/0/526/22891/0xdf88ef20/25705/IGN/657/658/[429]/none
[526]/0/527/44009/0xdc8a0e38/22345/IGN/659/660/[1126]/none
[528]/0/529/27389/0xdc8a2270/23474/IGN/661/662/[1126]/none
[529]/0/530/22591/0xdf891790/22361/IGN/663/664//none
[530]/0/531/3266/0xdc8a36a8/25685/IGN/665/666/[429]/none
[532]/0/533/9957/0xdc8a4ae0/22363/IGN/667/668/[1126]/none
[533]/0/534/18584/0xdf894000/23057/IGN/669/670/[1126]/none
[534]/0/535/60484/0xdc8a5f18/23026/IGN/671/672/[1126]/none
[535]/0/536/23022/0xdf895438/23559/IGN/673/674/[1126]/none
[536]/0/537/29515/0xdc8a7350/22517/IGN/675/676//none
[540]/0/541/790/0xdc8a9bc0/24429/IGN/677/678/[1126]/none
[541]/0/542/50532/0xdf8990e0/25573/IGN/679/680/[429]/none
[542]/0/543/63968/0xdc8aaff8/24423/IGN/681/682/[1126]/none
[543]/0/544/58537/0xdf89a518/25303/IGN/683/684/[429]/none
[544]/0/545/58442/0xdc8ac430/23576/IGN/685/686/[549]/none
[545]/0/546/4571/0xdf89b950/25781/IGN/687/688/[429]/none
[546]/0/547/15390/0xdc8ad868/24463/IGN/689/690/[1126]/none
[547]/0/548/43877/0xdf89cd88/25503/IGN/691/692/[429]/none
[549]/0/550/50928/0xdf89e1c0/22613/LEAF/70/71//50
[550]/0/551/2367/0xdc8b00d8/23404/IGN/693/694/[1126]/none
[551]/0/552/57593/0xdf89f5f8/22425/IGN/695/696//none
[552]/0/553/52695/0xdc8b1510/23592/IGN/697/698/[1126]/none
[553]/0/554/59366/0xdf8a0a30/25551/IGN/699/700/[429]/none
[554]/0/555/42007/0xdc8b2948/24535/IGN/701/702/[1126]/none
[555]/0/556/14080/0xdf8a1e68/24125/IGN/703/704/[1126]/none
[557]/0/558/19112/0xdf8a32a0/25595/IGN/705/706/[429]/none
[558]/0/559/29629/0xdc8b51b8/25452/IGN/707/708/[429]/none
[563]/0/564/22498/0xdf8a6f48/22583/IGN/709/710//none
[564]/0/565/47830/0xdc8b8e60/23508/IGN/711/712/[549]/none
[568]/0/569/43575/0xdc8bb6d0/25525/IGN/713/714/[429]/none
[569]/0/570/20157/0xdf8aabf0/25392/IGN/715/716/[429]/none
[571]/0/572/16616/0xdf8ac028/24951/IGN/717/718/[1126]/none
[572]/0/573/21266/0xdc8bdf40/25378/IGN/719/720/[429]/none
[573]/0/574/26283/0xdf8ad460/25519/IGN/721/722/[429]/none
[575]/0/576/52181/0xdf8ae898/25422/IGN/723/724/[429]/none
[577]/0/578/23325/0xdf8afcd0/25396/IGN/725/726/[429]/none
[578]/0/579/57294/0xdc8c1be8/25440/IGN/727/728/[429]/none
[579]/0/580/56788/0xdf8b1108/23574/IGN/729/730/[1126]/none
[581]/0/582/31176/0xdf8b2540/25823/IGN/731/732/[429]/none
[582]/0/583/12699/0xdc8c4458/22379/IGN/733/734/[549]/none
[583]/0/584/24351/0xdf8b3978/22177/IGN/735/736/[1126]/none
[584]/0/585/47609/0xdc8c5890/25719/IGN/737/738/[429]/none
[586]/0/587/33247/0xdc8c6cc8/22635/IGN/739/740//none
[587]/0/588/65244/0xdf8b61e8/22691/SINGLE_NODE/741/742//none
[588]/0/589/15851/0xdc8c8100/25747/IGN/743/744/[429]/none
[589]/0/590/21695/0xdf8b7620/25615/IGN/745/746/[429]/none
[590]/0/591/25236/0xdc8c9538/24233/IGN/747/748/[1126]/none
[591]/0/592/2527/0xdf8b8a58/23523/IGN/749/750/[549]/none
[592]/0/593/30632/0xdc8ca970/22459/IGN/751/752//none
[593]/0/594/38855/0xdf8b9e90/24296/IGN/753/754/[1126]/none
[594]/0/595/27310/0xdc8cbda8/25412/IGN/755/756/[429]/none
[596]/0/597/33147/0xdc8cd1e0/25529/IGN/757/758/[429]/none
[598]/0/599/35259/0xdc8ce618/23765/IGN/759/760/[1126]/none
[599]/0/600/58734/0xdf8bdb38/25410/IGN/761/762/[429]/none
[601]/0/602/48180/0xdf8bef70/23374/IGN/763/764/[1126]/none
[602]/0/603/1594/0xdc8d0e88/23149/SINGLE_NODE/765/766//none
[604]/0/605/37933/0xdc8d22c0/25727/IGN/767/768/[429]/none
[605]/0/606/34721/0xdf8c17e0/25593/IGN/769/770/[429]/none
[606]/0/607/32545/0xdc8d36f8/25661/IGN/771/772/[429]/none
[607]/0/608/11008/0xdf8c2c18/23845/IGN/773/774/[549]/none
[608]/0/609/32175/0xdc8d4b30/23067/IGN/775/776/[1126]/none
[609]/0/610/63937/0xdf8c4050/23679/IGN/777/778//none
[610]/0/611/39737/0xdc8d5f68/22355/IGN/779/780//none
[612]/0/613/53286/0xdc8d73a0/23472/IGN/781/782/[1126]/none
[619]/0/620/62657/0xdf8ca568/25483/IGN/783/784/[429]/none
[620]/0/621/38959/0xdc8dc480/24215/IGN/785/786/[1126]/none
[621]/0/622/43399/0xdf8cb9a0/25789/IGN/787/788/[429]/none
[624]/0/625/30691/0xdc8decf0/23875/IGN/789/790/[549]/none
[625]/0/626/62481/0xdf8ce210/23468/IGN/791/792/[1126]/none
[626]/0/627/15540/0xdc8e0128/24696/IGN/793/794/[1126]/none
[627]/0/628/45341/0xdf8cf648/25735/IGN/795/796/[429]/none
[629]/0/630/44001/0xdf8d0a80/22357/IGN/797/798/[1126]/none
[630]/0/631/61741/0xdc8e2998/25472/IGN/799/800/[429]/none
[632]/0/633/1602/0xdc8e3dd0/25603/IGN/801/802/[429]/none
[633]/0/634/39418/0xdf8d32f0/22205/IGN/803/804//none
[635]/0/636/35135/0xdf8d4728/23758/IGN/805/806/[1126]/none
[636]/0/637/53626/0xdc8e6640/25773/IGN/807/808/[429]/none
[637]/0/638/18916/0xdf8d5b60/22937/IGN/809/810/[1126]/none
[640]/0/641/40678/0xdc8e8eb0/22453/IGN/811/812//none
[641]/0/642/18143/0xdf8d83d0/23582/IGN/813/814/[1126]/none
[642]/0/643/32220/0xdc8ea2e8/22523/IGN/815/816/[1126]/none
[644]/0/645/40645/0xdc8eb720/22337/IGN/817/818/[1126]/none
[645]/0/646/18042/0xdf8dac40/22323/IGN/819/820//none
[646]/0/647/58579/0xdc8ecb58/23235/IGN/821/822/[1126]/none
[648]/0/649/29918/0xdc8edf90/22645/IGN/823/824/[1126]/none
[649]/0/650/40129/0xdf8dd4b0/25663/IGN/825/826/[429]/none
[650]/0/651/3205/0xdc8ef3c8/25426/IGN/827/828/[429]/none
[651]/0/652/58160/0xdf8de8e8/25671/IGN/829/830/[429]/none
[652]/0/653/42901/0xdc8f0800/22615/IGN/831/832//none
[654]/0/655/7732/0xdc8f1c38/25769/IGN/833/834/[429]/none
[656]/0/657/31700/0xdc8f3070/25448/IGN/835/836/[429]/none
[657]/0/658/19437/0xdf8e2590/25605/IGN/837/838/[429]/none
[661]/0/662/7463/0xdf8e4e00/22239/IGN/839/840//none
[663]/0/664/14145/0xdf8e6238/25591/IGN/841/842/[429]/none
[664]/0/665/59571/0xdc8f8150/22777/IGN/843/844/[1126]/none
[665]/0/666/20531/0xdf8e7670/25140/IGN/845/846/[1126]/none
[666]/0/667/39877/0xdc8f9588/22377/IGN/847/848/[1126]/none
[668]/0/669/9961/0xdc8fa9c0/25408/IGN/849/850/[429]/none
[670]/0/671/37236/0xdc8fbdf8/22385/IGN/851/852/[1126]/none
[671]/0/672/26355/0xdf8eb318/25795/IGN/853/854/[429]/none
[672]/0/673/24062/0xdc8fd230/25382/IGN/855/856/[429]/none
[673]/0/674/60469/0xdf8ec750/23763/IGN/857/858/[1126]/none
[674]/0/675/18144/0xdc8fe668/24754/IGN/859/860/[1126]/none
[676]/0/677/41976/0xdc8ffaa0/3473/IGN/861/862/[1126]/none
[677]/0/678/38276/0xdf8eefc0/22371/IGN/863/864//none
[678]/0/679/13010/0xdc900ed8/22469/IGN/865/866//none
[680]/0/681/16959/0xdc902310/23905/IGN/867/868/[549]/none
[682]/0/683/30989/0xdc903748/23069/IGN/869/870/[1126]/none
[683]/0/684/56317/0xdf8f2c68/24461/IGN/871/872/[1126]/none
[685]/0/686/65266/0xdf8f40a0/22213/IGN/873/874//none
[686]/0/687/4053/0xdc905fb8/24625/IGN/875/876/[1126]/none
[688]/0/689/20206/0xdc9073f0/21955/IGN/877/878/[1126]/none
[689]/0/690/18230/0xdf8f6910/24251/IGN/879/880//none
[691]/0/692/60201/0xdf8f7d48/23674/IGN/881/882/[1126]/none
[692]/0/693/27191/0xdc909c60/22627/IGN/883/884//none
[693]/0/694/10093/0xdf8f9180/24889/IGN/885/886/[1126]/none
[695]/0/696/36166/0xdf8fa5b8/24211/IGN/887/888/[1126]/none
[696]/0/697/19706/0xdc90c4d0/23307/IGN/889/890/[1126]/none
[698]/0/699/24174/0xdc90d908/23767/IGN/891/892//none
[701]/0/702/18231/0xdf8fe260/25444/IGN/893/894/[429]/none
[705]/0/706/6864/0xdf900ad0/23394/IGN/895/896/[1126]/none
[708]/0/709/46956/0xdc913e20/25234/IGN/897/898/[1126]/none
[709]/0/710/16404/0xdf903340/23636/IGN/899/900/[1126]/none
[710]/0/711/61105/0xdc915258/25335/IGN/901/902/[429]/none
[711]/0/712/59433/0xdf904778/23578/IGN/903/904/[1126]/none
[714]/0/715/33216/0xdc917ac8/25701/IGN/905/906/[429]/none
[716]/0/717/54780/0xdc918f00/23337/SINGLE_NODE/907/908//none
[717]/0/718/38356/0xdf908420/25374/IGN/909/910/[429]/none
[718]/0/719/19438/0xdc91a338/25665/IGN/911/912/[429]/none
[719]/0/720/17453/0xdf909858/24279/IGN/913/914//none
[721]/0/722/56764/0xdf90ac90/25689/IGN/915/916/[429]/none
[723]/0/724/38932/0xdf90c0c8/25673/IGN/917/918/[429]/none
Similar Messages
-
Log file sync top event during performance test - avg 36 ms
Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top with an average wait of 36 ms, which I feel is too high.
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 208,327 7,406 36 46.6 Commit
direct path write 646,833 3,604 6 22.7 User I/O
DB CPU 1,599 10.1
direct path read temp 1,321,596 619 0 3.9 User I/O
log buffer space 4,161 558 134 3.5 Configurat
Although the testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
I am not able to figure out why "log file sync" has such a slow response time.
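One hedged first step, assuming 10g or later (which matches the 11.1.0.7 banner further down), is to look at the wait-time distribution rather than just the 36 ms average:
{code}
-- Sketch only: the histogram shows whether the average is uniform slowness
-- or a few very long outliers dragging it up.
select event, wait_time_milli, wait_count
from   v$event_histogram
where  event in ('log file sync', 'log file parallel write')
order  by event, wait_time_milli;
{code}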
Below is the snapshot from the load profile.
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108127 16-May-13 20:15:22 105 6.5
End Snap: 108140 16-May-13 23:30:29 156 8.9
Elapsed: 195.11 (mins)
DB Time: 265.09 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,136M Std Block Size: 8K
Shared Pool Size: 1,120M 1,168M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 1.4 0.1 0.02 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 607,512.1 33,092.1
Logical reads: 3,900.4 212.5
Block changes: 1,381.4 75.3
Physical reads: 134.5 7.3
Physical writes: 134.0 7.3
User calls: 145.5 7.9
Parses: 24.6 1.3
Hard parses: 7.9 0.4
W/A MB processed: 915,418.7 49,864.2
Logons: 0.1 0.0
Executes: 85.2 4.6
Rollbacks: 0.0 0.0
Transactions: 18.4
Some of the top background wait events:
Background Wait Events DB/Inst: Snaps: 108127-108140
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 208,563 0 2,528 12 1.0 66.4
db file parallel write 4,264 0 785 184 0.0 20.6
Backup: sbtbackup 1 0 516 516177 0.0 13.6
control file parallel writ 4,436 0 97 22 0.0 2.6
log file sequential read 6,922 0 95 14 0.0 2.5
Log archive I/O 6,820 0 48 7 0.0 1.3
os thread startup 432 0 26 60 0.0 .7
Backup: sbtclose2 1 0 10 10094 0.0 .3
db file sequential read 2,585 0 8 3 0.0 .2
db file single write 560 0 3 6 0.0 .1
log file sync 28 0 1 53 0.0 .0
control file sequential re 36,326 0 1 0 0.2 .0
log file switch completion 4 0 1 207 0.0 .0
buffer busy waits 5 0 1 116 0.0 .0
LGWR wait for redo copy 924 0 1 1 0.0 .0
log file single write 56 0 1 9 0.0 .0
Backup: sbtinfo2 1 0 1 500 0.0 .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddrpt.sql):
{code}
Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
DB time: 0.78 1.36 74.36 0.02 0.07 250.00
CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
Parses: 7.28 24.55 237.23 0.19 1.34 605.26
Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
Transactions: 37.99 18.36 -51.67
First Second Diff
1st 2nd
Event                          Wait Class      Waits  Time(s) Avg Time(ms) %DB time   Event                           Wait Class      Waits  Time(s) Avg Time(ms) %DB time
SQL*Net more data from client  Network     2,133,486  1,270.7          0.6    61.24   log file sync                   Commit        208,355  7,407.6         35.6    46.57
CPU time                       N/A                      487.1          N/A    23.48   direct path write               User I/O      646,849  3,604.7          5.6    22.66
log file sync                  Commit         99,459    129.5          1.3     6.24   log file parallel write         System I/O    208,564  2,528.4         12.1    15.90
log file parallel write        System I/O    100,732    126.6          1.3     6.10   CPU time                        N/A                    1,599.3          N/A    10.06
SQL*Net more data to client    Network       451,810    103.1          0.2     4.97   db file parallel write          System I/O      4,264    784.7        184.0     4.93
-direct path write             User I/O      121,044     52.5          0.4     2.53   -SQL*Net more data from client  Network     7,407,435    279.7          0.0     1.76
-db file parallel write        System I/O        986     22.8         23.1     1.10   -SQL*Net more data to client    Network     2,714,916     64.6          0.0     0.41
{code}
*To sum it up:*
1. Why is the I/O response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer as the number of CPUs on the host is only 4.
{code}
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
{code}
Please let me know if you would like to see any other stats.
Edited by: Kunwar on May 18, 2013 2:20 PM
1. A snapshot interval of 3 hours always generates meaningless results.
Below are some details from the 1 hour interval AWR report.
Platform CPUs Cores Sockets Memory(GB)
HP-UX IA (64-bit) 4 4 3 31.95
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108129 16-May-13 20:45:32 140 8.0
End Snap: 108133 16-May-13 21:45:53 150 8.8
Elapsed: 60.35 (mins)
DB Time: 140.49 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,168M Std Block Size: 8K
Shared Pool Size: 1,120M 1,120M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 2.3 0.1 0.03 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 719,553.5 34,374.6
Logical reads: 4,017.4 191.9
Block changes: 1,521.1 72.7
Physical reads: 136.9 6.5
Physical writes: 158.3 7.6
User calls: 167.0 8.0
Parses: 25.8 1.2
Hard parses: 8.9 0.4
W/A MB processed: 406,220.0 19,406.0
Logons: 0.1 0.0
Executes: 88.4 4.2
Rollbacks: 0.0 0.0
Transactions: 20.9
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 73,761 6,740 91 80.0 Commit
log buffer space 3,581 541 151 6.4 Configurat
DB CPU 348 4.1
direct path write 238,962 241 1 2.9 User I/O
direct path read temp 487,874 174 0 2.1 User I/O
Background Wait Events DB/Inst: Snaps: 108129-108133
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 61,049 0 1,891 31 0.8 87.8
db file parallel write 1,590 0 251 158 0.0 11.6
control file parallel writ 1,372 0 56 41 0.0 2.6
log file sequential read 2,473 0 50 20 0.0 2.3
Log archive I/O 2,436 0 20 8 0.0 .9
os thread startup 135 0 8 60 0.0 .4
db file sequential read 668 0 4 6 0.0 .2
db file single write 200 0 2 9 0.0 .1
log file sync 8 0 1 152 0.0 .1
log file single write 20 0 0 21 0.0 .0
control file sequential re 11,218 0 0 0 0.1 .0
buffer busy waits 2 0 0 161 0.0 .0
direct path write 6 0 0 37 0.0 .0
LGWR wait for redo copy 380 0 0 0 0.0 .0
log buffer space 1 0 0 89 0.0 .0
latch: cache buffers lru c 3 0 0 1 0.0 .0
2. The log file sync is a result of commits --> you are committing too often, maybe even after every individual record.
Thanks for the explanation. +Actually my question is WHY is it so slow (avg wait of 91 ms)+
3. Your I/O subsystem hosting the online redo log files can be a limiting factor.
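On the committing-too-often point, a minimal sketch for quantifying the commit rate (diff two samples, or divide by the instance uptime, to get commits per second):
{code}
-- Instance-wide counters since startup; a very high rate usually means
-- row-by-row commits in the application.
select name, value
from   v$sysstat
where  name in ('user commits', 'user rollbacks');
{code}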
We don't know anything about your online redo log configuration
Below is my redo log configuration.
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE /oradata/fs01/PERFDB1/redo_1a.log NO
1 ONLINE /oradata/fs02/PERFDB1/redo_1b.log NO
2 ONLINE /oradata/fs01/PERFDB1/redo_2a.log NO
2 ONLINE /oradata/fs02/PERFDB1/redo_2b.log NO
3 ONLINE /oradata/fs01/PERFDB1/redo_3a.log NO
3 ONLINE /oradata/fs02/PERFDB1/redo_3b.log NO
6 rows selected.
04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
1 1 40689 524288000 2 YES INACTIVE 13026185905545 18-MAY-13 01:00
2 1 40690 524288000 2 YES INACTIVE 13026185931010 18-MAY-13 03:32
3 1 40691 524288000 2 NO CURRENT 13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM
-
Wait Events "log file parallel write" / "log file sync" during CREATE INDEX
Hello guys,
at my current project I am performing some performance tests for Oracle Data Guard. The question is "How does an LGWR SYNC transfer influence system performance?"
To get some performance values that I can compare, I just built up a normal Oracle database as a first step.
Now I am performing different tests like creating "large" indexes, massive parallel inserts/commits, etc. to get the benchmark.
My database is Oracle 10.2.0.4 with multiplexed redo log files on AIX.
I am creating an index on a "normal" table .. I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
After the index is built up (round about 9 GB) I run awrrpt.sql to get the AWR report.
And now take a look at these values from the AWR
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 10,019 .0 132 13 33.5
log file sync 293 .7 4 15 1.0
......
How can this be possible?
Regarding to the documentation
-> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
Wait Time: The wait time includes the writing of the log buffer and the post.
-> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
This was also my understanding .. the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
I could accept it if the values were close to each other (maybe around 1 second in total) .. but the difference between 132 seconds and 4 seconds is too noticeable.
Is the behavior of the log file sync/write different when performing a DDL like CREATE INDEX (maybe async .. like you can influence it with the initialization parameter COMMIT_WRITE??)?
Do you have any idea how these values come about?
Any thoughts/ideas are welcome.
Thanks and Regards
Surachart Opun (HunterX) wrote:
Thank you for the nice idea.
In this case, how can we reduce the "log file parallel write" and "log file sync" wait times?
CREATE INDEX with NOLOGGING
A NOLOGGING can help, can't it?
Yes - if you create the index nologging then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
Two points on nologging, though:
- it's "only" an index, so you could always rebuild it in the event of media corruption, but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations had completed.
- If the database, or that tablespace, is in +"force logging"+ mode, the nologging will not work (see the sketch after this list).
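For illustration, a hedged sketch of the NOLOGGING approach and the force-logging check from the list above; the object names are made up:
{code}
-- Check first whether FORCE LOGGING would silently defeat NOLOGGING.
select force_logging from v$database;

-- Hypothetical build: minimal redo is generated for the index creation itself.
create index big_tab_ix on big_tab (col1) nologging;

-- Optionally switch the attribute back so later maintenance is logged.
alter index big_tab_ix logging;
{code}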
Don't get too alarmed by the waits, though. My guess is that the +"log file sync"+ waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The +"log file parallel write"+ waits are caused by your create index, but they are happening to lgwr in the background which is running concurrently with your session - so your session is not (directly) affected by them, so may not be seeing a performance issue.
The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the logwriter includes their (little) writes with your next (large) write.
There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index you have LGWR writing (N copies of) the redo for the index and ARCH is reading and writing the completed log files caused by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
-
45 min long session of log file sync waits between 5000 and 20000 ms
45 min long log file sync waits between 5000 and 20000 ms
Encountering a rather unusual performance issue. Once every 4 hours I am seeing a 45 minute long log file sync wait event being reported using Spotlight on Oracle. For the first 30 minutes the event wait is for approx 5000 ms, followed by an increase to around 20000 ms for the next 15 min before rapidly dropping off, and normal operation continues for the next 3 hours and 15 minutes before the cycle repeats itself. The issue appears to maintain its schedule independently of restarting the database. Statspack reports do not show an increase in commits or executions or any new SQL running during the time the issue is occurring. We have two production environments both running identical applications with similar usage and we do not see the issue on the other system. I am leaning towards this being a hardware issue, but the 4 hour interval regardless of load on the database has me baffled. If it were a disk or controller cache issue one would expect to see the interval change with database load.
I cycle my redo logs and archive them just fine with log file switches every 15-20 minutes. Even during this unusually long and high session of log file sync waits I can see that the redo log files are still switching and are being archived.
The redo logs are on a RAID 10, we have 4 redo logs at 1 GB each.
I've run statspack reports on hourly intervals around this event:
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
log file sync 756,729 2,538,034 88.47
db file sequential read 208,851 153,276 5.34
log file parallel write 636,648 129,981 4.53
enqueue 810 21,423 .75
log file sequential read 65,540 14,480 .50
And here is a sample while not encountering the issue:
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
log file sync 953,037 195,513 53.43
log file parallel write 875,783 83,119 22.72
db file sequential read 221,815 63,944 17.48
log file sequential read 98,310 18,848 5.15
db file scattered read 67,584 2,427 .66
Yes I know I am already tight on I/O for my redo even during normal operations yet, my redo and archiving works just fine for 3 hours and 15 minutes (11 to 15 log file switches). These normal switches result in a log file sync wait of about 5000 ms for about 45 seconds while the 1GB redo log is being written and then archived.
I welcome any and all feedback.
Message was edited by:
acyoung1
Message was edited by:
acyoung1
Lee,
log_buffer = 1048576; we use a standard of 1 MB for our log buffer, and we've not altered the setting. It is my understanding that Oracle typically recommends that you not exceed 1 MB for the log_buffer, stating that a larger buffer normally does not increase performance.
I would agree that tuning the log_buffer parameter may be a place to consider; however, this issue lasts for ~45 minutes once every 4 hours regardless of database load. So for 3 hours and 15 minutes, during both peak usage and low usage, the buffer cache, redo log and archival processes run just fine.
A bit more information from statspack reports:
Here is a sample while the issue is occurring.
Snap Id Snap Time Sessions
Begin Snap: 661 24-Mar-06 12:45:08 87
End Snap: 671 24-Mar-06 13:41:29 87
Elapsed: 56.35 (mins)
Cache Sizes
~~~~~~~~~~~
db_block_buffers: 196608 log_buffer: 1048576
db_block_size: 8192 shared_pool_size: 67108864
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 615,141.44 2,780.83
Logical reads: 13,241.59 59.86
Block changes: 2,255.51 10.20
Physical reads: 144.56 0.65
Physical writes: 61.56 0.28
User calls: 1,318.50 5.96
Parses: 210.25 0.95
Hard parses: 8.31 0.04
Sorts: 16.97 0.08
Logons: 0.14 0.00
Executes: 574.32 2.60
Transactions: 221.21
% Blocks changed per Read: 17.03 Recursive Call %: 26.09
Rollback per transaction %: 0.03 Rows per Sort: 46.87
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 98.91 In-memory Sort %: 100.00
Library Hit %: 98.89 Soft Parse %: 96.05
Execute to Parse %: 63.39 Latch Hit %: 99.87
Parse CPU to Parse Elapsd %: 90.05 % Non-Parse CPU: 85.05
Shared Pool Statistics Begin End
Memory Usage %: 89.96 92.20
% SQL with executions>1: 76.39 67.76
% Memory for SQL w/exec>1: 72.53 63.71
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
log file sync 756,729 2,538,034 88.47
db file sequential read 208,851 153,276 5.34
log file parallel write 636,648 129,981 4.53
enqueue 810 21,423 .75
log file sequential read 65,540 14,480 .50
And this is a sample during "normal" operation.
Snap Id Snap Time Sessions
Begin Snap: 671 24-Mar-06 13:41:29 88
End Snap: 681 24-Mar-06 14:42:57 88
Elapsed: 61.47 (mins)
Cache Sizes
~~~~~~~~~~~
db_block_buffers: 196608 log_buffer: 1048576
db_block_size: 8192 shared_pool_size: 67108864
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 716,776.44 2,787.81
Logical reads: 13,154.06 51.16
Block changes: 2,627.16 10.22
Physical reads: 129.47 0.50
Physical writes: 67.97 0.26
User calls: 1,493.74 5.81
Parses: 243.45 0.95
Hard parses: 9.23 0.04
Sorts: 18.27 0.07
Logons: 0.16 0.00
Executes: 664.05 2.58
Transactions: 257.11
% Blocks changed per Read: 19.97 Recursive Call %: 25.87
Rollback per transaction %: 0.02 Rows per Sort: 46.85
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 99.02 In-memory Sort %: 100.00
Library Hit %: 98.95 Soft Parse %: 96.21
Execute to Parse %: 63.34 Latch Hit %: 99.90
Parse CPU to Parse Elapsd %: 96.60 % Non-Parse CPU: 84.06
Shared Pool Statistics Begin End
Memory Usage %: 92.20 88.73
% SQL with executions>1: 67.76 75.40
% Memory for SQL w/exec>1: 63.71 68.28
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
log file sync 953,037 195,513 53.43
log file parallel write 875,783 83,119 22.72
db file sequential read 221,815 63,944 17.48
log file sequential read 98,310 18,848 5.15
db file scattered read 67,584 2,427 .66
-
Hi all,
We are using Oracle 9.2.0.4 on SUSE Linux 10. In our statspack report one of the top timed events we are getting is log file sync. We are not using any storage. Is this a bug in 9.2.0.4, or what is the solution for it?
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
ai 1495142514 ai 1 9.2.0.4.0 NO ai-oracle
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 241 03-Sep-09 12:17:17 255 63.2
End Snap: 242 03-Sep-09 12:48:50 257 63.4
Elapsed: 31.55 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 1,280M Std Block Size: 8K
Shared Pool Size: 160M Log Buffer: 1,024K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 7,881.17 8,673.87
Logical reads: 14,016.10 15,425.86
Block changes: 44.55 49.04
Physical reads: 3,421.71 3,765.87
Physical writes: 8.97 9.88
User calls: 254.50 280.10
Parses: 27.08 29.81
Hard parses: 0.46 0.50
Sorts: 8.54 9.40
Logons: 0.12 0.13
Executes: 139.47 153.50
Transactions: 0.91
% Blocks changed per Read: 0.32 Recursive Call %: 42.75
Rollback per transaction %: 13.66 Rows per Sort: 120.84
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 75.59 In-memory Sort %: 99.99
Library Hit %: 99.55 Soft Parse %: 98.31
Execute to Parse %: 80.58 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 67.17 % Non-Parse CPU: 99.10
Shared Pool Statistics Begin End
Memory Usage %: 95.32 96.78
% SQL with executions>1: 74.91 74.37
% Memory for SQL w/exec>1: 68.59 69.14
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
log file sync 11,558 10,488 67.52
db file sequential read 611,828 3,214 20.69
control file parallel write 436 541 3.48
buffer busy waits 626 522 3.36
CPU time 395 2.54
Wait Events for DB: ai Instance: ai Snaps: 241 -242
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file sync 11,558 9,981 10,488 907 6.7
db file sequential read 611,828 0 3,214 5 355.7
control file parallel write 436 0 541 1241 0.3
buffer busy waits 626 518 522 834 0.4
control file sequential read 661 0 159 241 0.4
BFILE read 734 0 110 151 0.4
db file scattered read 595,462 0 81 0 346.2
enqueue 15 5 19 1266 0.0
latch free 109 22 1 8 0.1
db file parallel read 102 0 1 6 0.1
log file parallel write 1,498 1,497 1 0 0.9
BFILE get length 166 0 0 3 0.1
SQL*Net break/reset to clien 199 0 0 1 0.1
SQL*Net more data to client 5,139 0 0 0 3.0
BFILE open 76 0 0 0 0.0
row cache lock 5 0 0 0 0.0
BFILE internal seek 734 0 0 0 0.4
BFILE closure 76 0 0 0 0.0
db file parallel write 173 0 0 0 0.1
direct path read 18 0 0 0 0.0
direct path write 4 0 0 0 0.0
SQL*Net message from client 480,888 0 284,247 591 279.6
virtual circuit status 64 64 1,861 29072 0.0
wakeup time manager 59 59 1,757 29781 0.0
Your elapsed time is roughly 2000 seconds (31:55 rounded up) - and your log file sync time is roughly 10,000 - which is 5 seconds per second for the duration. Alternatively your session count is roughly 250 at start and end of snapshot - so if we assume that the number of sessions was steady for the duration, every session has suffered 40 seconds of log file sync in the interval. You've recorded roughly 1,500 transactions in the interval (0.91 per second, of which about 13% were rollbacks) - so your log file sync time has averaged more than 6.5 seconds per commit.
Whichever way you look at it, this suggests that either the log file sync figures are wrong, or you have had a temporary hardware failure. Given that you've had a few buffer busy waits and control file write waits of about 900 ms each, the hardware failure seems likely.
Check log file parallel write times to see if this helps to confirm the hypothesis. (Unfortunately some platforms don't report log file parallel write times correctly for earlier versions of 9.2 - so this may not help.)
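A hedged way to make that comparison from the instance-wide statistics (TIME_WAITED is in centiseconds in V$SYSTEM_EVENT, hence the *10 conversion to milliseconds):
{code}
-- Sketch only: averages since instance startup, not for the problem interval.
select event,
       total_waits,
       time_waited,
       round(time_waited * 10 / nullif(total_waits, 0), 1) as avg_ms
from   v$system_event
where  event in ('log file sync', 'log file parallel write');
{code}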
You also have 15 enqueue waits averaging 1.2 seconds - check the enqueue stats section of the report to see which enqueue this was: if it was (e.g. CF - control file) then this also helps to confirm the hardware hypothesis.
It's possible that you had a couple of hardware resets or something of that sort in the interval that stopped your system quite dramatically for a minute or two.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
-
Hi all, I am using Oracle 10gR2 on Solaris 10.
I did a SQL Trace and it came up with the following result:
Misses in library cache during parse: 591
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 241 0.06 0.61
KJC: Wait for msg sends to complete 2 0.00 0.00
SQL*Net message to client 1768 0.00 0.00
SQL*Net message from client 1768 0.14 7.94
row cache lock 7 0.00 0.00
gc cr grant 2-way 1 0.00 0.00
db file sequential read 67 0.87 6.73
gc current grant 2-way 19 0.00 0.01
gc current grant busy 58 0.01 0.08
log file sync 3055 0.98 2592.00
gc current block 2-way 14 0.00 0.02
gc cr block 2-way 77 0.00 0.06
log file switch completion 12 0.98 8.80
gc current request 5 1.23 6.15
gc current block lost 1 0.45 0.45
lock deadlock retry 1 0.00 0.00
latch free 1 0.00 0.00
enq: TM - contention 1 0.00 0.00
gc cr request 5 1.23 6.14
gc cr block lost 1 0.31 0.31
cr request retry 1 0.00 0.00
latch: session allocation 1 0.00 0.00
gc buffer busy 2 0.98 1.96
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 237 0.08 0.05 0 0 38 0
Execute 2184 0.51 6.17 1 200 585 364
Fetch 1884 0.18 6.96 27 3234 195 2127
total 4305 0.77 13.19 28 3434 818 2491
Misses in library cache during parse: 21
Misses in library cache during execute: 19
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 21 0.00 0.01
row cache lock 248 0.01 0.08
gc cr grant 2-way 3 0.00 0.00
db file sequential read 28 1.01 3.22
gc current grant busy 8 0.00 0.00
gc current block 2-way 5 0.00 0.00
gc cr block 2-way 1 0.00 0.00
log file switch completion 4 0.98 3.55
gc current request 1 1.22 1.22
latch: KCL gc element parent latch 1 0.00 0.00
latch: redo allocation 1 0.00 0.00
gc current block busy 1 0.64 0.64
1181 user SQL statements in session.
314 internal SQL statements in session.
1495 SQL statements in session.
There are a lot of log file sync waits. There were a lot of INSERTs in the SQL but I did not find any commits. For example:
INSERT INTO CM_WORKFLOW_AUDIT (AUDIT_TRAIL_ID, CASE_HISTORY_ID,USER_ID,
GROUP_ID,ACTION_ID,DESCRIPTION,DATE_TIME)
VALUES
('080504001809',2154515,19,2,23,'Ticket[2157817] added to Super Ticket',
TO_DATE('04-05-2008 13:14:38','dd-mm-yyyy HH24:MI:SS'))
But there is no commit at the end; there are a lot of INSERTs like this one but no commit at the end of them. So log file sync can't be waiting to flush the buffer into a redo log (well, that is what I think at least). Can someone please tell me what is causing the log file sync wait? By the way my log_buffer is 12 MB.
Regards.....
Hi,
The number of commits can be retrieved from v$sysstat and v$sesstat.
There are no commit statements in any trace file,
you need to look at lines starting with XCTEND in the raw trace data.
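As a hedged illustration of the v$sesstat route mentioned above (the SID value 123 is just a placeholder for the traced session):
{code}
-- Commits and rollbacks recorded for one session.
select sn.name, st.value
from   v$sesstat  st,
       v$statname sn
where  sn.statistic# = st.statistic#
and    st.sid = 123
and    sn.name in ('user commits', 'user rollbacks');
{code}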
Also the size of the log_buffer has nothing to do with the log file sync event, and setting a big log_buffer is the typical 'more is better' tuning which doesn't help.
You need to investigate the speed of the devices holding the online redo logs; do NOT locate online redo logs on RAID-5 devices.
Also posting 'Any one ??' when you are not getting a response immediately shows you don't seem to understand this is a volunteer forum.
Sybrand Bakker
Senior Oracle DBA -
High log file sync waits on DG environment.
Hi All,
Experiencing a high number of "log file sync" waits on primary after AIX OS upgrade to 6.1 and
changed storage from EMC DMX storage to EMC VMAX storage on the primary. The SAs say EMC checked out the
storage and it is showing faster response time than the old DMX storage.
We have not made any app changes and had been running fine for years on 10.2.0.3 just before the upgrade.
Research:
Every time the primary database has to write redo from the log buffer to the online redo log files, the user session
waits on "log file sync" wait event while waiting for LGWR to post it back to confirm all redo changes are safely
on disk, however when the primary database also has a standby DB and the log shipping is using "LGWR SYNC AFFIRM"
means that user sessions not only have to wait for the local write to the online redologs, but also wait for the
write to the SRL on the standby DB, so every delay in getting a complete write response from the standby will
be seen in the primary as an "log file sync" wait event even if the local write to the ORL has completed already
After reviewing AWRs from the primary (DB1ABRN_regy_AWR_20110802_1400_3.html) there is not much to pinpoint except:
i) DG is configured to use 'LGWR SYNC AFFIRM' (the customer can't switch to standby "ARCH SYNC NOAFFIRM" for critical application support).
ii) Upgraded the OS to 6.1 and changed the storage on the primary *** the standby DB is still using the slower storage.
In DG there are several components like transport network, IO on the standby etc that can feed back to primary "log file sync" wait events that are seen.
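One hedged way to start apportioning that time from the primary side is to compare the commit wait against the local write and the redo transport events; the transport event names vary by version, so the LIKE patterns below are deliberately loose, and V$DATAGUARD_STATS on the standby ('transport lag', 'apply lag') gives the other half of the picture:
{code}
-- Sketch only, run on the primary; validate the event names on your version.
select event, total_waits, round(time_waited_micro / 1000000) as seconds_waited
from   v$system_event
where  event in ('log file sync', 'log file parallel write')
   or  event like 'LNS%'
   or  event like 'LGWR-LNS%'
order  by time_waited_micro desc;
{code}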
Question we have is:
What is the best way to trace/query information from the standby side to identify its contribution to the high log file sync wait on the primary?
There is an Oracle support note on this:
WAITEVENT: "log file sync" Reference Note [ID 34592.1]
While it does not address tracing, it does have a Data Guard section.
Also this Oracle support note may help :
Troubleshooting I/O-related waits [ID 223117.1]
Best Regards
mseberg -
Performance Issue: Wait event "log file sync" and "Execute to Parse %"
In one of our test environments users are complaining about slow response.
In the statspack report the following are the top-5 wait events:
Event Waits Time (cs) Wt Time
log file parallel write 1,046 988 37.71
log file sync 775 774 29.54
db file scattered read 4,946 248 9.47
db file parallel write 66 248 9.47
control file parallel write 188 152 5.80
And after running the same application 4 times, we are getting Execute to Parse % = 0.10. Cursor sharing is forced and query rewrite is enabled.
When I view v$sql, the following command is parsed frequently:
EXECUTIONS PARSE_CALLS
SQL_TEXT
93380 93380
select SEQ_ORDO_PRC.nextval from DUAL
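For context, a hedged instance-level cross-check of the parse-versus-execute picture that statspack is reporting:
{code}
-- Counters since startup; diff two samples to match a statspack interval.
select name, value
from   v$sysstat
where  name in ('parse count (total)', 'parse count (hard)', 'execute count');
{code}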
Please suggest what the method should be to troubleshoot this, and whether I need to check some more information.
Regards,
Sudhanshu Bhandari
Well, of course, you probably can't eliminate this sort of thing entirely: a setup such as yours is inevitably a compromise. What you can do is make sure your log buffer is a good size (say 10MB or so); that your redo logs are large (at least 100MB each, and preferably large enough to hold one hour or so of redo produced at the busiest time for your database without filling up); and finally set ARCHIVE_LAG_TARGET to something like 1800 seconds or more to ensure a regular, routine, predictable log switch.
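As a hedged sketch of that advice (the values and scope below are examples, not recommendations):
{code}
-- How often are the logs switching, hour by hour, over the last day?
select trunc(first_time, 'HH24') as switch_hour, count(*) as log_switches
from   v$log_history
where  first_time > sysdate - 1
group  by trunc(first_time, 'HH24')
order  by 1;

-- Force a routine, predictable switch roughly every 30 minutes (scope=both assumes an spfile).
alter system set archive_lag_target = 1800 scope = both;
{code}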
It won't cure every ill, but that sort of setup often means the redo subsystem ceases to be a regular driver of foreground waits. -
Statspack: High log file sync timeouts and waits
Hi all,
Please see an extract from our statpack report:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
log file sync 349,713 215,674 74.13
db file sequential read 16,955,622 31,342 10.77
CPU time 21,787 7.49
direct path read (lob) 92,762 8,910 3.06
db file scattered read 4,335,034 4,439 1.53
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file sync 349,713 150,785 215,674 617 1.8
db file sequential read 16,955,622 0 31,342 2 85.9
I hope the above is readable. I'm concerned with the very high number of waits and timeouts, particularly around the log file sync event. From reading around I suspect that the disk our redo log sits on isn't fast enough.
1) Is this conclusion correct, and are these timeouts excessively high (70% seems high...)?
2) I see high waits on almost every other event (but not timeouts); does this point towards an incorrect database setup (given our very high load of 160 executes per second)?
Any help would be much appreciated.
Jonathan
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
log file sync 349,713 215,674 74.13
db file sequential read 16,955,622 31,342 10.77
CPU time 21,787 7.49
direct path read (lob) 92,762 8,910 3.06
db file scattered read 4,335,034 4,439 1.53
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file sync 349,713 150,785 215,674 617 1.8
db file sequential read 16,955,622 0 31,342 2 85.9
What time frame does this report cover?
It looks like your disk storage can't keep up with the volume of I/O requests from your database.
The first few things to look at: what are the I/O-intensive SQL statements in your database? Are these SQLs doing unnecessary full table scans?
Find out the hot blocks and the objects they belong to.
Check the v$session_wait view.
Is there any other suspicious activity going on on your server, such as programs other than Oracle doing heavy I/O? Are there any core dumps occurring?
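(A hedged sketch of the first two checks above; the ROWNUM cut-off and the SQL*Net filter are illustrative only:)
-- Top SQL by physical reads - candidates for unnecessary full table scans
SELECT *
FROM  (SELECT disk_reads, buffer_gets, executions, sql_text
       FROM   v$sql
       ORDER  BY disk_reads DESC)
WHERE  rownum <= 10;
-- What sessions are waiting on right now (P1/P2 give file# and block# for I/O waits)
SELECT sid, event, p1text, p1, p2text, p2, seconds_in_wait
FROM   v$session_wait
WHERE  event NOT LIKE 'SQL*Net%'
ORDER  BY seconds_in_wait DESC;
Hot file#/block# combinations from the waits can then be mapped back to their objects via dba_extents. -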
Metalink note 34592.1 has been mentioned several times in this forum as well as elsewhere, notably here
http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/
The question I have relates to the stated breakdown of 'log file sync' wait event:
1. Wakeup LGWR if idle
2. LGWR gathers the redo to be written and issue the I/O
3. Time for the log write I/O to complete
4. LGWR I/O post processing
5. LGWR posting the foreground/user session that the write has completed
6. Foreground/user session wakeup
Since the note says that the system 'redo write' statistic includes steps 2 and 3, the suggestion is that the difference between it and 'log file sync' is due to CPU related work on steps 1, 4, 5 and 6 (or on waiting on the CPU run queue).
Christian's article, quoted above, theorises about 'CPU storms' and the Metalink note also suggests that steps 5 and 6 could be costly.
However, my understanding of how LGWR works is that if it is already in the process of writing out one set of blocks (let us say associated with a commit of transaction 'X' amongst others) at the time another transaction (call it transaction 'Y') commits, then LGWR will not commence the write of the commit for transaction 'Y' until the I/Os associated with the commit of transaction 'X' complete.
So, if I have an average 'redo write' time of, say, 12ms and a 'log file sync' time of, say, 34ms (yes, of course these are real numbers :-)) then I would have thought that this 22ms delay was due at least partly to LGWR 'falling behind' in its work.
Nonetheless, it seems to me that this extra delay could only be a maximum of 12ms, so this still leaves 10ms (34 - 12 - 12) that can only be accounted for by CPU usage.
Clearly, my analysis contains a lot of conjecture, hence this note.
Can anybody point me in the direction of some facts?
Tony Hasler wrote:
Metalink note 34592.1 has been mentioned several times in this forum as well as elsewhere, notably here
http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/
The question I have relates to the stated breakdown of 'log file sync' wait event:
1. Wakeup LGWR if idle
2. LGWR gathers the redo to be written and issue the I/O
3. Time for the log write I/O to complete
4. LGWR I/O post processing
5. LGWR posting the foreground/user session that the write has completed
6. Foreground/user session wakeup
Since the note says that the system 'redo write' statistic includes steps 2 and 3, the suggestion is that the difference between it and 'log file sync' is due to CPU related work on steps 1, 4, 5 and 6 (or on waiting on the CPU run queue).
Christian's article, quoted above, theorises about 'CPU storms' and the Metalink note also suggests that steps 5 and 6 could be costly.
However, my understanding of how LGWR works is that if it is already in the process of writing out one set of blocks (let us say associated with a commit of transaction 'X' amongst others) at the time another transaction (call it transaction 'Y') commits, then LGWR will not commence the write of the commit for transaction 'Y' until the I/Os associated with the commit of transaction 'X' complete.
So, if I have an average 'redo write' time of, say, 12ms and a 'log file sync' time of, say, 34ms (yes, of course these are real numbers :-)) then I would have thought that this 22ms delay was due at least partly to LGWR 'falling behind' in its work.
Nonetheless, it seems to me that this extra delay could only be a maximum of 12ms, so this still leaves 10ms (34 - 12 - 12) that can only be accounted for by CPU usage.
Clearly, my analysis contains a lot of conjecture, hence this note.
Can anybody point me in the direction of some facts?
It depends on what you mean by facts - presumably only the people who wrote the code know what really happens, the rest of us have to guess.
You're right about point 1 in the MOS note: it should include "or wait for current lgwr write and posts to complete".
This means, of course, that your session could see its "log file sync" taking twice the "redo write time" because it posted lgwr just after lgwr has started to write - so you have to wait two write and post cycles. Generally the statistical effects will reduce this extreme case.
You've been pointed to the two best bits of advice on the internet: As Kevin points out, if you have lgwr posting a lot of processes in one go it may stall as they wake up, so the batch of waiting processes has to wait extra time; and as Riyaj points out - there's always dtrace (et al.) if you want to see what's really happening. (Tanel has some similar notes, I think, on LFS).
If you're stuck with Oracle diagnostics only then:
redo size / redo synch writes for sessions will tell you the typical "commit size"
(redo size + redo wastage) / redo writes for lgwr will tell you the typical redo write size
If you have a significant number of small-process "commit sizes" per write (more than the CPU count, say) then you may be looking at Kevin's storm.
Watch out for a small number of sessions with large commit sizes running in parallel with a large number of sessions with small commit sizes - this could make all the "small" processes run at the speed of the "large" processes.
It's always worth looking at the event histogram for the critical wait events to see if their patterns offer any insights.
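As a rough, instance-wide approximation of those two ratios (the per-session and per-lgwr figures would need v$sesstat), plus the histogram check:
-- Approximate commit size and redo write size from instance-wide statistics
SELECT MAX(DECODE(name, 'redo size', value)) /
       NULLIF(MAX(DECODE(name, 'redo synch writes', value)), 0)   AS bytes_per_commit,
       (MAX(DECODE(name, 'redo size', value)) +
        MAX(DECODE(name, 'redo wastage', value))) /
       NULLIF(MAX(DECODE(name, 'redo writes', value)), 0)         AS bytes_per_redo_write
FROM   v$sysstat
WHERE  name IN ('redo size', 'redo wastage', 'redo synch writes', 'redo writes');
-- Wait-time distribution for the two critical events
SELECT event, wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event IN ('log file sync', 'log file parallel write')
ORDER  BY event, wait_time_milli;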
Regards
Jonathan Lewis -
10.2.0.2, AIX 5.3 64-bit, archivelog mode.
I'm going to attempt to describe the system first and then outline the issue: The database is about 1Gb in size, of which only about 400Mb is application data. There is only one table in the schema that is very active, with all transactions inserting and/or updating a row to log the user activity. The rest of the tables are used primarily for reads by the users and periodically updated by the application administrator with application code. There's about 1.2G of archive logs generated per day, from three 50MB redo logs all on the same filesystem.
The problem: We randomly have issues with users being kicked out of the application or hung up for a period of time. This application is used at a remote site, and many times we can attribute the users' issues to network delays or problems with a terminal server they are logging into. Today however they called and I noticed an abnormally high amount of 'log file sync' waits.
I asked the application admin if there could have been more activity during that time frame and more frequent commits than normal, but he says there was not. My next thought was that there might be an issue with the I/O subsystem that the logs are on. So I went to our AIX admin to find out the activity of that filesystem during that time frame. She had an nmon report generated that shows the RAID-1 disk group peak activity during that time was only 10%.
Now I took two AWR reports and compared some of the metrics to see if there was indeed the same amount of activity, and it does look like the load was the same. With the same amount of activity and commits during both time periods, wouldn't that point to time being spent waiting on writes to the disk that the redo logs are on? If so, why wouldn't the nmon report show a higher percentage of disk activity?
I can provide more values from the awr reports if needed.
AWR report 1:
per sec per trx
Redo size: 31,226.81 2,334.25
Logical reads: 646.11 48.30
Block changes: 190.80 14.26
Physical reads: 0.65 0.05
Physical writes: 3.19 0.24
User calls: 69.61 5.20
Parses: 34.34 2.57
Hard parses: 19.45 1.45
Sorts: 14.36 1.07
Logons: 0.01 0.00
Executes: 36.49 2.73
Transactions: 13.38
AWR report 2:
Redo size: 33,639.71 2,347.93
Logical reads: 697.58 48.69
Block changes: 215.83 15.06
Physical reads: 0.86 0.06
Physical writes: 3.26 0.23
User calls: 71.06 4.96
Parses: 36.78 2.57
Hard parses: 21.03 1.47
Sorts: 15.85 1.11
Logons: 0.01 0.00
Executes: 39.53 2.76
Transactions: 14.33
AWR report 1:
Total Per sec Per Trx
redo blocks written 252,046 70.52 5.27
redo buffer allocation retries 7 0.00 0.00
redo entries 167,349 46.82 3.50
redo log space requests 7 0.00 0.00
redo log space wait time 49 0.01 0.00
redo ordering marks 2,765 0.77 0.06
redo size 111,612,156 31,226.81 2,334.25
redo subscn max counts 5,443 1.52 0.11
redo synch time 47,910 13.40 1.00
redo synch writes 64,433 18.03 1.35
redo wastage 13,535,756 3,787.03 283.09
redo write time 27,642 7.73 0.58
redo writer latching time 2 0.00 0.00
redo writes 48,507 13.57 1.01
user commits 47,815 13.38 1.00
user rollbacks 0 0.00 0.00
AWR report 2:
redo blocks written 273,363 76.17 5.32
redo buffer allocation retries 6 0.00 0.00
redo entries 179,992 50.15 3.50
redo log space requests 6 0.00 0.00
redo log space wait time 18 0.01 0.00
redo ordering marks 2,997 0.84 0.06
redo size 120,725,932 33,639.71 2,347.93
redo subscn max counts 5,816 1.62 0.11
redo synch time 12,977 3.62 0.25
redo synch writes 66,985 18.67 1.30
redo wastage 14,665,132 4,086.37 285.21
redo write time 11,358 3.16 0.22
redo writer latching time 6 0.00 0.00
redo writes 52,521 14.63 1.02
user commits 51,418 14.33 1.00
user rollbacks 0 0.00 0.00
Edited by: PktAces on Oct 1, 2008 1:45 PM
Mr Lewis,
Here are the results from the histogram query; the two sets of values were gathered about 15 minutes apart, during a slower-than-normal activity period.
First sample (EVENT#, EVENT, WAIT_TIME_MILLI, WAIT_COUNT):
105 log file parallel write 1 714394
105 log file parallel write 2 289538
105 log file parallel write 4 279550
105 log file parallel write 8 58805
105 log file parallel write 16 28132
105 log file parallel write 32 10851
105 log file parallel write 64 3833
105 log file parallel write 128 1126
105 log file parallel write 256 316
105 log file parallel write 512 192
105 log file parallel write 1024 78
105 log file parallel write 2048 49
105 log file parallel write 4096 31
105 log file parallel write 8192 35
105 log file parallel write 16384 41
105 log file parallel write 32768 9
105 log file parallel write 65536 1
Second sample, about 15 minutes later:
105 log file parallel write 1 722787
105 log file parallel write 2 295607
105 log file parallel write 4 284524
105 log file parallel write 8 59671
105 log file parallel write 16 28412
105 log file parallel write 32 10976
105 log file parallel write 64 3850
105 log file parallel write 128 1131
105 log file parallel write 256 316
105 log file parallel write 512 192
105 log file parallel write 1024 78
105 log file parallel write 2048 49
105 log file parallel write 4096 31
105 log file parallel write 8192 35
105 log file parallel write 16384 41
105 log file parallel write 32768 9
105 log file parallel write 65536 1 -
'log file sync' versus 'log file parallel write'
I have been asked to run an artificial test that performs a large number of small insert-only transactions with a high degree (200) of parallelism. The COMMITs were not inside a PL/SQL loop, so a 'log file sync' (LFS) event occurred on each COMMIT. I have measured the average 'log file parallel write' (LFPW) time by running the following PL/SQL queries at the beginning and end of a 10-second period:
SELECT time_waited,
total_waits
INTO wait_start_lgwr,
wait_start_lgwr_c
FROM v$system_event e
WHERE event LIKE 'log%parallel%';
SELECT time_waited,
total_waits
INTO wait_end_lgwr,
wait_end_lgwr_c
FROM v$system_event e
WHERE event LIKE 'log%parallel%';
I took the difference in TIME_WAITED and divided it by the difference in TOTAL_WAITS.
I did the same thing for LFS.
What I expected was that the LFS time would be just over 50% more than the LFPW time: when the thread commits it has to wait for the previous LFPW to complete (on average half way through) and then for its own.
Now I know there is a lot of CPU related stuff that goes on in LGWR but I 'reniced' it to a higher priority and could observe that it was then spending 90% of its time in LFPW, 10% ON CPU and no time idle. Total system CPU time averaged only 25% on this 64 'processor' machine.
What I saw was that the LFS time was substantially more than the LFPW time. For example, on one test LFS was 18.07ms and LFPW was 6.56ms.
When I divided the number of bytes written each time by the average 'commit size' it seems that LGWR is writing out data for only about one third of the average number of transactions in LFS state (rather than the two thirds that I would have expected). When the COMMIT was changed to COMMIT WORK NOWAIT the size of each LFPW increased substantially.
These observations are at odds with my understanding of how LGWR works. My understanding is that when LGWR completes one LFPW it begins a new one with the entire contents of the log buffer at that time.
Can anybody tell me what I am missing?
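For reference, a self-contained sketch of the measurement described above as an anonymous PL/SQL block; the 10-second sleep via DBMS_LOCK (which needs EXECUTE privilege) and the DBMS_OUTPUT reporting are assumptions, not the original harness:
DECLARE
  t1 NUMBER; c1 NUMBER; s1 NUMBER; n1 NUMBER;
  t2 NUMBER; c2 NUMBER; s2 NUMBER; n2 NUMBER;
BEGIN
  SELECT time_waited, total_waits INTO t1, c1
  FROM   v$system_event WHERE event = 'log file parallel write';
  SELECT time_waited, total_waits INTO s1, n1
  FROM   v$system_event WHERE event = 'log file sync';
  DBMS_LOCK.SLEEP(10);   -- measurement window
  SELECT time_waited, total_waits INTO t2, c2
  FROM   v$system_event WHERE event = 'log file parallel write';
  SELECT time_waited, total_waits INTO s2, n2
  FROM   v$system_event WHERE event = 'log file sync';
  -- time_waited is in centiseconds, so multiply by 10 for milliseconds
  IF c2 > c1 THEN
    DBMS_OUTPUT.PUT_LINE('avg LFPW ms: ' || ROUND(10 * (t2 - t1) / (c2 - c1), 2));
  END IF;
  IF n2 > n1 THEN
    DBMS_OUTPUT.PUT_LINE('avg LFS  ms: ' || ROUND(10 * (s2 - s1) / (n2 - n1), 2));
  END IF;
END;
/
Run it while the insert workload is active; the two averages correspond to the LFPW and LFS figures quoted above.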
P.S. Same results in database versions 10.2 Sun M5000 and 11.2 HP G7s. -
We have just deployed a 4-node RAC cluster on 10gR2. We force a log switch every 5 minutes to ensure our Data Guard standby site is relatively up to date; we use ARCH to ship the logs. We are writing to a very fast HP XP 12000 with massive amounts of write cache, so we never actually write straight to disk. However, every time we do a log switch and archive the log, we see a massive spike in the log file sync event. This is a real-time billing system, so we monitor transaction response times in ms. Our response time for a transaction can go from 8ms to around 500ms.
I can't understand why this is happening: not only are our disks fast, but we are also using asynch I/O and ASM. Surely with asynch I/O you should never wait for a write to complete.
The log file sync event happens when a client waits for LGWR to finish writing to the log file after the client issues a commit. The way to reduce the 'log file sync' waits is to increase the speed of the LGWR process, or not to commit that often.
You've described your disk system as very fast - what is the amount of data you write on every log switch? How does the performance of this write relate to your disk system tests? What block size did you use when testing the disk system? As far as I remember, LGWR uses the OS block size, not the DB block size, to write data to disk. Try to experiment on your test system - put your log files on a virtual disk created in RAM and run the test case - do you still see the delays?
With such restrictions on the transaction time you may want to look at the Oracle TimesTen database (http://www.oracle.com/database/timesten.html)
Since you've mentioned 10gR2, you could probably use the new feature - asynchronous commit - in which case your transaction will not wait for the LGWR process. Be aware that using a NOWAIT commit opens a small possibility of data loss - the doc describes it quite clearly.
http://download-east.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_sqlproc.htm#CIHEDGBF
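A brief sketch of the 10gR2 asynchronous commit syntax referred to above (weigh the small data-loss window described in the doc):
-- Per-commit: return immediately and let LGWR write the redo in batches
COMMIT WRITE BATCH NOWAIT;
-- Or change the default commit behaviour for a session (also settable system-wide)
ALTER SESSION SET commit_write = 'BATCH,NOWAIT';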
Mike -
Hi,
Maybe someone can help me on this.
We have a RAC database in production where (some) applications need a response within 0.5 seconds. In general that is working.
Outside of production hours we make a weekly full backup and daily incremental backups, so those do not bother us. However, as soon as we make an archivelog backup or a backup of the control file during production hours, we have a problem: the applications have to wait more than 0.5 seconds for a response, caused by the event "log file sync" with wait class "Commit".
I already adjusted the RMAN script so that we have only 1 file per backup set and also use only one channel. However, that didn't help.
Increasing the logbuffer was also not a success.
Increasing Large pool is in our case not an option.
We have 8 redo log groups with 2 members each (each 250 MB) and an average during the day of 12 log switches per hour, which is not very alarming. Even during the backup the I/O doesn't show very high activity. The increase in I/O at that moment is minor, but (maybe) apparently enough to cause the "log file sync".
Oracle has no documentation that gives me more possible causes.
Strange thing is that before the first of October we didn't have this problem and there were no changes made.
Does anyone have an idea where to look further, or has anyone experienced something like this and managed to solve it?
Kind regards
The only possible contention I can see is between the log writer and the archiver. 'Backup archivelog' in RMAN implicitly means 'ALTER SYSTEM ARCHIVE LOG CURRENT' (a log switch plus archiving of the online log).
You should alternate redo logs on different disks to minimize the effect of the archiver on the log writer.
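A quick hedged check against the standard views (the layout will of course be site-specific):
-- Are any groups still waiting to be archived while LGWR needs to reuse them?
SELECT group#, thread#, sequence#, archived, status
FROM   v$log
ORDER  BY thread#, sequence#;
-- Which device/path each member sits on (to check the groups alternate across disks)
SELECT group#, member
FROM   v$logfile
ORDER  BY group#;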
Werner -
Log file sync during RMAN archive backup
Hi,
I have a small question. I hope someone can answer it.
Our database (cluster) needs to respond within 0.5 seconds. Most of the time it does, except when an RMAN backup is running.
During the week we run a full backup once, an incremental backup every weekday, a controlfile backup every hour and an archivelog backup every 15 minutes.
During a backup the response time can be much longer than this 0.5 seconds.
Below is a typical example of the response time.
EVENT: log file sync
WAIT_CLASS: Commit
TIME_WAITED: 10,774
It is obvious that it takes very long to get a commit; the value is in seconds. It is clearly related to the RMAN backup, since this kind of response time only comes up while the backup is running.
I would like to ask why response times are so high even when I only back up the archivelog files. We didn't have this problem before, but it suddenly appeared two weeks ago and I can't find the cause.
- We use a 11.2G RAC database on ASM. Redo logs and database files are on the same disks.
- Autobackup of controlfile is off.
- Dataguard: LogXptMode = 'arch'
Greetings,
Hi,
Thank you. I am new here and so I was wondering how I can put things into the right category. It is very obvious I am in the wrong one so I thank the people who are still responding.
- Actually the example I gave is one of many hundreds a day. The response times during the archivelog backup are most of the time between 2 and 11 seconds. When we back up the controlfile along with it, these response times are pretty much guaranteed.
- The autobackup of the controlfile is turned off since we already take a backup of the controlfile every hour. As we back up the archivelogs every 15 minutes, it is not necessary to also back up the controlfile every 15 minutes, especially if that causes even more delay. The controlfile is a lifeline, but if you have properly backed up your archivelogs, a full restore with at most 15 minutes of data loss is still possible. We turned autobackup off since it is severely in the way of performance at the moment.
As already mentioned, for specific applications the DB has to respond within 0.5 seconds. When that doesn't happen, an entry is written to a table used by that application, so I can compare the time of the failure with the time of other events. The times of the archivelog backup and of the failures match in 95% of the cases, and log file sync at that moment is clearly part of the performance issue. I actually built a script for myself to determine, from the application's point of view, what the cause of the problem is:
select ASH.INST_ID INST,
ASH.EVENT EVENT,
ASH.P2TEXT,
ASH.WAIT_CLASS,
DE.OWNER OWNER,
DE.OBJECT_NAME OBJECT_NAME,
DE.OBJECT_TYPE OBJECT_TYPE,
ASH.TIJD,
ASH.TIME_WAITED TIME_WAITED
from (SELECT INST_ID,
EVENT,
CURRENT_OBJ#,
ROUND(TIME_WAITED / 1000000,3) TIME_WAITED,
TO_CHAR(SAMPLE_TIME, 'DD-MON-YYYY HH24:MI:SS') TIJD,
WAIT_CLASS,
P2TEXT
FROM gv$active_session_history
WHERE PROGRAM IN ('yyyyy', 'xxxxx')) ASH,
(SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, OBJECT_ID FROM DBA_OBJECTS) DE
WHERE DE.OBJECT_id = ASH.CURRENT_OBJ#
AND ASH.TIME_WAITED > 2
ORDER BY 8,6
- Our logfiles are 250M and we have 8 groups of 2 members.
- The large pool is not set explicitly since we use memory_max_target and memory_target. I know that Oracle maybe doesn't distribute memory well with this parameter, so it is truly something I should look into.
- I looked at the size of the log buffer. Actually our log buffer is 28M, which in my opinion is very large, so maybe I should make it even smaller. It is quite possible that the log buffer is contributing to this problem. Thank you for the tip.
- I will also definitely look into the I/O. Even though we work with ASM on RAID 10, I don't think it is wise to put redo logs and datafiles on the same disks. Then again, it was not installed by me. So, you are right, I have to investigate.
Thank you all very much for still responding even if I put this in the totally wrong category.
Greetings,