Corruption in Logfile

Hi,
We are using Oracle 7.3 and we had a system crash. After I booted the system and tried to open the database, I got a message that one of the redo logs (the current one) was corrupted, and hence the database cannot be opened. We are in NOARCHIVELOG mode, and since we are using RAID 5 we haven't multiplexed the files. I cannot drop the log (as I cannot open the database, I cannot switch the logfile, then drop and rebuild it). Is there any method to drop the log and rebuild it again? (The final option is to restore the whole system through "UFSRESTORE", since the total storage of all HDDs is only 16 GB, and then take an import of the schema.)
thanks
Arun
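
A minimal sketch of the incomplete-recovery route usually attempted before a full restore, assuming the damage is confined to the current online log; everything in that log is lost, so take a cold backup of all files first:

SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL CANCEL;
-- reply CANCEL at the prompt rather than applying the corrupt log
SQL> ALTER DATABASE OPEN RESETLOGS;
-- if the open succeeds, take a fresh full backup immediately

If OPEN RESETLOGS then reports that a datafile needs more recovery, the database was not checkpointed past the lost redo, and the UFSRESTORE/import route is the remaining supported option.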

Similar Messages

  • View Logfile or View Output hangs in a 3-node environment

    Hi All,
    Briefing the environment:
    Database -- ERPDB001
    Conc + admin -- APPS001
    Forms + web -- APPS002
    Sometimes when we try to use View Log or View Output after running concurrent requests, the system seems to hang, and after a while it displays the following error. If you check again a minute later, it shows the output/logfile. It looks like strange behavior:
    "An error occurred while attempting to establish an Application File Server connection with the node APPS001. There may be a network configuration problem, or the TNS listener on node APPS001 may not be running. Please contact your system administrator."
    Is this an Oracle Apps issue or a network issue between the two nodes?
    Regards
    Vasu

    05-JUL-2008 09:51:56 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=56065)) * establish * FNDFS * 0
    05-JUL-2008 09:51:59 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=56070)) * establish * FNDFS * 0
    05-JUL-2008 09:52:16 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=56087)) * establish * FNDFS * 0
    05-JUL-2008 10:20:28 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=<HostnameofAPPS001>)(USER=applprod))(COMMAND=status)(ARGUMENTS=64)(SERVICE=APPS_PROD)(VERSION=134243072)) * status * 0
    05-JUL-2008 10:20:28 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=<HostnameofAPPS001>)(USER=applprod))(COMMAND=stop)(ARGUMENTS=64)(SERVICE=APPS_PROD)(VERSION=134243072)) * stop * 0
    TNSLSNR for Solaris: Version 8.0.6.3.0 - Production on 05-JUL-2008 10:25:03
    (c) Copyright 1999 Oracle Corporation. All rights reserved.
    Log messages written to /u02/applprod/prodora/8.0.6/network/admin/apps_prod.log
    Listening on: (ADDRESS=(PROTOCOL=tcp)(DEV=10)(HOST=<IPofAPPS001>)(PORT=1676))
    TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
    05-JUL-2008 10:25:03 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=<HostnameofAPPS001>)(USER=applprod))(COMMAND=status)(ARGUMENTS=64)(SERVICE=APPS_PROD)(VERSION=134243072)) * status * 0
    05-JUL-2008 10:25:34 * ping * 0
    05-JUL-2008 10:25:35 * ping * 0
    05-JUL-2008 10:25:35 * (CONNECT_DATA=(SID=FNDSM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<IPofAPPS001>)(PORT=37849)) * establish * FNDSM * 0
    05-JUL-2008 10:41:53 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=59076)) * establish * FNDFS * 0
    05-JUL-2008 10:47:52 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=59435)) * establish * FNDFS * 0
    05-JUL-2008 11:00:34 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=60186)) * establish * FNDFS * 0
    05-JUL-2008 11:16:33 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=61139)) * establish * FNDFS * 0
    05-JUL-2008 11:33:41 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=62210)) * establish * FNDFS * 0
    05-JUL-2008 11:34:06 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=62236)) * establish * FNDFS * 0
    05-JUL-2008 11:38:47 * 12502
    TNS-12502: TNS:listener received no CONNECT_DATA from client
    05-JUL-2008 11:46:32 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=62977)) * establish * FNDFS * 0
    05-JUL-2008 12:12:37 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=64546)) * establish * FNDFS * 0
    05-JUL-2008 12:12:39 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=64548)) * establish * FNDFS * 0
    05-JUL-2008 12:13:37 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=64609)) * establish * FNDFS * 0
    05-JUL-2008 12:26:42 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=65439)) * establish * FNDFS * 0
    05-JUL-2008 12:26:44 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=65444)) * establish * FNDFS * 0
    05-JUL-2008 12:27:22 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=65485)) * establish * FNDFS * 0
    05-JUL-2008 12:28:38 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=32810)) * establish * FNDFS * 0
    05-JUL-2008 12:30:14 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=32904)) * establish * FNDFS * 0
    05-JUL-2008 12:30:27 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=32918)) * establish * FNDFS * 0
    05-JUL-2008 12:51:17 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=34177)) * establish * FNDFS * 0
    05-JUL-2008 13:47:59 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=37909)) * establish * FNDFS * 0
    05-JUL-2008 13:48:02 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=37914)) * establish * FNDFS * 0
    05-JUL-2008 14:15:10 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=39543)) * establish * FNDFS * 0
    05-JUL-2008 14:15:12 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=39548)) * establish * FNDFS * 0
    05-JUL-2008 14:25:03 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=40157)) * establish * FNDFS * 0
    05-JUL-2008 14:25:07 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=40162)) * establish * FNDFS * 0
    05-JUL-2008 14:25:17 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=39984)) * establish * FNDFS * 0
    05-JUL-2008 14:26:19 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=40236)) * establish * FNDFS * 0
    05-JUL-2008 16:32:50 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=47917)) * establish * FNDFS * 0
    05-JUL-2008 16:32:55 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=47925)) * establish * FNDFS * 0
    =========================
    We did not find many errors. One was:
    "05-JUL-2008 11:38:47 * 12502
    TNS-12502: TNS:listener received no CONNECT_DATA from client"
    But I get the original error above (the core issue) a lot of times in the application.
    Even users are complaining about it.
    Thanks and Regards
    Vasu
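
    One check often suggested for this FNDFS symptom (a sketch; it assumes the standard 11i FND_NODES columns) is to confirm that both application nodes are registered correctly, since stale or duplicate rows here commonly break View Log / View Output:

    SQL> SELECT NODE_NAME, SUPPORT_CP, SUPPORT_FORMS, SUPPORT_WEB
         FROM FND_NODES;
    -- APPS001 (the concurrent processing node) should show SUPPORT_CP = 'Y';
    -- if the rows look wrong, running AutoConfig on each tier rebuilds them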

  • Linux logfile monitoring does not work after using "privileged datasource"

    Hello!
    I have noticed some strange behaviour on one of my Linux agents (let's call it server_a) regarding logfile monitoring with the "Microsoft.Unix.SCXLog.Datasource" and the "Microsoft.Unix.SCXLog.Privileged.Datasource".
    I had successfully tested monitoring of /var/log/messages on server_a with the "Privileged Datasource"; that test was on server_a, and the MP containing its rule was deleted from the management group before the following tests.
    I wanted to test another logfile (let's call it logfile_a) using the normal datasource "Microsoft.Unix.SCXLog.Datasource" on server_a. So I created the usual logfile rule (rule_a) in XML (which I have done countless times before) for monitoring logfile_a. Logfile_a was created by the "Linux Action Account User" with read rights for everyone. After importing the Management Pack with the monitoring for logfile_a, I had the following warning alert in the SCOM console managing server_a:
      Error while checking the logfile "/home/ActionAccountUser/logfile_a" on host "server_a" as user "<SCXUser><UserId>ActionAccountUser</UserId><Elev></Elev></SCXUser>";
    An internal error occurred. (The userid has been changed to preserve the anonymity of our action account.)
    To make sure I did not make any mistakes in the XML, I created a new logfile rule (rule_b) monitoring "logfile_b" on "server_a" using the "Logfile Template" under the Authoring tab. logfile_b was also created by the "Linux Action Account User" and had read rights for everyone. Unfortunately this logfile rule produced the same error:
      Error while checking the logfile "/home/ActionAccountUser/logfile_b" on host "server_a" as user "<SCXUser><UserId>ActionAccountUser</UserId><Elev></Elev></SCXUser>";
    An internal error occurred. (The userid has been changed to preserve the anonymity of our action account.)
    Although both rules (rule_a and rule_b) used the "Microsoft.Unix.SCXLog.Datasource", which uses the Action Account for monitoring logfiles, the above error looks to me as if SCOM wants to use the privileged user, which in this case is not necessary, as the Action Account can read logfile_a and logfile_b without any problems.
    After a few unsuccessful tries to get both rules to raise an alert, I tried the "Microsoft.Unix.SCXLog.Privileged.Datasource" for rule_a as a last resort. Then suddenly, after importing the updated Management Pack, I finally received the alert I had desperately waited for this whole time.
    Finally, after a lot of text, here are my questions:
    Could it be that the initial test with the Privileged Log Datasource somehow screwed up the agent on server_a so that it could not monitor logfiles with the standard log datasource? Or might any of you have an idea what went wrong here?
    Like I said, both logfiles could be accessed and changed by the normal Action Account without any problems, so privileged rights are not needed. I even restarted the SCOM agent in case something was hung.
    I hope I have made the problem clear. If not, don't hesitate to ask any questions.
    Thank you and kind regards,
    Patrick

    Hello!
    After all that text, I forgot the most essential information...
    We are currently using OpsMgr 2012 SP1 UR4; the monitored server (server_a) has agent version 1.4.1-292 installed.
    Thanks for the explanation of how the log provider works. I tried executing the logfilereader just to see if there are any errors, and everything looks fine to me:
    ActionAccount @server_a:/opt/microsoft/scx/bin> ./scxlogfilereader -v
    Version: 1.4.1-292 (Labeled_Build - 20130923L)
    Here are the latest entries in the scx.log file:
    * Microsoft System Center Cross Platform Extensions (SCX)
    * Build number: 1.4.1-292 Labeled_Build
    * Process id: 23186
    * Process started: 2014-03-31T08:29:09,136Z
    * Log format: <date> <severity>     [<code module>:<process id>:<thread id>] <message>
    2014-03-31T08:29:09,138Z Warning    [scx.logfilereader.ReadLogFile:23186:140522274359072] scxlogfilereader - Unexpected exception: Could not find persisted data: Failed to access filesystem item /var/opt/microsoft/scx/lib/state/ActionAccount/LogFileProvider__ActionAccount_shome_sActionAccount_slogfilewithoutsudo.txtEDST02
    2014-03-31T08:29:09,138Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] LogFileProvider InvokeLogFileReader - Exception: Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4
    2014-03-31T08:29:09,138Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] BaseProvider::InvokeMethod() - Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4 - [/home/serviceb/ScxCore_URSP1_SUSE_110_x64/source/code/providers/logfile_provider/logfileprovider.cpp:442]
    * Microsoft System Center Cross Platform Extensions (SCX)
    * Build number: 1.4.1-292 Labeled_Build
    * Process id: 23284
    * Process started: 2014-03-31T08:30:06,139Z
    * Log format: <date> <severity>     [<code module>:<process id>:<thread id>] <message>
    2014-03-31T08:30:06,140Z Warning    [scx.logfilereader.ReadLogFile:23284:140016517941024] scxlogfilereader - Unexpected exception: Could not find persisted data: Failed to access filesystem item /var/opt/microsoft/scx/lib/state/ActionAccount/LogFileProvider__ActionAccount_shome_sActionAccount_stest.txtEDST02
    2014-03-31T08:30:06,142Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] LogFileProvider InvokeLogFileReader - Exception: Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4
    2014-03-31T08:30:06,143Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] BaseProvider::InvokeMethod() - Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4 - [/home/serviceb/ScxCore_URSP1_SUSE_110_x64/source/code/providers/logfile_provider/logfileprovider.cpp:442]
    Strangely, I could not access the "Action Account User" directory under /var/opt/microsoft/scx/log as the "ActionAccount" user. Is it OK for the directory to have the following rights: drwx------ 2 1001 users? Instead of "1001" it should say "ActionAccount", right?
    This could be a bit far-fetched, but perhaps the logfile provider can't access logfiles as the "ActionAccount" on this server because it needs to write to the scx.log file. But since the "ActionAccount" can't access that file, the logfile provider throws an error. And as the "Privileged Account" the rule works flawlessly, since the logfile provider running in root context can access everything.
    I don't know if that makes sense, but right now it sounds logical to me.

  • Logfile Generation utilizing "Excel" (Creating and Appending Report)

    All,
    As always, thanks for the help you have given me in the past, especially the vets. I have tried to figure out a solution to my issue from the message board, but no solution seems to fit what I am doing.
    Here is my situation: I am using LabVIEW to test my product one unit at a time. I have always used TestStand and its report generation, but this time it is strictly LabVIEW. This is my first attempt to create a logfile with Excel that appends one xls file every time a unit is tested.
    The way my test is set up now, I test and collect the data in an array for when I create the logfile-generation VI. I took several stabs at it and looked at examples, but can't figure out the direction I need to go. Here are the parameters necessary for the logfile (spreadsheet):
    -All UUTs will go into one spreadsheet, and the spreadsheet will be appended by adding new data in the next available row.
    -Data is imported to spreadsheet in array format.
    -Test data that passes will be green, test data that fails will be red (I can figure this out, but this is why I need to use Excel)
    -I want to use Excel so I have more flexibility for graphs and things of that nature in the future.
    It seems rather simple, but not for me... lol. If I go to the Report Generation Toolkit, I see "Create Report" and "Append Report", but Append Report still wants the "report input" node wired. What do I wire that to? For example, if I have an Excel spreadsheet called hangover.xls, do I somehow wire hangover.xls to the input? I am having trouble finding answers. I would really appreciate a simple JPG or VI so I can understand the setup for what I want to do.
    Comments and links to threads/help appreciated!
    Ryan

    Hi Evan,
    Thanks for the other examples... I thought I was going to be able to manipulate them into what I want, but I ended up spending about six hours playing with it, up to 2 am. I am getting so frustrated with this. This is new ground for me; I have never experimented with logfile creation. I am sorry to keep bothering you with this, but I am ready to pull my hair out. I attached a couple of VIs: Spreadsheet import is the main VI and report.vi is the sub. I need to rename them better but haven't got there yet.
    First off, that VI you posted that I couldn't open, could you just take a JPG of the block diagram? That would really help.
    I need to create a spreadsheet with logfile data in rows. The spreadsheet is to be appended for each unit under test. Each unit under test gets one row, and all data is written at the end of the test. If you look at spreadsheet_import.vi, I am basically taking a bunch of 1D arrays of data to create one long 1D array for one row.
    Every month a new spreadsheet is created (so logfile data is divided into months), and that is what report.vi does: it looks to see whether the file already exists and, if not, sends a boolean to the Write To Spreadsheet File VI to append. I reverted to "write to spreadsheet" because, for the life of me, I cannot figure out how to use the worksheet portion to do this. I would think this should be pretty simple, but I cannot figure it out, and it's not for lack of trying.
    If I use "write to spreadsheet", I am going to run into problems because I ultimately want to use an Excel template with formulas, but if I can figure it out, this will have to do.
    All I really want to do is create a spreadsheet if one doesn't exist (or append if it does), combine all my 1D array data, and create one row with this data. The other issue I ran into before is that I can't figure out how to tell Excel where the next row is... UUGHHHH! This is definitely stressing me out, as I have a deadline, and I will gladly send a case of beer to Norway for the help received.
    Dying Here,
    Ryan
    Attachments:
    Spreadsheet_import.vi ‏14 KB
    report.vi ‏33 KB

  • How do I replace a single record in a logfile?

    I'm developing a program that saves data to a logfile containing clusters. Every record (cluster) contains data from the tested item; there can be 400+ items. The file refnum is open during the whole test. If the test of item 50 (record 49) failed and the item needs to be tested again, I want to replace position 49 (counting from zero) with the "correct" test result, but LabVIEW insists on writing it as record 50, so item 50 ends up with two records: 49 (the failure, which I want to delete) and the correct record 50 (which should really be record 49). I thought one should connect "pos offset" to the desired place where the record should be stored, but one can't connect to that when using a datalog file. I've also tried using the "seek" VI to position the refnum at the desired location, but it ignores it. What shall I do? I have a working "viewer" for my log files, so they aren't corrupted, and everything else is OK. Any suggestions?
    //Anders Boussard

    You do not have to use datalogging; you can write the data directly to disk. Writing to a file in binary is much less structured than datalogging but gives you the flexibility of overwriting records. With binary files you can even write several groups of data at a time, so the limiting factor tends to be the speed of your hard drive. Consider using the intermediate file write VIs. As a test, I was able to write different clusters of arrays having different lengths to a file and read them back perfectly. If your data is of uniform structure, you will have no problems doing the same. Look at read to binary file.vi and write to binary file.vi for example code to get started. You will have to experiment to get the hang of it.
    Chapter 13 of the LabVIEW User Manual discusses how to do file I/O, and you can also find example code and tutorials on our website.
    Jeremy Braden
    National Instruments

  • How do I specify the logfile size when adding a member?

    Hi All,
    I am in the process of upgrading Oracle 9.0 to 10.1.
    I am following the manual upgrade process. As per the recommendation from the pre-upgrade information script, I need to recreate the redo log files.
    Logfiles: [make adjustments in the current environment]
    --> E:\ORACLE\ORADATA\PRODB229\REDO03.LOG
    .... status="INACTIVE", group#="1"
    .... current size="1024" KB
    .... suggested new size="10" MB
    --> E:\ORACLE\ORADATA\PRODB229\REDO02.LOG
    .... status="INACTIVE", group#="2"
    .... current size="1024" KB
    .... suggested new size="10" MB
    --> E:\ORACLE\ORADATA\PRODB229\REDO01.LOG
    .... status="CURRENT", group#="3"
    .... current size="1024" KB
    .... suggested new size="10" MB
    WARNING: one or more log files is less than 4MB.
    Create additional log files larger than 4MB, drop the smaller ones and then upgrade.
    I can add a redo member with the command below, but I am not able to specify the size as 10M. I did some googling but had no luck with that:
    SQL> ALTER DATABASE ADD LOGFILE MEMBER 'E:\oracle\oradata\prodb229\REDO01.rdo' TO GROUP 1;
    But with the size specified it fails:
    ALTER DATABASE ADD LOGFILE MEMBER 'E:\oracle\oradata\prodb229\REDO01.rdo' TO GROUP 2 SIZE 10M;
    ERROR at line 1:
    ORA-00933: SQL command not properly ended
    ~Thnx

    If you add a logfile to an existing group, you cannot specify the size for that file.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_1004.htm#i2079942
    <quote>
    ADD [STANDBY] LOGFILE MEMBER Clause Use the ADD LOGFILE MEMBER clause to add new members to existing redo log file groups. Each new member is specified by 'filename'. If the file already exists, it must be the same size as the other group members, and you must specify REUSE. If the file does not exist, Oracle Database creates a file of the correct size. You cannot add a member to a group if all of the members of the group have been lost through media failure.
    </quote>
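
    The usual way past the pre-upgrade warning is therefore to add new, larger groups (not members), cycle the logs until none of the small groups is CURRENT or ACTIVE, and then drop the small groups. A sketch, with hypothetical group numbers and file names:

    ALTER DATABASE ADD LOGFILE GROUP 4 ('E:\ORACLE\ORADATA\PRODB229\REDO04.LOG') SIZE 10M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('E:\ORACLE\ORADATA\PRODB229\REDO05.LOG') SIZE 10M;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('E:\ORACLE\ORADATA\PRODB229\REDO06.LOG') SIZE 10M;
    ALTER SYSTEM SWITCH LOGFILE;   -- repeat until groups 1-3 show INACTIVE in V$LOG
    ALTER SYSTEM CHECKPOINT;
    ALTER DATABASE DROP LOGFILE GROUP 1;   -- then groups 2 and 3; delete the old files at OS level

    SIZE is accepted on ADD LOGFILE GROUP because a brand-new file is being created; on ADD LOGFILE MEMBER the new file must match the existing group members, which is why the statement above fails with ORA-00933.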

  • Error while running adlnkoh.sh. Please check logfile

    Hi All,
    I got this error while cloning.
    I ran perl adcfgclone.pl dbTechStack on the DB tier:
    Starting ORACLE_HOME relinking...
    Instantiating adlnkoh.sh
    Starting relink of ORACLE_HOME - RDBMS
    Adding execute permission to :
    /erpapp/prod/proddb/10.2.0/appsutil/install/adlnkoh.sh
    Executing cmd :
    /erpapp/prod/proddb/10.2.0/appsutil/install/adlnkoh.sh
    adlnkoh.sh started at Wed Aug 3 13:57:29 UAE 2011
    logfile located in /erpapp/prod/proddb/10.2.0/install/make.log
    Error while running adlnkoh.sh. Please check logfile
    .end std out.
    .end err out.
    RC-00110: Error occurred while relinking of rdbms
    Raised by oracle.apps.ad.clone.ApplyDBTechStack
    RC-00119: Error occurred while relinking {0}
    Raised by oracle.apps.ad.clone.ApplyDBTechStack
    Completed relinking.
    ApplyDBTechStack Completed Successfully.
    When I checked the relink log:
    Starting: "make -if /erpapp/prod/proddb/10.2.0/rdbms/lib/ins_rdbms.mk rac_off"
    rm -f /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl
    rm: /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl not removed. Text file busy
    *** Error exit code 2 (ignored)
    cp /erpapp/prod/proddb/10.2.0/lib//libskgxpd.sl /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl
    cp: cannot create /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl: Text file busy
    *** Error exit code 1 (ignored)
    rm -f /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl
    rm: /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl not removed. Text file busy
    *** Error exit code 2 (ignored)
    cp /erpapp/prod/proddb/10.2.0/lib//libskgxns.sl \
    /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl
    This is the error.
    Please help me to resolve it.

    EBS 11.5.10.2 and the DB is 10.2.0.4.
    OS: HP-UX 11.23.
    The errors I have posted are from the logs.
    This error I got after running adcfgclone on the DB tier:
    Starting ORACLE_HOME relinking...
    Instantiating adlnkoh.sh
    Starting relink of ORACLE_HOME - RDBMS
    Adding execute permission to :
    /erpapp/prod/proddb/10.2.0/appsutil/install/adlnkoh.sh
    Executing cmd :
    /erpapp/prod/proddb/10.2.0/appsutil/install/adlnkoh.sh
    adlnkoh.sh started at Wed Aug 3 13:57:29 UAE 2011
    logfile located in /erpapp/prod/proddb/10.2.0/install/make.log
    Error while running adlnkoh.sh. Please check logfile
    .end std out.
    .end err out.
    RC-00110: Error occurred while relinking of rdbms
    Raised by oracle.apps.ad.clone.ApplyDBTechStack
    RC-00119: Error occurred while relinking {0}
    Raised by oracle.apps.ad.clone.ApplyDBTechStack
    Completed relinking.
    ApplyDBTechStack Completed Successfully.
    This is the error from /erpapp/prod/proddb/10.2.0/install/make.log:
    Starting: "make -if /erpapp/prod/proddb/10.2.0/rdbms/lib/ins_rdbms.mk rac_off"
    rm -f /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl
    rm: /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl not removed. Text file busy
    *** Error exit code 2 (ignored)
    cp /erpapp/prod/proddb/10.2.0/lib//libskgxpd.sl /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl
    cp: cannot create /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl: Text file busy
    *** Error exit code 1 (ignored)
    rm -f /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl
    rm: /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl not removed. Text file busy
    *** Error exit code 2 (ignored)
    cp /erpapp/prod/proddb/10.2.0/lib//libskgxns.sl \
    /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl
    cp: cannot create /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl: Text file busy
    *** Error exit code 1 (ignored)
    ar cr /erpapp/prod/proddb/10.2.0/rdbms/lib/libknlopt.a /erpapp/prod/proddb/10.2.0/rdbms/lib/ksnkcs.o
    Completed...
    Starting: "make -if /erpapp/prod/proddb/10.2.0/rdbms/lib/ins_rdbms.mk ioracle"
    chmod 755 /erpapp/prod/proddb/10.2.0/bin
    mv -f /erpapp/prod/proddb/10.2.0/bin/oracle /erpapp/prod/proddb/10.2.0/bin/oracleO
    mv: /erpapp/prod/proddb/10.2.0/bin/oracleO: cannot write: Text file busy
    *** Error exit code 1 (ignored)
    mv /erpapp/prod/proddb/10.2.0/rdbms/lib/oracle /erpapp/prod/proddb/10.2.0/bin/oracle
    mv: /erpapp/prod/proddb/10.2.0/bin/oracle: cannot write: Text file busy
    *** Error exit code 1 (ignored)
    chmod 6751 /erpapp/prod/proddb/10.2.0/bin/oracle
    Completed...
    Even after this error/warning I ran
    perl adcfgclone.pl dbconfig
    and I got this error:
    Verifying Database Connection...
    RC-40201: Unable to connect to Database pcln.
    Enter the Database listener port [1521]:1521
    RC-40201: Unable to connect to Database pcln.

  • Create ONLINE logfile in physical standby database

    We have created a physical standby database with the RMAN duplicate command on a remote server:
    "duplicate target database for standby dorecover nofilenamecheck"
    When I look at the standby server, the online logfiles have not been created; however, their entries are there in the V$LOG and V$LOGFILE views.
    I guess this is the default behaviour of the duplicate command in RMAN, and we cannot specify the LOGFILE clause when we create a standby database.
    Now the problem is that we cannot drop the online logfiles on the standby database, since their status is "CURRENT" or "ACTIVE".
    Since the online logfiles were not actually created, "ALTER DATABASE CLEAR LOGFILE GROUP" returns an error, as it cannot find the file on the server.
    So how can we drop the current/active online logfiles and add new ones in the standby DB?

    I'm assuming you have a physical standby. Here are the steps I did in the past:
    1) Create a backup controlfile trace.
    2) Bring the database back using the "CREATE CONTROLFILE" statement in the trace file, BUT you need to remove or comment out the line that has the corrupt or missing redo log file. And don't forget to add the tempfile after you recreate the controlfile.
    Example:
    alter database backup controlfile to trace;
    STARTUP NOMOUNT
    CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
    LOGFILE
    GROUP 1 '/oracledata/orcl/redo01.log' SIZE 200M,
    GROUP 2 '/oracledata/orcl/redo02.log' SIZE 200M,
    GROUP 3 '/oracledata/orcl/redo03.log' SIZE 200M,
    # GROUP 3 '/oracledata/orcl/redo03.log' SIZE 200M
    -- STANDBY LOGFILE
    -- GROUP 10 '/oracledata/orcl/redostdby04.log' SIZE 200M,
    -- GROUP 11 '/oracledata/orcl/redostdby05.log' SIZE 200M
    DATAFILE
    '/oracledata/orcl/system01.dbf',
    '/oracledata/orcl/undotbs01.dbf',
    '/oracledata/orcl/sysaux01.dbf',
    '/oracledata/orcl/users01.dbf'
    CHARACTER SET WE8ISO8859P1
    If you just want to add the standby redo log, then use this command:
    alter database add standby logfile
    '/<your_path>/redostdby01.log' size 200M reuse,
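
    A lighter-weight route sometimes worth trying first (a sketch; the group number is illustrative, and it assumes the standby is mounted and the directory for the files exists, e.g. via LOG_FILE_NAME_CONVERT): CLEAR LOGFILE re-initializes a group and recreates its file on disk, so non-current groups can often be rebuilt in place:

    SQL> SELECT GROUP#, STATUS FROM V$LOG;
    SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;
    -- repeat per group as its status permits; a CURRENT group cannot be cleared,
    -- which is why the controlfile rebuild above remains the fallback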

  • Acrobat 9 Pro Extended: How to activate logfiles?

    Dear members
    Does someone know how to activate or read out the logfiles of a 3D PDF conversion?
    I would like to convert Catia V5 data via the command line, but cannot find or activate any logs...
    Currently I'm only using the OLE objects "AcroExch.AVDoc" and "AcroExch.App". In the API reference at livedocs.adobe.com there is nothing about it.
    Thanks in advance for your help.
    Regards
    hanseat82

    That's clear; we are calling Acrobat over OLE via Perl. A scrap:
    use Win32::OLE;
    $infile = $ARGV[0];
    $outfile = $ARGV[1];
    $NOSAVE = -1;
    $PDSAVEFULL = 1;
    ## Create Objects
    $avdoc_obj = Win32::OLE->new('AcroExch.AVDoc') || die "new: $!";
    $app_obj = Win32::OLE->new('AcroExch.App') || die "new: $!";
    ## Open Inputfile
    $avdoc_obj->Open($infile, $infile);
    ## Get PDF Document as Object from Inputfile
    $pddoc_obj = $avdoc_obj->GetPDDoc();
    And I found this Java code from you, Adobe:
    #            catch (ConversionException e)
    #                System.out.println(e.getErrorCode());
    #                System.out.println(e.getConversionLog());
    Can you give me the name of the import module of the class "ConversionException", or tell me in general how to go on with that?
    Thanks and regards
    hanseat

  • How to include a button in a report header (like rowspan)? And logfile generation?

    I am really new to this forum and I have some questions about APEX (HTML DB):
    The project I need to work on is like this: based on some criteria, I need to do a database lookup. Then, in the result, I need to be able to edit the individual records. So far, no problem. Here comes the part that I am not sure how to handle, or whether it can be handled at all.
    1. We need the ability to copy down certain column values to selected rows. Therefore, a "copy down" button needs to be included right under the column header cell. For example, based on certain criteria, the following product information is returned: product description, serial number, price, category, etc. The "COPY DOWN" button needs to be listed right under the "serial number" table header and before the first row of the result, like "rowspan" in an HTML table header. Once you click "copy down", the first row's serial number will be copied to all selected rows' "serial number". Can a button be put right under a column header? If so, can I even reference the cell value in JavaScript?
    2. Since we are doing a batch update, I need the ability to maintain a logfile including the date and time and what information was modified. Can I generate a logfile from APEX (HTML DB)?
    I am not sure APEX (HTML DB) is a good candidate for the above two tasks.
    Your help is greatly appreciated.

    Hi user572980,
    Welcome to APEX... the more you do with it, the more you'll like it.
    1) Are you using a Tabbed Form? Or are you in a report? I'm trying to get a better idea of what you're doing. Did you already have a look at the templates? You can have a template for the report, for example, which you can adapt as you wish (in your case, put a button under the column header).
    You can also reference the cell values, but for that I would need to know where you are (form or report). If you right-click on the page and have a look at the page source, you can see what item (reference) it is.
    2) Yes, you can make a logfile. Are you using packages to do the batch update? In them you can add some code to store the history. In other words, I don't think it exists out of the box in APEX, but with PL/SQL you can do it (and therefore also in APEX). For example, a PL/SQL package stores the changes in a history table and you build a report on top of that.
    Dimitri
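
    As a sketch of the history-table idea (all names here are hypothetical, not an APEX API): a trigger on the updated table records the date/time, the APEX user, and the change, and a report page on the history table then serves as the "logfile":

    CREATE TABLE product_audit (
      changed_on  DATE,
      changed_by  VARCHAR2(255),
      product_id  NUMBER,
      old_serial  VARCHAR2(50),
      new_serial  VARCHAR2(50)
    );

    CREATE OR REPLACE TRIGGER trg_product_audit
    AFTER UPDATE OF serial_number ON products
    FOR EACH ROW
    BEGIN
      -- V('APP_USER') returns the APEX session user; fall back to the DB user
      INSERT INTO product_audit
      VALUES (SYSDATE, NVL(V('APP_USER'), USER),
              :OLD.product_id, :OLD.serial_number, :NEW.serial_number);
    END;
    /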

  • Oracle initialization in progress due to logfile corruption - startup error

    Hi All!
    I am using Oracle Release 9.2.0.1.0. Due to a power outage, it seems that one of its redo files is corrupt, and the database is not starting. My database is running in NOARCHIVELOG mode and I do not have any backup of my data.
    I have performed the following actions, but in vain. Please help me get it started.
    Thanks in advance.
    Muhammad Bilal
    SQL*Plus: Release 9.2.0.1.0 - Production on Mon Jan 4 19:22:16 2010
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    SQL> show user
    USER is "SYS"
    SQL> shutdown immediate;
    ORA-01109: database not open
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 135338868 bytes
    Fixed Size 453492 bytes
    Variable Size 109051904 bytes
    Database Buffers 25165824 bytes
    Redo Buffers 667648 bytes
    Database mounted.
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 88880 change 182882946 time 01/04/2010
    08:33:19
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> select group#,sequence#,archived,status from v$log;
    GROUP#  SEQUENCE#  ARC  STATUS
         1        911  NO   CURRENT
         2        909  NO   INACTIVE
         3        910  NO   INACTIVE
    SQL> alter database clear logfile group 1;
    alter database clear logfile group 1
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> alter database clear unarchived logfile group 1;
    alter database clear unarchived logfile group 1
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'D:\ORACLE\ORADATA\DB\REDO01.LOG';
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> alter system switch logfile;
    alter system switch logfile
    ERROR at line 1:
    ORA-01109: database not open
    SQL> ALTER DATABASE OPEN RESETLOGS;
    ALTER DATABASE OPEN RESETLOGS
    ERROR at line 1:
    ORA-01139: RESETLOGS option only valid after an incomplete database recovery
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'D:\ORACLE\ORADATA\DB\REDO01.LOG';
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO01.LOG';
    alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> select member,status from v$logfile;
    MEMBER                           STATUS
    D:\ORACLE\ORADATA\DB\REDO03.LOG  STALE
    D:\ORACLE\ORADATA\DB\REDO02.LOG
    D:\ORACLE\ORADATA\DB\REDO01.LOG
    SQL> alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO03.LOG';
    Database altered.
    SQL> alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO01.LOG';
    alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO02.LOG';
    Database altered.
    SQL> recover database until cancel;
    ORA-00279: change 182763162 generated at 01/03/2010 20:00:21 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORA92\RDBMS\ARC00911.001
    ORA-00280: change 182763162 for thread 1 is in sequence #911
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'
    ORA-01112: media recovery not started
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'
    SQL> recover database;
    ORA-00283: recovery session canceled due to errors
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 88880 change 182882946 time 01/04/2010 08:33:19
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> recover database until cancel;
    ORA-00279: change 182882944 generated at 01/04/2010 08:33:10 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORA92\RDBMS\ARC00911.001
    ORA-00280: change 182882944 for thread 1 is in sequence #911
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    AUTO
    ORA-00308: cannot open archived log 'D:\ORACLE\ORA92\RDBMS\ARC00911.001'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-00308: cannot open archived log 'D:\ORACLE\ORA92\RDBMS\ARC00911.001'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'
    SQL> select group#,sequence#,archived,status from v$log;
    GROUP#  SEQUENCE#  ARC  STATUS
         1        911  NO   CURRENT
         2          0  NO   UNUSED
         3          0  NO   UNUSED
    SQL> alter system switch logfile;
    alter system switch logfile
    ERROR at line 1:
    ORA-01109: database not open
    SQL> ALTER SYSTEM CHECKPOINT GLOBAL;
    ALTER SYSTEM CHECKPOINT GLOBAL
    ERROR at line 1:
    ORA-01109: database not open
    SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    ORA-00279: change 182763162 generated at 01/03/2010 20:00:21 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORA92\RDBMS\ARC00911.001
    ORA-00280: change 182763162 for thread 1 is in sequence #911
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'
    ORA-01112: media recovery not started
    SQL> ALTER DATABASE OPEN RESETLOGS;
    ALTER DATABASE OPEN RESETLOGS
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'
    ---------------------------------------------------------------------------------------------------------------------------------

    Hi Bilal,
    1) Take a trace of the controlfile; the file will be in the udump destination:
    SQL> alter database backup controlfile to trace;
    2) Take a cold backup of the whole database... IMMEDIATELY.
    3) Bring up another PC and install the same version of the Oracle software on it.
    4) Copy the datafiles, parameter file, listener.ora and tnsnames.ora from the backup.
    5) Edit the parameter file and make the necessary changes:
    - control files' new location
    - new database name
    - background dump destination's new location
    - user dump destination's new location
    - core dump destination's new location
    Then save the parameter file as init<SID>.ora and copy it to the ORACLE_HOME\database directory.
    6) Edit the trace file you got in step one and remove everything above
    STARTUP NOMOUNT
    and below
    CHARACTER SET XXXXXXX
    Changes should be made to the paths of the datafiles and logfiles (as per the physical structure of the new database); change REUSE to SET, set the new database name, and change NORESETLOGS to RESETLOGS in that trace file, as we are not using the logs from the source database.
    E.g.: CREATE CONTROLFILE SET DATABASE "DG9A" RESETLOGS
    Save that file as create_ct.sql.
    7) Create an Oracle service using the oradim utility from the command prompt:
    c:\> oradim -new -sid SIDNAME -intpwd fbifbi -startmode auto -pfile d:\oracle\ora81\database\initSID.ora
    (whatever name you gave the parameter file, and according to your environment)
    8) Make the changes in listener.ora and tnsnames.ora as per the new machine.
    9) Set the Oracle SID and log into SQL*Plus:
    c:\> set oracle_sid=sidname
    c:\> sqlplus "/as sysdba"
    SQL> @create_ct.sql
    Then open the database with:
    SQL> alter database open resetlogs;
    Check Google if you're confused about creating the controlfile or the Oracle service; search for "cold database cloning".
    Hope you will recover.
    Regards

  • Problem sending results from a log file: the logfile is too large

    Hi SCOM people!
    I have a problem when monitoring a log file on a Red Hat system: I get an alert telling me that the log file is too large to send (see the alert context below). I guess the problem is that the server logs too much during the five minutes between SCOM checks.
    Any ideas how to solve this?
    Date and Time: 2014-07-24 19:50:24
    Log Name: Operations Manager
    Source: Cross Platform Modules
    Event Number: 262
    Level: 1
    Logging Computer: XXXXX.samba.net
    User: N/A
     Description:
    Error scanning logfile /xxxxxxxx/server.log on xxxxx.xxxxx.se as user <SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser>; The operation succeeded and cannot be reversed but the result is too large to send.
    Event Data:
    <DataItem type="System.XmlData" time="2014-07-24T19:50:24.5250335+02:00" sourceHealthServiceId="2D4C7DFF-BA83-10D5-9849-0CE701139B5B">
      <EventData>
        <Data>/xxxxxxxx/server.log</Data>
        <Data>xxxxx.xxxxx.se</Data>
        <Data><SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser></Data>
        <Data>The operation succeeded and cannot be reversed but the result is too large to send.</Data>
      </EventData>
    </DataItem>

    Hi Fredrik,
    At any one time, SCX can return 500 matching lines. If you're trying to return > 500 matching lines, then SCX will throttle your limit to 500 lines (that is, it'll return 500 lines, note where it left off, and pick up where it left off next time log files
    are scanned).
    Now, be aware that Operations Manager will "cook down" multiple regular expressions to a single agent query. This is done for efficiency purposes. What this means: If you have 10 different, unrelated regular expressions against a single log file, all of
    these will be "cooked down" and presented to the agent as one single request. However, each of these separate regular expressions, collectively, are limited to 500 matching lines. Hope this makes sense.
    This limit is set because (at least at the time) we didn't think Operations Manager itself could handle a larger response on the management server itself. That is, it's not an agent issue as such, it's a management server issue.
    So, with that in mind, you have several options:
    If you have separate RegEx expressions, you can reconfigure your logging (presumably done via syslog?) to log your larger log messages to a separate log file. This will help "cook down", but ultimately, the limit of 500 RegEx results is still there; you're
    just mitigating cook down.
    If a single RegEx expression is matching > 500 lines, there is no workaround to this today. This is a hardcoded limit in the agent, and can't be overridden.
    Now, if you're certain that your regular expression is matching < 500 lines, yet you're getting this error, then I'd suggest contacting Microsoft Support Services to open an RFC and have this issue escalated to the product team. Due to a logging issue
    within logfilereader, I'm not certain you can enable tracing to see exactly what's going on (although you could use command line queries to see what's happening internally). This is involved enough where it's best to get Microsoft Support involved.
    But as I said, this is only useful if you're certain that your regular expression is matching < 500 lines. If you are matching more than this, this is a known restriction today. But with an RFC, even that could at least be evaluated to see exactly the
    load > 500 matches will have on the management server.
    /Jeff

  • How can I download the logfile of a session?

    Hi gurus,
    Can anyone give me code to download the error records stored in the logfile?

    Hi Satheesh,
    There is no option in SM35 to download, so we use a program: either use the session name in the selection screen of report RSBDCLOG, or use the tables BDCLD and BDCLM, which capture the log details of the session. First the session should be processed; after that the log will be created. Then capture the information into an internal table using BDCLM and BDCLD.

  • Replicat process abending with no error in logfile

    Hi,
    I am trying replication from 11g to 10g on the same physical host. The replicat below abends, but I am unable to find the source of the error in the GGSEVT logfile (VIEW GGSEVT).
    GGSCI (rhel5.4_prod) 8> info all
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt
    MANAGER     RUNNING                                          
    EXTRACT     RUNNING     EXTLOCAL    00:00:00      00:00:07   
    REPLICAT    ABENDED     REPLOCAL    00:00:00      18:12:27
    2013-04-16 09:58:40  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
    2013-04-16 09:58:41  INFO    OGG-00983  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started (port 7809).
    2013-04-16 09:58:45  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start extract extlocal.
    2013-04-16 09:58:45  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host rhel5.4_prod (START EXTRACT EXTLOCAL ).
    2013-04-16 09:58:45  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT EXTLOCAL starting.
    2013-04-16 09:58:45  INFO    OGG-00992  Oracle GoldenGate Capture for Oracle, extlocal.prm:  EXTRACT EXTLOCAL starting.
    2013-04-16 09:58:45  INFO    OGG-03035  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Operating system character set identified as UTF-8. Locale: en_US, LC_ALL:.
    2013-04-16 09:58:46  INFO    OGG-03500  Oracle GoldenGate Capture for Oracle, extlocal.prm:  WARNING: NLS_LANG environment variable does not match database character set, or not set. Using database
    character set value of US7ASCII.
    2013-04-16 09:58:46  INFO    OGG-01815  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Virtual Memory Facilities for: BR
        anon alloc: mmap(MAP_ANON)  anon free: munmap
        file alloc: mmap(MAP_SHARED)  file free: munmap
        target directories:
        /u05/GG/BR/EXTLOCAL.
    2013-04-16 09:58:46  INFO    OGG-01815  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Virtual Memory Facilities for: COM
        anon alloc: mmap(MAP_ANON)  anon free: munmap
        file alloc: mmap(MAP_SHARED)  file free: munmap
        target directories:
        /u05/GG/dirtmp.
    2013-04-16 09:58:46  INFO    OGG-01513  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Positioning to Sequence 22, RBA 18459664, SCN 0.1177097.
    2013-04-16 09:58:46  INFO    OGG-01516  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Positioned to Sequence 22, RBA 18459664, SCN 0.1177097, Apr 15, 2013 4:21:50 PM.
    2013-04-16 09:58:46  INFO    OGG-00993  Oracle GoldenGate Capture for Oracle, extlocal.prm:  EXTRACT EXTLOCAL started.
    2013-04-16 09:58:46  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from EXTRACT on host rhel5.4_prod (START SERVER CPU -1 PRI -1  TIMEOUT 300 PARAMS ).
    2013-04-16 09:58:46  INFO    OGG-01677  Oracle GoldenGate Collector for Oracle:  Waiting for connection (started dynamically).
    2013-04-16 09:58:46  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from SERVER on host localhost.localdomain (REPORT 4714 7819).
    2013-04-16 09:58:46  INFO    OGG-00974  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started collector process (Port 7819).
    2013-04-16 09:58:46  INFO    OGG-01228  Oracle GoldenGate Collector for Oracle:  Timeout in 300 seconds.
    2013-04-16 09:58:51  INFO    OGG-01226  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Socket buffer size set to 27985 (flush size 27985).
    2013-04-16 09:58:51  INFO    OGG-01229  Oracle GoldenGate Collector for Oracle:  Connected to rhel5.4_prod:11890.
    2013-04-16 09:58:51  INFO    OGG-01669  Oracle GoldenGate Collector for Oracle:  Opening /u05/GG/dirdat/aa000000 (byte -1, current EOF 1145).
    2013-04-16 09:58:51  INFO    OGG-01670  Oracle GoldenGate Collector for Oracle:  Closing /u05/GG/dirdat/aa000000.
    2013-04-16 09:58:51  INFO    OGG-01055  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Recovery initialization completed for target file /u05/GG/dirdat/aa000000, at RBA 1145.
    2013-04-16 09:58:51  INFO    OGG-01478  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Output file /u05/GG/dirdat/aa is using format RELEASE 11.2.
    2013-04-16 09:58:51  INFO    OGG-01669  Oracle GoldenGate Collector for Oracle:  Opening /u05/GG/dirdat/aa000000 (byte 1145, current EOF 1145).
    2013-04-16 09:58:51  INFO    OGG-01735  Oracle GoldenGate Collector for Oracle:  Synchronizing /u05/GG/dirdat/aa000000 to disk.
    2013-04-16 09:58:51  INFO    OGG-01735  Oracle GoldenGate Collector for Oracle:  Synchronizing /u05/GG/dirdat/aa000000 to disk.
    2013-04-16 09:58:51  INFO    OGG-01670  Oracle GoldenGate Collector for Oracle:  Closing /u05/GG/dirdat/aa000000.
    2013-04-16 09:58:51  INFO    OGG-01026  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Rolling over remote file /u05/GG/dirdat/aa000001.
    2013-04-16 09:58:51  INFO    OGG-01669  Oracle GoldenGate Collector for Oracle:  Opening /u05/GG/dirdat/aa000001 (byte -1, current EOF 0).
    2013-04-16 09:58:51  INFO    OGG-01053  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Recovery completed for target file /u05/GG/dirdat/aa000001, at RBA 1018.
    2013-04-16 09:58:51  INFO    OGG-01057  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Recovery completed for all targets.
    2013-04-16 09:58:51  INFO    OGG-01517  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Position of first record processed Sequence 22, RBA 18459664, SCN 0.1177097, Apr 15, 2013 4:21:50 PM.
    2013-04-16 09:58:51  INFO    OGG-00732  Oracle GoldenGate Capture for Oracle, extlocal.prm:  Found crash recovery marker from thread #1 on sequence 23 at RBA 1040. Aborting uncommitted transactions
    2013-04-16 09:58:56  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start replicat replocal.
    2013-04-16 09:58:56  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host rhel5.4_prod (START REPLICAT REPLOCAL ).
    2013-04-16 09:58:56  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  REPLICAT REPLOCAL starting.
    2013-04-16 09:58:56  INFO    OGG-00995  Oracle GoldenGate Delivery for Oracle, replocal.prm:  REPLICAT REPLOCAL starting.
    2013-04-16 09:58:56  INFO    OGG-03035  Oracle GoldenGate Delivery for Oracle, replocal.prm:  Operating system character set identified as UTF-8. Locale: en_US, LC_ALL:.
    2013-04-16 09:58:56  INFO    OGG-01815  Oracle GoldenGate Delivery for Oracle, replocal.prm:  Virtual Memory Facilities for: COM
        anon alloc: mmap(MAP_ANON)  anon free: munmap
        file alloc: mmap(MAP_SHARED)  file free: munmap
        target directories:
        /u05/GG/dirtmp.
    2013-04-16 09:58:56  INFO    OGG-00996  Oracle GoldenGate Delivery for Oracle, replocal.prm:  REPLICAT REPLOCAL started.
    Configuration:
    GGSCI (rhel5.4_prod) 9> view params mgr
    PORT 7809
    USERID ggs_owner, PASSWORD ggs_owner
    PURGEOLDEXTRACTS /u05/GG/dirdat/ex, USECHECKPOINTS
    GGSCI (rhel5.4_prod) 10> view params extlocal
    extract extlocal
    userid ggs_owner, password ggs_owner
    setenv (ORACLE_HOME="/u03/app/oracle/product/11.2.0/db_1")
    setenv (ORACLE_SID="PROD11G")
    rmthost 192.168.1.9, mgrport 7809
    rmttrail /u05/GG/dirdat/aa
    TABLE TESTUSER.*;
    GGSCI (rhel5.4_prod) 11> view params replocal
    REPLICAT replocal
    SETENV (ORACLE_HOME="/u02/app10g/oracle10g/product/10.2.0/db_1")
    SETENV (ORACLE_SID="PROD10G")
    SETENV (NLS_LANG="AMERICAN_AMERICA.US7ASCII")
    ASSUMETARGETDEFS
    USERID ggs_owner, PASSWORD ggs_owner
    MAP TESTUSER.*, TARGET TESTUSER.*;

    Hi,
    Not sure if you have created a definitions file. Check whether your source and target have a structure mismatch. Also, find below some additional parameters which will help in sorting out the issue:
    DISCARDFILE :
    Valid for Extract and Replicat
    Use the DISCARDFILE parameter to generate a discard file to which GoldenGate can log records that it cannot process. Records can be discarded for several reasons. For example,
    a record is discarded if the underlying table structure changed since the record was written to the trail. You can use the discard file to help you identify the cause of processing errors. Each entry in the discard file contains the discarded record buffer and an error code indicating the reason. GoldenGate creates the specified discard file in the dirrpt subdirectory of the GoldenGate installation directory. You can view it with a text editor or by using the following command in GGSCI.
    VIEW REPORT <file name>
    where <file name> is the fully qualified name of the discard file.
    To prevent having to perform manual maintenance of discard files, use either the PURGE or APPEND option. Otherwise, you must specify a different discard file name before starting each process run, because GoldenGate will not write to an existing discard file. To set an upper limit for the size of the file, use either the MAXBYTES or MEGABYTES option. If the specified size is exceeded, the process will abend.
    Default: by default, GoldenGate does not generate a discard file.
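    As a minimal sketch (the file name and size cap below are illustrative, not from the original post), a DISCARDFILE entry in a Replicat parameter file could look like this:
    -- Hypothetical example: write unprocessable records to a discard file,
    -- append across runs, and abend if the file grows past 50 MB.
    DISCARDFILE ./dirrpt/replocal.dsc, APPEND, MEGABYTES 50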
    DISCARDROLLOVER:
    Valid for Extract and Replicat
    Use the DISCARDROLLOVER parameter to set a schedule for aging discard files. For long or continuous runs, setting an aging schedule prevents the discard file from filling up and
    causing the process to abend, and it provides a predictable set of archives that can be included in your archiving routine.
    When the DISCARDROLLOVER age point is reached, a new discard file is created, and old files are renamed in the format <group name><n>.dsc, where:
    - <group name> is the name of the Extract or Replicat group
    - <n> is a number that gets incremented by one each time a new file is created, for example: myext0.dsc, myext1.dsc, myext2.dsc, and so forth.
    You can specify a time of day, a day of the week, or both. Specifying just a time of day (AT option) without a day of the week (ON option) generates a discard file at the specified time every day.
    Default: disabled; no rules specified.
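    For example, a weekly rollover every Friday morning could be configured as follows (the schedule itself is illustrative):
    -- Hypothetical example: age the discard file weekly so a continuous
    -- run never abends on a full file.
    DISCARDROLLOVER AT 05:30 ON friday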
    REPERROR:
    Use REPERROR to specify an error and a response that together control how Replicat responds to the error when executing the MAP statement. You can use REPERROR at the MAP level to override and supplement global error handling rules set with the REPERROR parameter. Multiple REPERROR statements can be applied to the same MAP statement to
    enable automatic, comprehensive management of errors and interruption-free replication processing.
    DEFAULT: sets a global response to all errors except those for which explicit REPERROR statements are specified.
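    As a minimal sketch combining a global rule with a MAP-level override (the table names and the error number 1403, Oracle's "no data found", are illustrative assumptions):
    -- Hypothetical example: discard all errors by default, but ignore
    -- ORA-01403 for this one mapping only.
    REPERROR (DEFAULT, DISCARD)
    MAP SRC.T1, TARGET TGT.T1, REPERROR (1403, IGNORE);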
    GETDELETES | IGNOREDELETES:
    Valid for Extract and Replicat
    Use the GETDELETES and IGNOREDELETES parameters to control whether or not GoldenGate processes delete operations. These parameters are table-specific. One parameter remains in effect for all subsequent TABLE or MAP statements, until the other parameter is encountered.
    GETUPDATES | IGNOREUPDATES:
    Valid for Extract and Replicat
    Use the GETUPDATES and IGNOREUPDATES parameters to control whether or not GoldenGate processes update operations. The parameters are table-specific. One parameter remains in effect for all subsequent TABLE or MAP statements, until the other parameter is encountered.
    GETINSERTS | IGNOREINSERTS:
    Valid for Extract and Replicat
    Use the GETINSERTS and IGNOREINSERTS parameters to control whether or not insert operations are processed by GoldenGate. The parameters are table-specific. One parameter remains in effect for all subsequent TABLE or MAP statements, until the other parameter is encountered.
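    As a short sketch of this table-specific scoping (the table names are hypothetical):
    -- Hypothetical example: deletes are ignored only for the first MAP;
    -- GETDELETES restores normal processing for everything that follows.
    IGNOREDELETES
    MAP HR.JOBS, TARGET HR.JOBS;
    GETDELETES
    MAP HR.EMPLOYEES, TARGET HR.EMPLOYEES;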
    Update the parameter file on target as
    edit params <TARGET PARAM FILE>
    REPLICAT rcreator
    SOURCEDEFS /u01/app/oracle/product/goldengate/dirdat/defecreator
    DISCARDFILE /u01/app/oracle/product/goldengate/dirdat/creator_err, purge
    DISCARDROLLOVER ON saturday
    USERID goldengate, PASSWORD AACAAAAAAAAAAAKAPATACEHBIGQGCFZCCDIGAEMCQFFBZHVC, ENCRYPTKEY default
    REPERROR (DEFAULT, DISCARD)
    IGNOREDELETES
    IGNOREUPDATES
    GETINSERTS
    MAP meditate.life, TARGET CONSCIOUSNESS.tenure, &
    COLMAP (PERSON_ID=HUMAN_ID, &
    INITIALNAME=FIRSTNAME, &
    ENDNAME=LASTNAME, &
    BIRTH_DATE=DATE_OF_BIRTH, &
    AGE_AT_DEATH=AGE_AT_TIME_OF_DEATH, &
    DEED_ID_AT_DEATH=DEED_ID_AT_TIME_OF_DEATH), &
    KEYCOLS (PERSON_ID, INITIALNAME,ENDNAME);
    Now stop and start the replicat on target as:
    GGSCI (goldengate) 9> stop replicat RCREATOR
    Sending STOP request to REPLICAT RCREATOR ...
    Request processed.
    GGSCI (goldengate) 10> info all
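    Once the parameter file is saved, start the Replicat again (the prompt numbering is illustrative):
    GGSCI (goldengate) 11> start replicat RCREATOR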

  • Could you tell me what's the meaning of the logfile-path and log-level?

    We are running a production XML database, but it is not stable at the moment. It sometimes reports a resource conflict error when XML DB is accessed via the HTTP protocol. I read the database logs and listener logs, but found no abnormal messages, so I want to get more information from the XML DB logs. When I run select DBMS_XDB.cfg_get().getclobval() from dual, I see several logfile-path and log-level tags, and I guess these relate to the XML DB logs. Does anyone know what these tags mean?

    I have often wondered about that one too. I haven't had a chance to investigate yet (but maybe Mark will elaborate a little here); my guess is that it is, or will be, a way to enable tracing for the protocols or servlets.
    It looks like, if you enable it, tracing goes to the XML file defined in xdbconfig.xml. Because there is also an XSD counterpart, I also guess that one could create a resource that streams the errors into an XML DB FTP or HTTP xmltype table based on these settings.
    That would be great, because it would mature the protocol server's functionality. You could enable the tracing and see what happens (there is a small PL/SQL sketch after the schema excerpt below). So far, the documentation doesn't give much extra insight...
    <!-- FTP specific -->
    <element name="ftpconfig">
      <complexType><sequence>
        <element name="ftp-port" type="unsignedShort" default="2100"/>
        <element name="ftp-listener" type="string"/>
        <element name="ftp-protocol" type="string"/>
        <element name="logfile-path" type="string" default="/sys/log/ftplog.xml"/>
        <element name="log-level" type="unsignedInt" default="0"/>
        <element name="session-timeout" type="unsignedInt" default="6000"/>
        <element name="buffer-size" default="8192">
          <simpleType>
            <restriction base="unsignedInt">
              <minInclusive value="1024"/> <!-- 1KB -->
              <maxInclusive value="1048496"/> <!-- 1MB -->
            </restriction>
          </simpleType>
        </element>
        <element name="ftp-welcome-message" type="string" minOccurs="0" maxOccurs="1"/>
      </sequence></complexType>
    </element>
    <!-- HTTP specific -->
    <element name="httpconfig">
      <complexType><sequence>
        <element name="http-port" type="unsignedShort" default="8080"/>
        <element name="http-listener" type="string"/>
        <element name="http-protocol" type="string"/>
        <element name="max-http-headers" type="unsignedInt" default="64"/>
        <element name="max-header-size" type="unsignedInt" default="4096"/>
        <element name="max-request-body" type="unsignedInt" default="2000000000" minOccurs="1"/>
        <element name="session-timeout" type="unsignedInt" default="6000"/>
        <element name="server-name" type="string"/>
        <element name="logfile-path" type="string" default="/sys/log/httplog.xml"/>
        <element name="log-level" type="unsignedInt" default="0"/>
        <element name="servlet-realm" type="string" minOccurs="0"/>
        ...etc...
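    If you want to experiment with these tags, here is a hedged sketch: it assumes the standard xdbconfig namespace, and the log-level value 3 is purely illustrative, since the semantics of non-zero levels are not documented here. It raises the HTTP log-level and writes the configuration back:
    DECLARE
      cfg XMLTYPE;
    BEGIN
      -- Read the current xdbconfig.xml as an XMLType.
      cfg := DBMS_XDB.cfg_get();
      -- Bump the HTTP log-level; the value 3 is an assumption for
      -- experimentation, not a documented setting.
      SELECT updateXML(cfg,
               '/xdbconfig/sysconfig/protocolconfig/httpconfig/log-level/text()',
               '3',
               'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"')
        INTO cfg
        FROM dual;
      -- Persist the modified configuration.
      DBMS_XDB.cfg_update(cfg);
      COMMIT;
    END;
    /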
