Logfiles NSM 3.0 AD

Hi,
I have installed an NSM 3.0 (AD) environment.
But some weird things happened when I applied policies to the users, and now I want to see what happened.
Call me stupid.... ;) But I can't find the log files!
The documentation is not helping, and in NSMAdmin I only find log settings, no actual log files.
Can anyone point me in the right direction?
The Engine is running on a Windows 2008 R2 server.
Johan

Have you checked the %programdata%\novell\Storage Manager\engine\log
directory?
thanks,
NSM Development
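
For reference, a minimal sketch (Python, purely for illustration; it assumes the directory from the reply above exists) that finds the most recently modified file in that log directory and prints its tail:

    import os, glob

    log_dir = os.path.join(os.environ["PROGRAMDATA"],
                           r"novell\Storage Manager\engine\log")
    # Pick the most recently modified file in the Engine log directory.
    newest = max(glob.glob(os.path.join(log_dir, "*")), key=os.path.getmtime)
    print(newest)
    with open(newest, errors="replace") as f:
        print("".join(f.readlines()[-50:]))  # last 50 lines of the newest log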

Similar Messages

  • View logfile or view output hangs in 3 node environment

    Hi All,
    Briefing the environment:
    Database -- ERPDB001
    Conc + admin -- APPS001
    Forms + web -- APPS002
    Sometimes when we try to view the log or the output after running concurrent requests, the system seems to hang and then displays the following error. If you check again a minute later, it shows the output/logfile. It looks like strange behavior.
    "An error occurred while attempting to establish an Application File Server connection with the node APPS001. There may be a network configuration problem, or the TNS listener on node APPS001 may not be running, Please contact your system administrator."
    Is this an Oracle Apps issue or a network issue between the two nodes?
    Regards
    Vasu

    05-JUL-2008 09:51:56 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=56065)) * establish * FNDFS * 0
    05-JUL-2008 09:51:59 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=56070)) * establish * FNDFS * 0
    05-JUL-2008 09:52:16 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=56087)) * establish * FNDFS * 0
    05-JUL-2008 10:20:28 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=<HostnameofAPPS001>)(USER=applprod))(COMMAND=status)(ARGUMENTS=64)(SERVICE=APPS_PROD)(VERSION=134243072)) * status * 0
    05-JUL-2008 10:20:28 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=<HostnameofAPPS001>)(USER=applprod))(COMMAND=stop)(ARGUMENTS=64)(SERVICE=APPS_PROD)(VERSION=134243072)) * stop * 0
    TNSLSNR for Solaris: Version 8.0.6.3.0 - Production on 05-JUL-2008 10:25:03
    (c) Copyright 1999 Oracle Corporation. All rights reserved.
    Log messages written to /u02/applprod/prodora/8.0.6/network/admin/apps_prod.log
    Listening on: (ADDRESS=(PROTOCOL=tcp)(DEV=10)(HOST=<IPofAPPS001>)(PORT=1676))
    TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
    05-JUL-2008 10:25:03 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=<HostnameofAPPS001>)(USER=applprod))(COMMAND=status)(ARGUMENTS=64)(SERVICE=APPS_PROD)(VERSION=134243072)) * status * 0
    05-JUL-2008 10:25:34 * ping * 0
    05-JUL-2008 10:25:35 * ping * 0
    05-JUL-2008 10:25:35 * (CONNECT_DATA=(SID=FNDSM)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<IPofAPPS001>)(PORT=37849)) * establish * FNDSM * 0
    05-JUL-2008 10:41:53 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=59076)) * establish * FNDFS * 0
    05-JUL-2008 10:47:52 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=59435)) * establish * FNDFS * 0
    05-JUL-2008 11:00:34 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=60186)) * establish * FNDFS * 0
    05-JUL-2008 11:16:33 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=61139)) * establish * FNDFS * 0
    05-JUL-2008 11:33:41 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=62210)) * establish * FNDFS * 0
    05-JUL-2008 11:34:06 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=62236)) * establish * FNDFS * 0
    05-JUL-2008 11:38:47 * 12502
    TNS-12502: TNS:listener received no CONNECT_DATA from client
    05-JUL-2008 11:46:32 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=62977)) * establish * FNDFS * 0
    05-JUL-2008 12:12:37 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=64546)) * establish * FNDFS * 0
    05-JUL-2008 12:12:39 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=64548)) * establish * FNDFS * 0
    05-JUL-2008 12:13:37 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=64609)) * establish * FNDFS * 0
    05-JUL-2008 12:26:42 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=65439)) * establish * FNDFS * 0
    05-JUL-2008 12:26:44 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=65444)) * establish * FNDFS * 0
    05-JUL-2008 12:27:22 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=65485)) * establish * FNDFS * 0
    05-JUL-2008 12:28:38 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=32810)) * establish * FNDFS * 0
    05-JUL-2008 12:30:14 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=32904)) * establish * FNDFS * 0
    05-JUL-2008 12:30:27 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=32918)) * establish * FNDFS * 0
    05-JUL-2008 12:51:17 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=34177)) * establish * FNDFS * 0
    05-JUL-2008 13:47:59 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=37909)) * establish * FNDFS * 0
    05-JUL-2008 13:48:02 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=37914)) * establish * FNDFS * 0
    05-JUL-2008 14:15:10 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=39543)) * establish * FNDFS * 0
    05-JUL-2008 14:15:12 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=39548)) * establish * FNDFS * 0
    05-JUL-2008 14:25:03 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=40157)) * establish * FNDFS * 0
    05-JUL-2008 14:25:07 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=40162)) * establish * FNDFS * 0
    05-JUL-2008 14:25:17 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=39984)) * establish * FNDFS * 0
    05-JUL-2008 14:26:19 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=40236)) * establish * FNDFS * 0
    05-JUL-2008 16:32:50 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=47917)) * establish * FNDFS * 0
    05-JUL-2008 16:32:55 * (CONNECT_DATA=(SID=FNDFS)) * (ADDRESS=(PROTOCOL=tcp)(HOST=<APPS002IP>)(PORT=47925)) * establish * FNDFS * 0
    =========================
    We did not find many errors.
    One occurrence is:
    "05-JUL-2008 11:38:47 * 12502
    TNS-12502: TNS:listener received no CONNECT_DATA from client"
    But I get the original error above (the core issue) many times in the application.
    Users are complaining about this as well.
    Thanks and Regards
    Vasu
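
    Since the error points at the Application File Server (FNDFS) connection between the nodes, one quick triage step is a raw TCP reachability check against the 8.0.6 listener port shown in the listener banner above (PORT=1676 on APPS001). A minimal sketch, with host and port as placeholders:

        import socket

        host, port = "APPS001", 1676  # placeholders: node name and listener port from the log above
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"TCP connect to {host}:{port} OK - listener port reachable")
        except OSError as exc:
            print(f"Cannot reach {host}:{port}: {exc}")  # suggests a network/listener problem

    If the connect fails intermittently, that supports the network side; if it always succeeds, the FNDFS setup on APPS001 is the more likely suspect.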

  • Linux logfile monitoring does not work after using "privileged datasource"

    Hello!
    I have noticed a strange behaviour on one of my Linux agents (let's call it server_a) regarding logfile monitoring with the "Microsoft.Unix.SCXLog.Datasource" and the "Microsoft.Unix.SCXLog.Privileged.Datasource".
    I first successfully tested monitoring of /var/log/messages on server_a with the "Privileged Datasource". This test was done on server_a, and the MP containing this rule was deleted from the management group before the following tests.
    I then wanted to test another logfile (let's call it logfile_a) using the normal datasource "Microsoft.Unix.SCXLog.Datasource" on server_a. So I created the usual logfile rule (rule_a) in XML (which I have done countless times before) for monitoring logfile_a. Logfile_a had been created by the "Linux Action Account User" with read rights for everyone. After importing the Management Pack with the monitoring for logfile_a, I had the following warning alert in the SCOM console managing server_a:
      Error checking the logfile "/home/ActionAccountUser/logfile_a" on host "server_a" as user "<SCXUser><UserId>ActionAccountUser</UserId><Elev></Elev></SCXUser>";
    An internal error occurred. (The user ID has been changed to keep the anonymity of our action account.)
    To make sure I did not make any mistakes in the XML, I created a new logfile rule (rule_b) monitoring "logfile_b" on "server_a" using the "Logfile Template" under the Authoring tab. Logfile_b was also created by the "Linux Action Account User" and had read rights for everyone. Unfortunately, this logfile rule produced the same error:
      Error checking the logfile "/home/ActionAccountUser/logfile_b" on host "server_a" as user "<SCXUser><UserId>ActionAccountUser</UserId><Elev></Elev></SCXUser>";
    An internal error occurred. (The user ID has been changed to keep the anonymity of our action account.)
    Although both rules (rule_a and rule_b) used the "Microsoft.Unix.SCXLog.Datasource", which uses the Action Account for monitoring logfiles, the above error looks to me as if SCOM wants to use the privileged user, which in this case is not necessary, as the Action Account can read logfile_a and logfile_b without any problems.
    So after a few unsuccessful tries to get both rules to raise an alert, I used the "Microsoft.Unix.SCXLog.Privileged.Datasource" for rule_a as a last resort. Suddenly, after importing the updated Management Pack, I finally received the alert I had been waiting for this whole time.
    Finally, after a lot of text, here are my questions:
    Could it be that the initial test with the Privileged Log Datasource somehow broke the agent on server_a so it could not monitor logfiles with the standard log datasource? Or does anyone have an idea what went wrong here?
    Like I said, both logfiles could be accessed and changed by the normal Action Account without any problems, so privileged rights are not needed. I even restarted the SCOM agent in case something hung.
    I hope I could make the problem clear to you. If not, don't hesitate to ask any questions.
    Thank you and kind regards,
    Patrick

    Hello!
    After all that text, I forgot the most essential information...
    We are currently using OpsMgr 2012 SP1 UR4; the monitored server (server_a) has agent version 1.4.1-292 installed.
    Thanks for the explanation of how the logprovider works. I tried to execute the logfilereader just to see if there are any errors and everything looks fine to me:
    ActionAccount @server_a:/opt/microsoft/scx/bin> ./scxlogfilereader -v
    Version: 1.4.1-292 (Labeled_Build - 20130923L)
    Here are the latest entries in the scx.log file:
    * Microsoft System Center Cross Platform Extensions (SCX)
    * Build number: 1.4.1-292 Labeled_Build
    * Process id: 23186
    * Process started: 2014-03-31T08:29:09,136Z
    * Log format: <date> <severity>     [<code module>:<process id>:<thread id>] <message>
    2014-03-31T08:29:09,138Z Warning    [scx.logfilereader.ReadLogFile:23186:140522274359072] scxlogfilereader - Unexpected exception: Could not find persisted data: Failed to access filesystem item /var/opt/microsoft/scx/lib/state/ActionAccount/LogFileProvider__ActionAccount_shome_sActionAccount_slogfilewithoutsudo.txtEDST02
    2014-03-31T08:29:09,138Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] LogFileProvider InvokeLogFileReader - Exception: Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4
    2014-03-31T08:29:09,138Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] BaseProvider::InvokeMethod() - Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4 - [/home/serviceb/ScxCore_URSP1_SUSE_110_x64/source/code/providers/logfile_provider/logfileprovider.cpp:442]
    * Microsoft System Center Cross Platform Extensions (SCX)
    * Build number: 1.4.1-292 Labeled_Build
    * Process id: 23284
    * Process started: 2014-03-31T08:30:06,139Z
    * Log format: <date> <severity>     [<code module>:<process id>:<thread id>] <message>
    2014-03-31T08:30:06,140Z Warning    [scx.logfilereader.ReadLogFile:23284:140016517941024] scxlogfilereader - Unexpected exception: Could not find persisted data: Failed to access filesystem item /var/opt/microsoft/scx/lib/state/ActionAccount/LogFileProvider__ActionAccount_shome_sActionAccount_stest.txtEDST02
    2014-03-31T08:30:06,142Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] LogFileProvider InvokeLogFileReader - Exception: Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4
    2014-03-31T08:30:06,143Z Warning    [scx.core.providers.logfileprovider:5209:140101980321536] BaseProvider::InvokeMethod() - Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4 - [/home/serviceb/ScxCore_URSP1_SUSE_110_x64/source/code/providers/logfile_provider/logfileprovider.cpp:442]
    Strangely, I could not access the "Action Account User" directory under /var/opt/microsoft/scx/log as the "ActionAccount" user. Is it OK for the directory to have the following rights: drwx------ 2 1001 users? Instead of "1001" it should say "ActionAccount", right?
    This could be a bit far-fetched, but perhaps the logfile provider can't access logfiles as the "ActionAccount" on this server because it needs to write to the scx.log file. But as the "ActionAccount" can't access the file, the logfile provider throws an error. And as the "Privileged Account" the rule works flawlessly, as the logfile provider running in root context can access everything.
    I don't know if that makes sense, but right now it sounds logical to me.
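
    One way to test the ownership suspicion above is to compare the owners of the SCX state and log directories against the passwd database. A minimal sketch (paths taken from the log excerpt; run on server_a):

        import os, pwd

        for path in ("/var/opt/microsoft/scx/lib/state/ActionAccount",
                     "/var/opt/microsoft/scx/log"):
            st = os.stat(path)
            try:
                owner = pwd.getpwuid(st.st_uid).pw_name
            except KeyError:
                # uid without a passwd entry, like the bare '1001' in the listing above
                owner = str(st.st_uid)
            print(path, "owner:", owner, "mode:", oct(st.st_mode & 0o777))

    If the owner comes back as a bare uid rather than the action account name, fixing the ownership with chown would be the next thing to try.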

  • Logfile Generation utilizing "Excel" (Creating and Appending Report)

    All,
    As always, thanks for the help you have given me in the past... especially the vets. I have tried to figure out a solution to my issue from the message board, but no solution seems to fit what I am doing.
    Here is my situation: I am using LabVIEW to test my product one unit at a time. I have always used TestStand and report generation from there, but this time it is strictly LabVIEW. This is my first attempt to create a logfile with Excel that appends one xls file every time one unit is tested.
    The way my test is set up now, I test and collect the data in an array for the logfile generation VI. I took several stabs at it and looked at examples, but can't figure out the direction I need to go to create this. Here are the parameters necessary for the logfile (spreadsheet):
    -All UUT's will go into one spreadsheet and the spreadsheet will be appended by adding new data in next available row.
    -Data is imported to spreadsheet in array format.
    -Test data that passes will be green, test data that fails will be red (I can figure this out, but this is why I need to use Excel)
    -I want to use Excel so I have more flexibility for graphs and things of that nature in the future.
    It seems rather simple, but not for me.....lol. If I go to the Report Generation Toolkit, I see "Create Report" and "Append Report"....but Append Report still wants the "report input" node wired. What do I wire that to? For example, if I have an Excel spreadsheet called hangover.xls, do I somehow wire hangover.xls to the input? I am having trouble finding answers. I would really appreciate a simple JPG or VI so I can understand the setup for what I want to do.
    Comments and links to threads/help appreciated!
    Ryan

    Hi Evan,
    Thanks for the other examples....I thought I was going to be able to manipulate them into what I want, but ended up spending about 6 hours playing with them, up to 2am. I am getting so frustrated with this. This is new ground for me; I have never experimented with logfile creation before. I am sorry to keep bothering you with this, but I am ready to pull my hair out. I attached a couple of VIs....Spreadsheet import is the main VI and report.vi is the sub.....I need to rename them better but haven't got there yet.
    First off, that VI you posted that I couldn't open, could you just take a JPG of the block diagram? That would really help.
    I need to create a spreadsheet with logfile data in rows. The spreadsheet is to be appended for each unit under test. Each unit under test gets one row and all data is written at the end of the test. If you look at the spreadsheet_import.vi, I am basically taking a bunch of 1D arrays of data to create one long 1D array for one row.
    Every month a new spreadsheet is created (so log file data is divided into months), and that is what the report.vi does....it looks to see if the filename has already been created and, if not, sends a boolean to the write-to-spreadsheet file to append. I reverted to "write to spreadsheet" because, for the life of me, I cannot figure out how to use the worksheet portion to do this. I would think this should be pretty simple, but I cannot figure it out, and it's not for lack of trying.
    If I use "write to spreadsheet", I am going to run into problems because I ultimately want to use an Excel template with formulas, but if I can figure it out, this will have to do.
    All I really want to do is create a spreadsheet if one doesn't exist or append if it does, combine all my 1D array data, and create one row with this data. The other issue I ran into before is I can't figure out how to tell Excel where the next row is.......UUGHHHH! This is definitely stressing me out as I have a deadline, and I will gladly send a case of beer to Norway for the help received.
    Dying Here,
    Ryan
    Attachments:
    Spreadsheet_import.vi 14 KB
    report.vi 33 KB
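
    Not a LabVIEW answer, but the create-or-append logic described above is compact enough to sketch in Python with openpyxl (an assumption; the file name and row contents are placeholders), which may help clarify the flow before wiring it up in the Report Generation Toolkit:

        import os
        from datetime import date
        from openpyxl import Workbook, load_workbook

        path = f"testlog_{date.today():%Y-%m}.xlsx"   # one spreadsheet per month
        row = ["UUT-0051", "PASS", 3.3001, 0.012]     # placeholder: one row per unit under test

        if os.path.exists(path):
            wb = load_workbook(path)                  # this month's log exists: append to it
        else:
            wb = Workbook()                           # first unit this month: create it
            wb.active.append(["serial", "result", "v_out", "ripple"])  # header row
        wb.active.append(row)   # lands on the next available row automatically
        wb.save(path)

    The append call answers the "where is the next row" question: the library tracks the last used row, so each test run just appends and saves.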

  • NSM Engine Start and Stops

    I have installed NSM 3.0.4.4 on SLES 11 SP1 / OES 11 SP1. After doing battle with certificates and other issues, I finally had it up and running. The problem comes when I attempt to complete the migration process from the previous NetWare server. The NSM engine stops running and shows its running state as unused. Restarting the engine via nsmengine-config starts the engine, but it inevitably stops. I have been able to get it to start and continue running by renaming the /var/opt/novell/storagemanager/engine/data folder and creating an empty one, which triggers a new setup wizard.
    I resigned myself to recreating the policies, so I ran through the wizard and did not import the old policies. I left the GSR Collector running overnight and found that the engine had once again stopped, and I am unable to get it to stay up. I have included the last lines of nsmengined.log. I am at my wits' end.
    Thank You,
    Kenneth
    01 2012-08-07 08:00:02 -14400 3 0001 30152 7ff78be7c710 ML: Called LoadStorageResourcesFromCache(), Result = 45.
    01 2012-08-07 08:00:02 -14400 5 0001 30152 7ff78be7c710 ML: Called LoadStorageResourcesFromDS(false), bResult = 1.
    01 2012-08-07 08:00:02 -14400 5 0001 30152 7ff78be7c710 ML: Called LoadAgentManager(), Result = 0.
    01 2012-08-07 08:00:02 -14400 5 0006 30152 7ff78be7c710 AM: Starting Delegation Subsystem...
    01 2012-08-07 08:00:02 -14400 5 0001 30152 7ff78be7c710 ML: Called AgentManager().ResumeDelegating(), Result = 0.
    01 2012-08-07 08:00:02 -14400 5 8008 30152 7ff7896a2710 CCStorageResources::WorkerThreadFunction() - Rebuild is in initial state.
    01 2012-08-07 08:00:02 -14400 5 0009 30152 7ff78be7c710 Sched: Attempted initial Database initialization, Result = 0.
    01 2012-08-07 08:00:02 -14400 5 0001 30152 7ff78be7c710 ML: Called LoadScheduler(), Result = 0.
    01 2012-08-07 08:00:02 -14400 5 0009 30152 7ff78be7c710 Sched: Starting Scheduler Subsystem...
    01 2012-08-07 08:00:02 -14400 5 0001 30152 7ff78be7c710 ML: Called Scheduler().ResumeSchedules(), Result = 0.
    01 2012-08-07 08:00:02 -14400 5 0001 30152 7ff78be7c710 ML: Called LoadEventMonitorListFromCache(), Result = 0.
    01 2012-08-07 08:00:02 -14400 5 0002 30152 7ff78be7c710 ML: Loading saved SR state information...
    01 2012-08-07 08:00:02 -14400 5 0002 30152 7ff78be7c710 GL: Failed to load the SR state information, nResult = 53.
    01 2012-08-07 08:00:02 -14400 5 0002 30152 7ff78be7c710 ML: Loading SR proxy information...
    01 2012-08-07 08:00:02 -14400 5 0002 30152 7ff78be7c710 GL: Attempted to load the SR proxy information, nResult = 0.
    01 2012-08-07 08:00:02 -14400 5 0004 30152 7ff7763b3710 PC: No policies currently available
    01 2012-08-07 08:00:02 -14400 5 0009 30152 7ff7773b5710 Sched: Loaded 0 tasks scheduled for the next 57598 seconds, Result = 0.
    01 2012-08-07 08:00:02 -14400 5 0009 30152 7ff7773b5710 Sched, ST: No Scheduled tasks set to execute today. Scheduled Tasks for tomorrow will be loaded in 57598 seconds.
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Policy Cache Rebuild complete.
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Rebuilding Managed Path Cache...
    01 2012-08-07 08:00:03 -14400 5 0004 30152 7ff78be7c710 MPC: Loading 0 path entries into cache...
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Managed Path Cache Rebuild complete.
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Rebuilding Event Cache...
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Event Cache Rebuild complete.
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Rebuilding Event Processor Data...
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Event Processor Data Rebuild complete.
    01 2012-08-07 08:00:03 -14400 5 0002 30152 7ff78be7c710 GL: Base schema appears to be properly extended.
    01 2012-08-07 08:00:03 -14400 5 0002 30152 7ff78be7c710 GL: Action Object schema appears to be extended.
    01 2012-08-07 08:00:03 -14400 5 0002 30152 7ff78be7c710 GL: Legacy Collaborative Homedirectory attribute cccFSFactoryGroupHomedir is available.
    01 2012-08-07 08:00:03 -14400 5 0002 30152 7ff78be7c710 GL: Legacy Quota Manager attribute cccFSFactoryHomeDirectoryQuota is available.
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Starting the Task Manager subsystem...
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Completed starting the Task Manager subsystem, bResult = 1.
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Starting the HTTPx Server subsystem...
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Completed starting the HTTPx Server subsystem, bResult = 1.
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Starting Incoming Event Parser subsystem...
    01 2012-08-07 08:00:03 -14400 2 0001 30152 7ff78be7c710 ML: Completed initialization. Engine is now running.
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 WD: Engine mainline thread is entering the Watch Dog function...
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 WD: Engine is running...
    01 2012-08-07 08:00:03 -14400 3 0001 30152 7ff78be7c710 WD: Detected that the Incoming Event Parser has terminated.
    01 2012-08-07 08:00:03 -14400 5 0001 30152 7ff78be7c710 ML: Starting Incoming Event Parser subsystem...
    01 2012-08-07 08:00:03 -14400 5 8006 30152 7ff75fb86710 TM: TaskManagerThread() - Thread has started.

    Kenneth,
    Please send us an email at [email protected] so we can look into
    this issue further. If you could attach your latest Engine log file from
    /var/opt/novell/storagemanager/engine/log, that would be helpful -- the
    excerpt you've posted doesn't seem to provide much information about the
    problem. Thanks!
    - NFMS Support Team
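
    For reference, a minimal sketch (combining the log directory from the reply with the nsmengined.log name mentioned in the post; Python purely for illustration) that pulls the watchdog lines, like the "WD: Detected ... terminated" entry above, out of a large Engine log before attaching it:

        # Assumed path: directory from the reply plus the file name from the post.
        log = "/var/opt/novell/storagemanager/engine/log/nsmengined.log"
        with open(log, errors="replace") as f:
            for line in f:
                if "WD:" in line or "terminated" in line:
                    print(line.rstrip())

    In this case the interesting entry is the watchdog detecting that the Incoming Event Parser terminated right after startup, which is what support will want to see in context.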

  • How do I replace a single record in a logfile?

    I'm developing a program that saves data to a logfile containing clusters. Every record (cluster) contains data from the tested item. There can be as many as 400+ items. The file refnum is opened during the whole test. If the test of item 50 (which will be record 49) failed and it needs to be tested again, I want to replace position 49 (counting from zero) with the "correct" test result, but LabVIEW insists on writing it as record 50, and thus item 50 has two records: 49 (failed; I want to delete this) and the correct record 50 (which should be record 49, though...). I thought one should connect the "pos offset" to the desired place where the record should be stored, but one can't connect to that when using a datalog file... I've also
    tried to use the "seek" VI to position the refnum at the desired location, but it ignores it... What shall I do? I have a working "viewer" for my logfiles, so they aren't corrupted; everything else is OK. Any suggestions?
    //Anders Boussard

    You do not have to use datalogging; you can write the data directly to disk. Writing to a file in binary is much less structured than datalogging, but it gives you the flexibility of overwriting records. With binary files you can even write several groups of data at a time, so the limiting factor tends to be the speed of your hard drive. Consider using the intermediate file write VIs. As a test, I was able to write different clusters of arrays having different lengths to a file and read them back perfectly. If your data is of uniform structure, you will have no problems doing the same. Look at the read from binary file.vi and write to binary file.vi for example code to get started. You will have to experiment to get the hang of it.
    Chapter 13 of the LabVIEW User Manual discusses how to do file I/O, and you can also find example code and tutorials on our website.
    Jeremy Braden
    National Instruments
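
    A minimal sketch of the overwrite-in-place idea described above, in Python rather than LabVIEW (the record layout is a placeholder): with fixed-size binary records, record 49 lives at byte offset 49 * record_size, so you can seek there and rewrite just that record.

        import struct

        # Placeholder record layout: item number, pass/fail flag, 4 measurements.
        RECORD = struct.Struct("<i?4d")               # fixed size, so offsets are computable

        def write_record(f, index, item, passed, readings):
            f.seek(index * RECORD.size)               # jump to the start of record 'index'
            f.write(RECORD.pack(item, passed, *readings))

        with open("test.log", "r+b") as f:            # r+b: overwrite in place, no truncation
            # Item 50 failed earlier; replace record 49 with the re-test result.
            write_record(f, 49, 50, True, [1.01, 2.02, 3.03, 4.04])

    The same arithmetic applies to LabVIEW's binary file VIs: as long as every record flattens to the same byte size, the position to write at is just index times record size.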

  • How to specify logfile size at the time of adding to a member.

    Hi All,
    I am in the process of upgrading Oracle 9.0 to 10.1.
    I am following the manual upgrade process. As per the recommendation from the pre-upgrade information script, I need to recreate the redo log files.
    Logfiles: [make adjustments in the current environment]
    --> E:\ORACLE\ORADATA\PRODB229\REDO03.LOG
    .... status="INACTIVE", group#="1"
    .... current size="1024" KB
    .... suggested new size="10" MB
    --> E:\ORACLE\ORADATA\PRODB229\REDO02.LOG
    .... status="INACTIVE", group#="2"
    .... current size="1024" KB
    .... suggested new size="10" MB
    --> E:\ORACLE\ORADATA\PRODB229\REDO01.LOG
    .... status="CURRENT", group#="3"
    .... current size="1024" KB
    .... suggested new size="10" MB
    WARNING: one or more log files is less than 4MB.
    Create additional log files larger than 4MB, drop the smaller ones and then upgrade.
    I can add a redo member with the command below, but I am not able to specify the size as 10M. I did some googling but had no luck with that.
    SQL> ALTER DATABASE ADD LOGFILE MEMBER 'E:\oracle\oradata\prodb229\REDO01.rdo' TO GROUP 1;
    but it fails
    ALTER DATABASE ADD LOGFILE MEMBER 'E:\oracle\oradata\prodb229\REDO01.rdo' TO GROUP 2 SIZE 10M;
    ERROR at line 1:
    ORA-00933: SQL command not properly ended
    ~Thnx

    If you add a logfile to an existing group, you cannot specify the size for that file.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_1004.htm#i2079942
    <quote>
    ADD [STANDBY] LOGFILE MEMBER Clause: Use the ADD LOGFILE MEMBER clause to add new members to existing redo log file groups. Each new member is specified by 'filename'. If the file already exists, it must be the same size as the other group members, and you must specify REUSE. If the file does not exist, Oracle Database creates a file of the correct size. You cannot add a member to a group if all of the members of the group have been lost through media failure.
    </quote>
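
    So the usual way to satisfy the pre-upgrade warning is to add new groups rather than members (a new group, unlike a member, accepts a SIZE clause), switch out of the small groups, and then drop them once they are INACTIVE. A sketch of those statements, driven from python-oracledb purely for illustration (credentials and the group 4 path are placeholders):

        import oracledb  # assumption: python-oracledb is available

        conn = oracledb.connect(user="sys", password="***", dsn="prodb229",
                                mode=oracledb.AUTH_MODE_SYSDBA)
        cur = conn.cursor()
        # ADD LOGFILE (a new group) accepts SIZE; ADD LOGFILE MEMBER does not.
        cur.execute("ALTER DATABASE ADD LOGFILE GROUP 4 "
                    "('E:\\oracle\\oradata\\prodb229\\REDO04.rdo') SIZE 10M")
        cur.execute("ALTER SYSTEM SWITCH LOGFILE")  # move CURRENT off the small group
        cur.execute("ALTER SYSTEM CHECKPOINT")      # let the old group go INACTIVE
        cur.execute("ALTER DATABASE DROP LOGFILE GROUP 1")

    Repeat the add/switch/drop cycle for each small group (and remove the old member files from disk afterwards; DROP LOGFILE does not delete them).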

  • Error while running adlnkoh.sh. Please check logfile

    Hi All,
    I got the following error while cloning, after running perl adcfgclone.pl dbTechStack on the DB tier:
    Starting ORACLE_HOME relinking...
    Instantiating adlnkoh.sh
    Starting relink of ORACLE_HOME - RDBMS
    Adding execute permission to :
    /erpapp/prod/proddb/10.2.0/appsutil/install/adlnkoh.sh
    Executing cmd :
    /erpapp/prod/proddb/10.2.0/appsutil/install/adlnkoh.sh
    adlnkoh.sh started at Wed Aug 3 13:57:29 UAE 2011
    logfile located in /erpapp/prod/proddb/10.2.0/install/make.log
    Error while running adlnkoh.sh. Please check logfile
    .end std out.
    .end err out.
    RC-00110: Error occurred while relinking of rdbms
    Raised by oracle.apps.ad.clone.ApplyDBTechStack
    RC-00119: Error occurred while relinking {0}
    Raised by oracle.apps.ad.clone.ApplyDBTechStack
    Completed relinking.
    ApplyDBTechStack Completed Successfully.
    When I checked the relink log:
    Starting: "make -if /erpapp/prod/proddb/10.2.0/rdbms/lib/ins_rdbms.mk rac_off"
    rm -f /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl
    rm: /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl not removed. Text file busy
    *** Error exit code 2 (ignored)
    cp /erpapp/prod/proddb/10.2.0/lib//libskgxpd.sl /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl
    cp: cannot create /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl: Text file busy
    *** Error exit code 1 (ignored)
    rm -f /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl
    rm: /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl not removed. Text file busy
    *** Error exit code 2 (ignored)
    cp /erpapp/prod/proddb/10.2.0/lib//libskgxns.sl \
    /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl
    This is the error.
    Please help me to resolve this error.

    EBS is 11.5.10.2 and the DB is 10.2.0.4.
    OS: HP-UX 11.23
    The errors I have posted are from the logs. The adcfgclone output on the DB tier is the same as posted above.
    The make.log excerpt above, from /erpapp/prod/proddb/10.2.0/install/make.log, continues:
    Starting: "make -if /erpapp/prod/proddb/10.2.0/rdbms/lib/ins_rdbms.mk rac_off"
    rm -f /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl
    rm: /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl not removed. Text file busy
    *** Error exit code 2 (ignored)
    cp /erpapp/prod/proddb/10.2.0/lib//libskgxpd.sl /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl
    cp: cannot create /erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl: Text file busy
    *** Error exit code 1 (ignored)
    rm -f /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl
    rm: /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl not removed. Text file busy
    *** Error exit code 2 (ignored)
    cp /erpapp/prod/proddb/10.2.0/lib//libskgxns.sl \
    /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl
    cp: cannot create /erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl: Text file busy
    *** Error exit code 1 (ignored)
    ar cr /erpapp/prod/proddb/10.2.0/rdbms/lib/libknlopt.a /erpapp/prod/proddb/10.2.0/rdbms/lib/ksnkcs.o
    Completed...
    Starting: "make -if /erpapp/prod/proddb/10.2.0/rdbms/lib/ins_rdbms.mk ioracle"
    chmod 755 /erpapp/prod/proddb/10.2.0/bin
    mv -f /erpapp/prod/proddb/10.2.0/bin/oracle /erpapp/prod/proddb/10.2.0/bin/oracleO
    mv: /erpapp/prod/proddb/10.2.0/bin/oracleO: cannot write: Text file busy
    *** Error exit code 1 (ignored)
    mv /erpapp/prod/proddb/10.2.0/rdbms/lib/oracle /erpapp/prod/proddb/10.2.0/bin/oracle
    mv: /erpapp/prod/proddb/10.2.0/bin/oracle: cannot write: Text file busy
    *** Error exit code 1 (ignored)
    chmod 6751 /erpapp/prod/proddb/10.2.0/bin/oracle
    Completed...
    Even after this error/warning I ran perl adcfgclone.pl dbconfig and got this error:
    Verifying Database Connection...
    RC-40201: Unable to connect to Database pcln.
    Enter the Database listener port [1521]:1521
    RC-40201: Unable to connect to Database pcln.
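
    For what it's worth, "Text file busy" (ETXTBSY) in the make.log means the relink is trying to replace binaries and libraries that are still mapped by running processes, so the database, listener, and anything else using this ORACLE_HOME need to be fully down before adcfgclone relinks. A small sketch (file paths taken from the make.log above; fuser is available on HP-UX) to see which processes still hold them:

        import subprocess

        busy_files = [
            "/erpapp/prod/proddb/10.2.0/bin/oracle",
            "/erpapp/prod/proddb/10.2.0/lib/libskgxp10.sl",
            "/erpapp/prod/proddb/10.2.0/lib/libskgxn2.sl",
        ]
        for path in busy_files:
            subprocess.run(["fuser", path])  # fuser prints the path and the PIDs using it

    Stopping those processes (or killing stray ones left over from the source system) and rerunning adcfgclone should let the relink replace the files cleanly.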

  • Create ONLINE logfile in physical standby database

    We have created a physical standby database with rman duplicate command on a remote server
    "duplicate target database for standby dorecover nofilenamecheck"
    When I look at the standby server, the online logfiles are not created; however, their entries are there in the V$LOG and V$LOGFILE views.
    I guess this is the default behaviour of the duplicate command in RMAN, and we cannot specify the LOGFILE clause when we create a standby database.
    Now the problem is that we cannot drop the online logfiles on the standby database, since their status is "CURRENT" or "ACTIVE".
    Since the online logfiles are not actually created, the "ALTER DATABASE CLEAR LOGFILE GROUP" command returns an error, as it cannot find the file on the server.
    So how can we drop the current/active online logfiles and add new ones in the standby DB?

    I'm assuming you have a physical standby. Here are the steps I did in the past:
    1) Create a backup controlfile.
    2) Bring the database back using the "recreate controlfile" statement in the trace file, BUT you need to remove or comment out the line that has the corrupt or missing redo log file. And don't forget to add the tempfile after you recreate the controlfile.
    Example:
    alter database backup controlfile to trace;
    STARTUP NOMOUNT
    CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
    LOGFILE
    GROUP 1 '/oracledata/orcl/redo01.log' SIZE 200M,
    GROUP 2 '/oracledata/orcl/redo02.log' SIZE 200M,
    GROUP 3 '/oracledata/orcl/redo03.log' SIZE 200M,
    # GROUP 3 '/oracledata/orcl/redo03.log' SIZE 200M
    -- STANDBY LOGFILE
    -- GROUP 10 '/oracledata/orcl/redostdby04.log' SIZE 200M,
    -- GROUP 11 '/oracledata/orcl/redostdby05.log' SIZE 200M
    DATAFILE
    '/oracledata/orcl/system01.dbf',
    '/oracledata/orcl/undotbs01.dbf',
    '/oracledata/orcl/sysaux01.dbf',
    '/oracledata/orcl/users01.dbf'
    CHARACTER SET WE8ISO8859P1
    If you just want to add the standby redo logs, then use this command:
    alter database add standby logfile
    '/<your_path>/redostdby01.log' size 200M reuse,

  • Ovi ... Store? - Let's go back to NSM...

    Nokia and/or Ovi team,
    I know Ovi Store launched recently, and there are lots of things you are probably working on to get it going the right way.
    I love looking for the best price when buying software. When I heard of Ovi Store, I knew there would be some software at a really tempting price, and so it is: Smartphoneware apps are really cheap (there may be other developers/apps as well, but I kept focus on Smartphoneware).
    Having made lots of previous purchases in other online stores (Handango, SymbianGear, Mobihand, SmartSym, Nokia Software Market, etc.), I thought it would be similar... I was way WRONG!
    Although there was no mention of how the new Ovi Store works, I decided to try it by buying an app and seeing how it worked. I chose a cheap yet useful one: Smartphoneware Best Taskman.
    I already had an old version (1.0) installed in my device, in trial mode.
    What comes below is a list of things that shocked me and are definitely not properly implemented... at all!!
    There should be an option to buy from a PC, like there was in Nokia Software Market (NSM).
    How come the app is automatically installed? What if I did not want to install it right away? (In fact, in my case I wanted to keep the old version and try the serial number once purchased, which leads to...)
    There are options to choose where to install the app (Phone memory or memory card); there should also be an option to install automatically or download for installing later.
    I got an e-mail with my purchase information, but there was no serial #. What if I want/need to reset the phone and reinstall the app? Or what if I want to install a new version and it requests the serial? As far as I've read, many users reported you can't even re-download/install the app; you have to purchase it again (or place a complaint and wait until you hear back from support, which, as a customer experience, sucks!)
    Even though I searched everywhere, there was no mention in the site or app about how the store works (the fact that it installs apps automatically, the fact that it doesn't give you a serial number, etc.). This kind of information should be posted somewhere on the Ovi Store site and/or app so users are aware (I know, this would kill your business...). Had I known it worked this way, I would have spent a little bit more and made sure I got a serial number.
    Hope you guys are already working (or will start working) on these issues ASAP; otherwise, I can assure you Ovi Store won't last too long... and won't make much profit for the company.
    Let's see it the other way as well: Maybe I'm a very dumb customer who is used to other "standard" online stores. Ovi Store is quite different.
    Suggestion: Create and promote a tutorial on how to use it and which options it has (including, if available, how to recover the software and or serial number(s)).
    A very frustrated customer.
    Gonzo

    It's great to know how things work first-hand.  If anyone wants to know how the store purchase process works, you can read this blog entry (among other similar blog postings) to get an overview:
    Buying through the Ovi store
    Lumia 920, Lumia 800
    Nokia N8-00 (NAM, Product Code: 059C8T6), Symbian Belle, Type RM-596, 111.030.0609
    Nokia 5800 XpressMusic (NAM, Product Code: 0577454) Software v51.2.007, Type RM-428

  • Acrobat 9 Pro Extended: How to activate logfiles?

    Dear members
    Does someone know how to activate or read out the logfiles of a 3D PDF conversion?
    I would like to convert Catia V5 data via the command line, but cannot find or activate any logs...
    Currently I'm only using the OLE objects "AcroExch.AVDoc" and "AcroExch.App". In the API reference at livedocs.adobe.com there is nothing about it.
    Thanks in advance for your help.
    Regards
    hanseat82

    That's clear; we are calling Acrobat over OLE via Perl. A scrap:
    use Win32::OLE;
    $infile = $ARGV[0];
    $outfile = $ARGV[1];
    $NOSAVE = -1;
    $PDSAVEFULL = 1;
    ## Create Objects
    $avdoc_obj = Win32::OLE->new('AcroExch.AVDoc') || die "new: $!";
    $app_obj = Win32::OLE->new('AcroExch.App') || die "new: $!";
    ## Open Inputfile
    $avdoc_obj->Open($infile, $infile);
    ## Get PDF Document as Object from Inputfile
    $pddoc_obj = $avdoc_obj->GetPDDoc();
    And I found the Java code from you, Adobe:
    #            catch (ConversionException e)
    #                System.out.println(e.getErrorCode());
    #                System.out.println(e.getConversionLog());
    Can you give me the name of the import module for the class "ConversionException", or, in general, tell me how to go on with that?
    Thanks and regards
    hanseat
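
    For comparison, the same two OLE objects can be driven from Python via pywin32 (an assumption; the AcroExch identifiers and the general flow come from the Perl scrap above, and the input path is a placeholder). Like the Perl version, this exposes no conversion log through the OLE interface:

        import win32com.client  # assumption: pywin32 installed on a machine with Acrobat

        avdoc = win32com.client.Dispatch("AcroExch.AVDoc")
        app = win32com.client.Dispatch("AcroExch.App")
        if not avdoc.Open(r"C:\data\input.CATPart", "input"):  # placeholder input file
            raise RuntimeError("AVDoc.Open failed")
        pddoc = avdoc.GetPDDoc()   # the converted document, as in the Perl scrap

    The ConversionException/getConversionLog calls quoted above are from the separate Java API, so they would not be reachable from this OLE route.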

  • How to include a button in report header like rowspan? &logfile generation?

    I am really new to this forum and I have some questions on APEX HTML DB:
    The project I need to work on is like this: based on some criteria, I need to do a database lookup. Then, in the result, I need to be able to edit the individual records. So far it is no problem. Here comes the part that I am not sure how to handle, or whether it can be handled at all.
    1. We need the ability to copy down certain column values to selected rows. Therefore, a "copy down" button needs to be included right under the column header cell. For example, based on certain criteria, the following product information is returned: product description, serial number, price, category, etc. The "COPY DOWN" button needs to be listed right under the "serial number" table header and before the first row of the result, like "rowspan" in an HTML table header. Once you click on "copy down", the first row's serial number will be copied to all selected rows' "serial number". - Can a button be put right under a column header? If so, can I even reference the cell value in JavaScript?
    2. Since we are doing a batch update, I need the ability to maintain a logfile that includes the date and time and what information was modified. - Can I generate a logfile from APEX HTML DB?
    I am not sure APEX HTML DB is a good candidate for the above two tasks.
    Your help is greatly appreciated.

    Hi user572980,
    Welcome to APEX... the more you do with it, the more you'll like it.
    1) Are you using a tabbed form? Or are you in a report? I'm trying to get a better idea of what you're trying to do. Did you already have a look at the templates? You can have a template for the report, for example, which you can adapt as you wish (in your case, to put a button under the column header).
    You can also reference the cell values, but for that I need to know where you are (form, report). When you right-click on the page and have a look at the page source, you can see what item (reference) it is.
    2) You can make a logfile, yes. Are you using packages to do the batch update? In them you can add some code to store the history. In other words, I don't think it exists out of the box in APEX, but with PL/SQL you can do it (so also in APEX). For example, the PL/SQL package stores the change in a history table, and you build a report on top of that.
    Dimitri
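
    A minimal sketch of that history-table idea, shown as plain SQL driven from python-oracledb for illustration only (the table and column names are made up; in APEX the INSERT would live inside the PL/SQL package that performs the batch update):

        import oracledb  # assumption: python-oracledb is available

        conn = oracledb.connect(user="app", password="***", dsn="apexdb")
        cur = conn.cursor()
        # Hypothetical audit table: one row per changed column value.
        cur.execute(
            "INSERT INTO update_history "
            "(changed_on, item_id, column_name, old_value, new_value) "
            "VALUES (SYSDATE, :1, :2, :3, :4)",
            [1234, "SERIAL_NUMBER", "SN-001", "SN-002"])
        conn.commit()

    A report on update_history then gives the "logfile" view with the date, time, and what was modified.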

  • Oracle initialization in progress due to logfile corruption - startup error

    Hi All!
    I'm using Oracle Release 9.2.0.1.0. Due to a power outage, it seems that one of its redo files is corrupt and the database is not starting. My database is running in NOARCHIVELOG mode and I do not have any backup of my data.
    I have performed the following actions, but in vain. Please help me get it started.
    Thanks in advance.
    Muhammad Bilal
    SQL*Plus: Release 9.2.0.1.0 - Production on Mon Jan 4 19:22:16 2010
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    SQL> show user
    USER is "SYS"
    SQL> shutdown immediate;
    ORA-01109: database not open
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 135338868 bytes
    Fixed Size 453492 bytes
    Variable Size 109051904 bytes
    Database Buffers 25165824 bytes
    Redo Buffers 667648 bytes
    Database mounted.
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 88880 change 182882946 time 01/04/2010
    08:33:19
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> select group#,sequence#,archived,status from v$log;
    GROUP# SEQUENCE# ARC STATUS
    1 911 NO CURRENT
    2 909 NO INACTIVE
    3 910 NO INACTIVE
    SQL> alter database clear logfile group 1;
    alter database clear logfile group 1
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> alter database clear unarchived logfile group 1;
    alter database clear unarchived logfile group 1
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'D:\ORACLE\ORADATA\DB\REDO01.LOG';
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> alter system switch logfile;
    alter system switch logfile
    ERROR at line 1:
    ORA-01109: database not open
    SQL> ALTER DATABASE OPEN RESETLOGS;
    ALTER DATABASE OPEN RESETLOGS
    ERROR at line 1:
    ORA-01139: RESETLOGS option only valid after an incomplete database recovery
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'D:\ORACLE\ORADATA\DB\REDO01.LOG';
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO01.LOG';
    alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> select member,status from v$logfile;
    MEMBER STATUS
    D:\ORACLE\ORADATA\DB\REDO03.LOG STALE
    D:\ORACLE\ORADATA\DB\REDO02.LOG
    D:\ORACLE\ORADATA\DB\REDO01.LOG
    SQL> alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO03.LOG';
    Database altered.
    SQL> alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO01.LOG';
    alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of thread 1
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> alter database clear logfile 'D:\ORACLE\ORADATA\DB\REDO02.LOG';
    Database altered.
    SQL> recover database until cancel;
    ORA-00279: change 182763162 generated at 01/03/2010 20:00:21 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORA92\RDBMS\ARC00911.001
    ORA-00280: change 182763162 for thread 1 is in sequence #911
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'
    ORA-01112: media recovery not started
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'
    SQL> recover database;
    ORA-00283: recovery session canceled due to errors
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 88880 change 182882946 time 01/04/2010 08:33:19
    ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\DB\REDO01.LOG'
    SQL> recover database until cancel;
    ORA-00279: change 182882944 generated at 01/04/2010 08:33:10 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORA92\RDBMS\ARC00911.001
    ORA-00280: change 182882944 for thread 1 is in sequence #911
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    AUTO
    ORA-00308: cannot open archived log 'D:\ORACLE\ORA92\RDBMS\ARC00911.001'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-00308: cannot open archived log 'D:\ORACLE\ORA92\RDBMS\ARC00911.001'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'
    SQL> select group#,sequence#,archived,status from v$log;
    GROUP# SEQUENCE# ARC STATUS
    1 911 NO CURRENT
    2 0 NO UNUSED
    3 0 NO UNUSED
    SQL> alter system switch logfile;
    alter system switch logfile
    ERROR at line 1:
    ORA-01109: database not open
    SQL> ALTER SYSTEM CHECKPOINT GLOBAL;
    ALTER SYSTEM CHECKPOINT GLOBAL
    ERROR at line 1:
    ORA-01109: database not open
    SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    ORA-00279: change 182763162 generated at 01/03/2010 20:00:21 needed for thread 1
    ORA-00289: suggestion : D:\ORACLE\ORA92\RDBMS\ARC00911.001
    ORA-00280: change 182763162 for thread 1 is in sequence #911
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'
    ORA-01112: media recovery not started
    SQL> ALTER DATABASE OPEN RESETLOGS;
    ALTER DATABASE OPEN RESETLOGS
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\DB\SYSTEM01.DBF'

    Hi Bilal,
    1) Take a trace of the controlfile; the file will be in the udump destination:
    SQL> alter database backup controlfile to trace;
    2) Take a cold backup of the whole database... IMMEDIATELY.
    3) Bring in another PC and install the same version of the Oracle software on it.
    4) Copy the datafiles, parameter file, listener.ora and tnsnames.ora from the backup.
    5) Edit the parameter file and make the necessary changes:
    - control files' new location
    - database's new name
    - background dump destination's new location
    - user dump destination's new location
    - core dump destination's new location
    Then save the parameter file as init<SID>.ora and copy it to the ORACLE_HOME\database directory.
    6) Edit the trace file you got in step 1 and remove everything above STARTUP NOMOUNT and below CHARACTER SET XXXXXXX.
    Change the paths of the datafiles and logfiles (as per the physical structure of the new database), change REUSE to SET, set the new database name, and change NORESETLOGS to RESETLOGS in that trace file, as we are not using the logs from the source database.
    E.g.: CREATE CONTROLFILE SET DATABASE "DG9A" RESETLOGS
    Save that file as create_ct.sql.
    7) Create an Oracle service using the oradim utility from the command prompt:
    C:\> oradim -new -sid SIDNAME -intpwd fbifbi -startmode auto -pfile d:\oracle\ora81\database\initSID.ora
    (use whatever name you gave in the parameter file, adjusted to your environment)
    8) Make the corresponding changes in listener.ora and tnsnames.ora for the new machine.
    9) Set the Oracle SID and log into SQL*Plus:
    C:\> set oracle_sid=sidname
    C:\> sqlplus "/ as sysdba"
    SQL> @create_ct.sql
    Then open the database with:
    SQL> alter database open resetlogs;
    Check Google if you are confused about creating the controlfile and the Oracle service; search for cold database cloning.
    Hope you will recover.
    Regards
    Edited by: hungry_dba on Jan 5, 2010 9:34 AM
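    A minimal sketch of what the edited create_ct.sql from step 6 can end up looking like. The SID "CLONE", the E:\ paths, the log sizes and the character set below are placeholders, not values from this system; keep whatever your own trace file lists:

    STARTUP NOMOUNT
    CREATE CONTROLFILE SET DATABASE "CLONE" RESETLOGS NOARCHIVELOG
        MAXLOGFILES 16
        MAXLOGMEMBERS 3
        MAXDATAFILES 100
    LOGFILE
      GROUP 1 'E:\ORACLE\ORADATA\CLONE\REDO01.LOG' SIZE 100M,
      GROUP 2 'E:\ORACLE\ORADATA\CLONE\REDO02.LOG' SIZE 100M,
      GROUP 3 'E:\ORACLE\ORADATA\CLONE\REDO03.LOG' SIZE 100M
    DATAFILE
      'E:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF',
      'E:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF',
      'E:\ORACLE\ORADATA\CLONE\USERS01.DBF'
    CHARACTER SET WE8MSWIN1252
    ;

    SET (rather than REUSE) tells Oracle this is a new database name, and RESETLOGS matches the ALTER DATABASE OPEN RESETLOGS that follows the script.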

  • Problem to send result from log file, the logfile is too large

    Hi SCOM people!
    I have a problem when monitoring a log file on a Red Hat system: I get an alert telling me that the log file result is too large to send (see the alert context below). My guess is that the server logs too much during the five-minute interval between SCOM scans.
    Any ideas how to solve this?
    Date and Time: 2014-07-24 19:50:24
    Log Name: Operations Manager
    Source: Cross Platform Modules
    Event Number: 262
    Level: 1
    Logging Computer: XXXXX.samba.net
    User: N/A
     Description:
    Error scanning logfile /xxxxxxxx/server.log on xxxxx.xxxxx.se as user <SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser>; The operation succeeded and cannot be reversed but the result is too large to send.
    Event Data:
    <DataItem type="System.XmlData" time="2014-07-24T19:50:24.5250335+02:00" sourceHealthServiceId="2D4C7DFF-BA83-10D5-9849-0CE701139B5B">
      <EventData>
        <Data>/xxxxxxxx/server.log</Data>
        <Data>xxxxx.xxxxx.se</Data>
        <Data><SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser></Data>
        <Data>The operation succeeded and cannot be reversed but the result is too large to send.</Data>
      </EventData>
    </DataItem>

    Hi Fredrik,
    At any one time, SCX can return 500 matching lines. If you're trying to return more than 500 matching lines, SCX will throttle your result to 500 lines (that is, it'll return 500 lines, note where it left off, and pick up from there the next time the log files are scanned).
    Now, be aware that Operations Manager will "cook down" multiple regular expressions to a single agent query. This is done for efficiency. What this means: if you have 10 different, unrelated regular expressions against a single log file, all of these will be "cooked down" and presented to the agent as one single request. However, these separate regular expressions are, collectively, limited to 500 matching lines. Hope this makes sense.
    This limit is set because (at least at the time) we didn't think Operations Manager itself could handle a larger response on the management server. That is, it's not an agent issue as such; it's a management server issue.
    So, with that in mind, you have several options:
    If you have separate RegEx expressions, you can reconfigure your logging (presumably done via syslog?) to route the larger log messages to a separate log file. This helps with cook down, but ultimately the limit of 500 RegEx results is still there; you're just mitigating cook down. (A sketch of such a syslog split follows this post.)
    If a single RegEx expression matches more than 500 lines, there is no workaround for this today. It is a hardcoded limit in the agent and can't be overridden.
    Now, if you're certain that your regular expression matches fewer than 500 lines, yet you're getting this error, then I'd suggest contacting Microsoft Support Services to open an RFC and have the issue escalated to the product team. Due to a logging issue within logfilereader, I'm not certain you can enable tracing to see exactly what's going on (although you could use command-line queries to see what's happening internally). This is involved enough that it's best to get Microsoft Support involved.
    But as I said, this is only useful if you're certain that your regular expression matches fewer than 500 lines. If you are matching more than that, it is a known restriction today. But with an RFC, even that could at least be evaluated, to see exactly what load more than 500 matches would put on the management server.
    /Jeff
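    If you do split the logging as suggested above, here is a minimal sketch of a syslog-side split. It assumes rsyslog v7+ syntax and an application tag of "myapp"; both are placeholders, so adjust them to your distribution and logger:

    # /etc/rsyslog.d/30-split-myapp.conf
    # Route the chatty application's messages to a dedicated file so each
    # SCOM log file rule scans a smaller log and matches fewer lines.
    if $programname == 'myapp' then /var/log/myapp/server.log
    & stop

    After dropping the file in place, restart rsyslog and point a separate SCOM log file monitor at the new path.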

  • How can I download the logfile of a session?

    Hi gurus,
    Can anyone give me code to download the error records stored in the logfile?

    Hi Satheesh,
    There is no option in SM35 to download the log directly, so we use a program: run report RSBDCLOG with the session name on the selection screen. Alternatively, the tables BDCLM and BDCLD capture the log details for the session. The session has to be processed first; only then is the log created. After that, capture the information into an internal table using BDCLM and BDCLD and download it from there.

Maybe you are looking for

  • Can't Install iTunes 8.1

    i have upgraded my iTunes each time there is a new version. and now i can't update to 8.1 i get a Error Message of. "There is a problem with this windows Installer package, A program run as part of the setup did not finish as expected. Contact your s

  • Book Module - how to change page size?

    I'd like to design book pages for my existing hard covers - A4 landscape and portrait. Also for specific 12x12 cover which takes 11.69" x 12" paper. Have watched various YouTubes, search within support - but haven't found a reference to add preferred

  • Deletion of a particular string in a sentence

    Hi , I have an internal table with lines of strings. Within each line i need to search for a string and delete it.. Can you help me give some ideas as to how to do this? eg : Single line in internal table would be : '<LS>Table fields:</> The field na

  • GetNameInNamespace

    I am attempting to run an application and I am getting an error java.lang.NoSuchMethodException: getNameInNamespace. My code does not mention this method and the jar that has it is in my classpath. So I was wondering. where does this method live and

  • Installing Snow Leopard on 10.5.8 macbook.

    I was trying to install snow leopard on 10.5.8 white macbook. It would not let me. I found a thread that said boot from disc - partition drive setting aside 5GB to install snow leopard- and it will work to install to the other ~75GB, or whatever. I'v