No userstore Active in trc file

Hi all, I need some big help here.
I'm upgrading my Java server from 6.40 SP19 to 7.0 SP13 (the installation default is SP9; I then put SP13 into the EPS inbox for JUPGRADE).
My OS is AIX 5.3 ML6,
my DB Oracle 10.2.0.2.
My JUPGRADE is now in the START_J2EE_INITIAL phase,
where I hit an error and found the message "No userstore is set."
My understanding is that during the upgrade SAP* takes care of all the logins, right?
I found a couple of notes such as Note 1042418 - "No active userstore is set." after upgrade to SPS11/12,
but I don't think it is relevant. I also found a couple of messages in the forum, one of them about the database character set, which is also not applicable to our problem.
SCS01 starts fine; only the JC00 instance cannot be started because of the message "No user store active".
Any suggestions, guys?
Message was edited by:
        Muda Ikhsan

different error coming up

Similar Messages

  • SQL Nexus - ReadTrace - Unable to load Trc file

    When loading the TRC file, I get the following error.
    Windows 7 Sp1 
    SQL 2012 Developer Edition (Microsoft SQL Server 2012 - 11.0.2100.60 (Intel X86) )
    Here is the ReadTrace Log
    03/13/15 13:53:05.247 [0X00001F94] Attempting to cleanup existing RML files from previous execution
    03/13/15 13:53:05.248 [0X00001F94] Using extended RowsetFastload synchronization
    03/13/15 13:53:05.249 [0X00001F94] Establishing initial database connection
    03/13/15 13:53:05.249 [0X00001F94] Server: SQL2012
    03/13/15 13:53:05.250 [0X00001F94] Database: sqlnexus
    03/13/15 13:53:05.250 [0X00001F94] Authentication: Windows
    03/13/15 13:53:05.332 [0X00001F94] Using SQLOLEDB version 11.0.2100.60
    03/13/15 13:53:05.501 [0X00001F94] Connected to SQL Server Version, Major: 11, Minor: 0, Build: 2100
    03/13/15 13:53:05.501 [0X00001F94] Creating or clearing the performance database
    03/13/15 13:53:06.928 [0X00001F94] Processing file: C:\Temp\SQLDiagOutput\SQL2012_SQLDIAG__sp_trace.trc (SQL 2012 / SQL Azure*)
    03/13/15 13:53:06.928 [0X00001F94] Validating core events exist
    03/13/15 13:53:06.929 [0X00001F94] Validating necessary events exist for analysis
    03/13/15 13:53:06.929 [0X00001F94] WARNING: The following trace events were not captured: [SP:StmtStarting, SP:StmtCompleted, Showplan Statistics]. Review the help file to ensure that you have collected the appropriate set of events and columns for your intended
    analysis.
    03/13/15 13:53:06.946 [0X00001F94] Events Read: 1000 Queued: 599 Processed/sec: 213
    03/13/15 13:53:06.959 [0X00001F94] Events Read: 2000 Queued: 1285 Processed/sec: 377
    03/13/15 13:53:06.968 [0X00001F94] Events Read: 3000 Queued: 2285 Processed/sec: 376
    03/13/15 13:53:06.991 [0X00001F94] Events Read: 4000 Queued: 3285 Processed/sec: 371
    03/13/15 13:54:23.751 [0X000010A4] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.694 [0X00001534] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.771 [0X00001534] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.771 [0X00001534] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.776 [0X00000970] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X00001B94] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X00001CEC] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X00001630] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X00000B34] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X00001588] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X00001804] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X00001148] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X0000073C] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X00001E7C] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.776 [0X00000970] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.776 [0X00001B94] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.776 [0X00001CEC] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.777 [0X00001630] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.777 [0X00000B34] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.777 [0X00001588] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.777 [0X00001804] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.777 [0X00001148] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.777 [0X0000073C] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.781 [0X00001E7C] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.781 [0X00000970] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.781 [0X00001B94] ERROR: Worker message stream was expected and does not exist for worker pool id: 1
    03/13/15 13:54:23.781 [0X00001CEC] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.781 [0X00001630] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.781 [0X00000B34] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.781 [0X00001588] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.781 [0X00001804] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.781 [0X00001148] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.781 [0X0000073C] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.781 [0X00001E7C] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    03/13/15 13:54:23.792 [0X0000073C] ERROR: Unable to create worker thread failed with operating system error 0x00000008 (Not enough storage is available to process this command)
    03/13/15 13:54:23.792 [0X0000073C] ERROR: Worker stream initialization failed.
    03/13/15 13:54:23.792 [0X0000073C] ERROR: Worker message stream was expected and does not exist for worker pool id: 0
    I have plenty of disk space to accommodate loading the TRC file, so I'm still not sure why we are getting the above messages.
    Any help much appreciated.

    Hi Vijay313,
    SQL Nexus gives you the option of breaking down the activity of each SPID into individual .trc files, but they can only be written to the %TEMP%\RML folder on your machine. So the error may occur because the TEMP directory does not have sufficient disk space to accommodate all the session-specific trace files (at times a few hundred GB of data).
    To work around this issue, you can choose one of the options below.
    1. Change the %TEMP% environment path on your machine to a different drive.
    2. Use ReadTrace.exe (highly recommended) to generate the .trc files in the required path.
    3. Another option, which cannot be controlled from SQL Nexus, is to add SPID, Hostname and Application filters while importing the data.
    For more details, please review the following link.
    http://troubleshootingsql.com/tag/data-analysis/
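As a quick sanity check on the disk-space diagnosis above, you can verify how much free space the drive holding %TEMP% actually has before re-importing. A minimal Python sketch (the /tmp fallback is an assumption for non-Windows shells):

```python
import os
import shutil

def temp_free_gb(path=None):
    """Return free space (in GB) on the drive holding the given path.

    Defaults to the TEMP directory, where SQL Nexus/ReadTrace writes
    its per-SPID .trc files under %TEMP%\\RML.
    """
    path = path or os.environ.get("TEMP", "/tmp")
    usage = shutil.disk_usage(path)
    return usage.free / (1024 ** 3)

print(f"Free space on TEMP drive: {temp_free_gb():.1f} GB")
```

If the number is far smaller than the trace workload, option 1 (moving %TEMP%) is the likely fix.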
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • What does .trc file content mean?

    Under my bdump dir, the system always generates .trc files with content like:
    *** SERVICE NAME:() 2006-10-04 13:45:51.349
    *** SESSION ID:(211.1) 2006-10-04 13:45:51.349
    kcrrwkx: nothing to do (start)
    *** 2006-10-04 13:46:51.227
    tkcrrxmp: Stopping ARC2 to reduce ARCH processes from 3 to 2
    kcrrwkx: nothing to do (end)
    or
    *** SERVICE NAME:() 2006-10-04 13:45:51.303
    *** SESSION ID:(212.1) 2006-10-04 13:45:51.303
    kcrrwkx: nothing to do (start)
    kcrrwkx: nothing to do (end)
    *** 2006-10-04 13:50:51.197
    kcrrwkx: nothing to do (start)
    What do these lines mean:
    kcrrwkx: nothing to do (start)
    kcrrwkx: nothing to do (end)
    This is a test 10.2.0.2 server on RHEL3, so there is very little activity. What is that message about?

    Hi, I have the same problem too. Below are the ARC trace file name and its content (pitt_arc0_23705.trc); any help is much appreciated.
    PITT is the database name
    //apps/pitdbt1/oracle/admin/pitt/bdump/pitt_arc0_23705.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /apps/pitdbt1/oracle/product/10.2.0/db_1
    System name: Linux
    Node name: pitorastg01
    Release: 2.6.9-42.ELsmp
    Version: #1 SMP Wed Jul 12 23:32:02 EDT 2006
    Machine: x86_64
    Instance name: pitt
    Redo thread mounted by this instance: 1
    Oracle process number: 14
    Unix process pid: 23705, image: oracle@pitorastg01 (ARC0)
    *** SERVICE NAME:() 2007-03-16 10:13:29.999
    *** SESSION ID:(541.1) 2007-03-16 10:13:29.999
    kcrrwkx: nothing to do (start)
    *** 2007-03-16 10:14:29.835
    tkcrrxmp: Stopping ARC2 to reduce ARCH processes from 3 to 2
    kcrrwkx: nothing to do (end)
    *** 2007-03-16 10:15:34.852
    kcrrwkx: nothing to do (end)
    *** 2007-03-16 10:16:34.863
    kcrrwkx: nothing to do (end)
    *** 2007-03-16 10:17:34.874
    kcrrwkx: nothing to do (end)
    *** 2007-03-16 10:18:34.885
    kcrrwkx: nothing to do (end)
    *** 2007-03-16 10:19:34.897
    kcrrwkx: nothing to do (end)
    *** 2007-03-16 10:20:34.908
    kcrrwkx: nothing to do (end)
    *** 2007-03-16 10:21:34.920
    kcrrwkx: nothing to do (end)
    *** 2007-03-16 10:22:34.931

  • How to find the correct control file script trace (.trc) file in /bdump

    Hi Guys
    This is probably the most childish query in this forum...
    I want to know how to find the correct trace file after issuing ALTER DATABASE BACKUP CONTROLFILE TO TRACE at the SQL prompt to create a control file script.
    I find it a bit confusing to go through hundreds of .trc files with the same date and almost the same time in the /bdump directory to find the correct one.
    If we have to find the alert log file in /bdump, $ ls -l al* gets us the alert log; is there a similar way to find the control file script trace file?
    Thanks & regards
    MZ

    MZ_AppsDBA wrote:
    I want to know how to find the correct trace file after issuing ALTER DATABASE BACKUP CONTROLFILE TO TRACE at the SQL prompt to create a control file script. ... Is there a similar way to find the control file script trace file?

    Creation of the trace does not happen automatically. What script do you have that creates the control file trace, and when does it run? Look for files in that time frame. Better yet, modify that script to name the file explicitly: BACKUP CONTROLFILE TO TRACE AS ....
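If the trace has already been written without a custom name, one alternative is to scan the dump directory for the file that actually contains the CREATE CONTROLFILE script. A hedged Python sketch (the directory path and file layout are assumptions):

```python
import glob
import os

def find_controlfile_trace(dump_dir):
    """Return the newest .trc file in dump_dir whose text contains a
    CREATE CONTROLFILE script, or None if no such file is found."""
    candidates = []
    for path in glob.glob(os.path.join(dump_dir, "*.trc")):
        try:
            with open(path, errors="ignore") as f:
                if "CREATE CONTROLFILE" in f.read():
                    candidates.append(path)
        except OSError:
            pass  # skip files that disappear or are unreadable
    return max(candidates, key=os.path.getmtime) if candidates else None
```

For example, find_controlfile_trace("/u01/app/oracle/admin/PROD/bdump") would return the most recent matching trace, if one exists.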

  • How to actively save data from GPIB to a file?

    Hello, everybody!
    I want to actively save the data from GPIB to a file from the start of the measurement. Saving to file only at the end of the measurement is somewhat risky for losing data, because my measurements take a long time (e.g. 24-48 hours).
    Thank you in advance for anybody's help!

    Thanks Dennis,
    I have the append working, but I still have one small problem: between each line there is a blank line. Example below:
    16:40:33 54.24
    16:40:34 54.23
    16:40:35 54.24
    I want to get rid of the blank lines in between. Do you have an idea about it?
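The blank lines usually come from writing an extra end-of-line character on each append. As an illustration only (not LabVIEW itself), a small Python sketch that strips blank lines from an existing log file:

```python
def clean_blank_lines(src, dst):
    """Copy a log file, dropping empty lines (a common artifact of a
    record that already ends in a newline being appended with '\n' again)."""
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            if line.strip():  # keep only lines with visible content
                fout.write(line)
```

The real fix, of course, is to append each record with exactly one end-of-line character in the acquisition loop itself.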

  • Change status(Active/Inactive) of file adapter by the external command

    Hi, all.
      Is it possible to change the status (Active/Inactive) of a file adapter by an external command?
      Let's say, something like
      "$ switch_file_adapter_status.sh <File Adapter name> <active|inactive>"
      Actually, we have the following requirement for the file adapter.
      For example, we have many "file adapter to R/3" scenarios and we only want to allow data transfer between 9:00 a.m. and 9:00 p.m. (the R/3 service time).
      To achieve this, one option would be to switch the file adapter status between active and inactive via an external command line, controlled by a job scheduler (like Tivoli).
      Does XI (3.0 or above) have this kind of feature?
      Best Regards.

    Hi,
    I don't think that this is possible.
    But one solution for your problem could be a job on XI that processes the queue:
    let the adapter work the whole time and send the messages to an XI queue that does not process them automatically.
    You can then trigger the XI queue with a job, like normal R/3 jobs.
    Hope that helps,
    Regards,
    Robin

  • Error shown in t1.trc file

    Hello all,
    I am working on the "EDI X12 over the Internet" transaction given in the user guide. After completing everything in the B2B UI tool on both the Acme and GlobalChips servers, I ran deq.bat on both servers with subscriber=b2berroruser and wait=1 in ipdequeue.properties, and nothing was generated in the t1.trc file. So I ran enq_850.bat on the Acme side and enq_855.bat on the GlobalChips side; on both sides Action=Null, and I got the message below in the t1.trc file on the Acme server.
    Can anyone please suggest what I should do next and what the actual problem is?
    MsgID = C0A8010111EAD13D58B00000DADA4C00-1
    ReplyToMsgID = null
    FromParty = GlobalChips
    ToParty = Acme
    EventName = Exception
    DoctypeName = Exception
    DoctypeRevision = 1.0
    MsgType = 3
    payload length = 8063
    *&lt;Exception xmlns="http://integration.oracle.com/B2B/Exception" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"&gt;*
    *&lt;correlationId&gt;null&lt;/correlationId&gt;*
    *&lt;b2bMessageId&gt;C0A8010111EAD13D58B00000DADA4C00-1&lt;/b2bMessageId&gt;*
    *&lt;errorCode&gt;AIP-50034&lt;/errorCode&gt;*
    *&lt;errorText&gt;Validation error and cannot create Functional Acknowledgment&lt;/errorText&gt;*
    *&lt;errorDescription&gt;*
    *&lt;![CDATA[Machine Info: (mcity94)*
    *Validation of Interchange parameters failed. Please verify all the Interchange parameters in the B2B configuration match the Group parameters in the message. Make sure that the ecs file for this Interchange is valid.*
    *Error Brief :*
    *5082: XEngine error - Guideline look-up failed.*
    *iAudit Report :*
    *&lt;?xml version="1.0" encoding="UTF-16"?&gt;&lt;AnalyzerResults Guid="{C4E22FFB-856B-4C67-BC08-6383C138EFBD}" InterchangeReceived="1" InterchangeProcessed="1" InterchangeAccepted="0"&gt; &lt;ExecutionDate&gt;Tuesday, January 06, 2009&lt;/ExecutionDate&gt; &lt;ExecutionTime&gt;11:23:24 PM (India Standard Time)&lt;/ExecutionTime&gt; &lt;AnalyzerReturn&gt;Failed&lt;/AnalyzerReturn&gt; &lt;NumberOfErrors&gt;1&lt;/NumberOfErrors&gt; &lt;ErrorByCategory&gt; &lt;Category Name="Rejecting"&gt; &lt;Severity Name="Normal"&gt;1&lt;/Severity&gt; &lt;/Category&gt; &lt;/ErrorByCategory&gt; &lt;Status&gt;Finished&lt;/Status&gt; &lt;DataFile&gt; &lt;FilePath&gt;D:\oracle\OraJ2EE\ip&lt;/FilePath&gt; &lt;FileName/&gt; &lt;LastModified/&gt; &lt;FileSize/&gt; &lt;DataURL&gt;file:\\D:\oracle\OraJ2EE\ip&lt;/DataURL&gt; &lt;/DataFile&gt; &lt;Interchange Guid="{CCCD3245-D8CC-4126-8C87-F1FA258BB076}" InterchangeAckCode="R" FunctionalGroupReceived="1" FunctionalGroupProcessed="1" FunctionalGroupAccepted="0" RError="1" NError="0" OtherWI="0"&gt; &lt;DataXPointer&gt; &lt;StartPos&gt;0&lt;/StartPos&gt; &lt;Size&gt;28433&lt;/Size&gt; &lt;/DataXPointer&gt; &lt;NodeInfo&gt; &lt;Links&gt; &lt;Link Name="InterchangeSenderQual"&gt;ZZ&lt;/Link&gt; &lt;Link Name="InterchangeSenderID"&gt;GlobalChips &lt;/Link&gt; &lt;Link Name="InterchangeReceiverQual"&gt;ZZ&lt;/Link&gt; &lt;Link Name="InterchangeReceiverID"&gt;Acme &lt;/Link&gt; &lt;Link Name="InterchangeControlVersion"&gt;00200&lt;/Link&gt; &lt;Link Name="Standard"&gt;X12&lt;/Link&gt; &lt;/Links&gt; &lt;Properties&gt; &lt;Property Name="InterchangeAuthorizationInfoQual"&gt;00&lt;/Property&gt; &lt;Property Name="InterchangeAuthorizationInfo"&gt; &lt;/Property&gt; &lt;Property Name="InterchangeSecurityInfoQual"&gt;00&lt;/Property&gt; &lt;Property Name="InterchangeSecurityInfo"&gt; &lt;/Property&gt; &lt;Property Name="InterchangeSenderQual"&gt;ZZ&lt;/Property&gt; &lt;Property Name="InterchangeSenderID"&gt;GlobalChips &lt;/Property&gt; &lt;Property 
Name="InterchangeReceiverQual"&gt;ZZ&lt;/Property&gt; &lt;Property Name="InterchangeReceiverID"&gt;Acme &lt;/Property&gt; &lt;Property Name="InterchangeDate"&gt;090106&lt;/Property&gt; &lt;Property Name="InterchangeTime"&gt;1009&lt;/Property&gt; &lt;Property Name="InterchangeControlStandard_RepeatingSeparator"&gt;U&lt;/Property&gt; &lt;Property Name="InterchangeControlVersion"&gt;00200&lt;/Property&gt; &lt;Property Name="InterchangeControlNumber"&gt;000001005&lt;/Property&gt; &lt;Property Name="InterchangeAckRequested"&gt;0&lt;/Property&gt; &lt;Property Name="InterchangeUsageIndicator"&gt;P&lt;/Property&gt; &lt;Property Name="InterchangeComponentElementSep"&gt;0x2b&lt;/Property&gt; &lt;Property Name="DecimalSeparator"/&gt; &lt;Property Name="ElementDelimiter"&gt;0x7e&lt;/Property&gt; &lt;Property Name="ReleaseCharacter"/&gt; &lt;Property Name="RepeatingSeparator"/&gt; &lt;Property Name="SegmentDelimiter"&gt;0x27&lt;/Property&gt; &lt;Property Name="SubelementDelimiter"&gt;0x2b&lt;/Property&gt; &lt;Property Name="InterchangeChildCount"&gt;1&lt;/Property&gt; &lt;Property Name="InterchangeTrailerControlNumber"&gt;000001005&lt;/Property&gt; &lt;/Properties&gt; &lt;/NodeInfo&gt; &lt;FunctionalGroup Guid="{46DA7B93-2365-4704-8156-A0FF38B242E2}" FunctionalGroupAckCode="R" TransactionSetsIncluded="1" TransactionSetsReceived="1" TransactionSetsProcessed="1" TransactionSetsAccepted="0" RError="0" NError="0" OtherWI="0"&gt; &lt;DataXPointer&gt; &lt;StartPos&gt;106&lt;/StartPos&gt; &lt;Size&gt;28311&lt;/Size&gt; &lt;/DataXPointer&gt; &lt;NodeInfo&gt; &lt;Links&gt; &lt;Link Name="GroupSenderID"&gt;GlobalChips&lt;/Link&gt; &lt;Link Name="GroupReceiverID"&gt;Acme&lt;/Link&gt; &lt;Link Name="GroupVersionNumber"&gt;004010&lt;/Link&gt; &lt;/Links&gt; &lt;Properties&gt; &lt;Property Name="GroupID"&gt;PR&lt;/Property&gt; &lt;Property Name="GroupSenderID"&gt;GlobalChips&lt;/Property&gt; &lt;Property Name="GroupReceiverID"&gt;Acme&lt;/Property&gt; &lt;Property 
Name="GroupDate"&gt;20090106&lt;/Property&gt; &lt;Property Name="GroupTime"&gt;1009&lt;/Property&gt; &lt;Property Name="GroupControlNumber"&gt;1005&lt;/Property&gt; &lt;Property Name="GroupAgencyCode"&gt;X&lt;/Property&gt; &lt;Property Name="GroupVersionNumber"&gt;004010&lt;/Property&gt; &lt;Property Name="GroupChildCount"&gt;1&lt;/Property&gt; &lt;Property Name="GroupTrailerControlNumber"&gt;1005&lt;/Property&gt; &lt;/Properties&gt; &lt;/NodeInfo&gt; &lt;Transaction Guid="{660C36B3-D2FD-4F7A-AE2A-3CE953611D55}" TransactionAckCode="R" RError="0" NError="0" OtherWI="0"&gt; &lt;DataXPointer&gt; &lt;StartPos&gt;157&lt;/StartPos&gt; &lt;Size&gt;28250&lt;/Size&gt; &lt;/DataXPointer&gt; &lt;NodeInfo&gt; &lt;Links&gt; &lt;Link Name="TransactionID"&gt;855&lt;/Link&gt; &lt;/Links&gt; &lt;Properties&gt; &lt;Property Name="TransactionID"&gt;855&lt;/Property&gt; &lt;Property Name="TransactionControlNumber"&gt;1005&lt;/Property&gt; &lt;Property Name="TransactionImplementationReference"/&gt; &lt;Property Name="TransactionChildCount"&gt;170&lt;/Property&gt; &lt;Property Name="TransactionTrailerControlNumber"&gt;1005&lt;/Property&gt; &lt;/Properties&gt; &lt;/NodeInfo&gt; &lt;/Transaction&gt; &lt;/FunctionalGroup&gt; &lt;InterchangeErrors&gt; &lt;Error ErrorCode="{3F43BFA3-7899-445C-A5B8-867089B8D4B2}" Severity="Normal" Category="Rejecting" Index="1" ID="50820000"&gt; &lt;ErrorBrief&gt;5082: XEngine error - Guideline look-up failed.&lt;/ErrorBrief&gt; &lt;ErrorMsg&gt;Failed guideline look-up.&lt;/ErrorMsg&gt; &lt;ErrorObjectInfo&gt; &lt;Parameter Name="ErrorLevel"&gt;0&lt;/Parameter&gt; &lt;Parameter Name="InterchangeControlVersion"&gt;00200&lt;/Parameter&gt; &lt;Parameter Name="InterchangeReceiverID"&gt;Acme &lt;/Parameter&gt; &lt;Parameter Name="InterchangeReceiverQual"&gt;ZZ&lt;/Parameter&gt; &lt;Parameter Name="InterchangeSenderID"&gt;GlobalChips &lt;/Parameter&gt; &lt;Parameter Name="InterchangeSenderQual"&gt;ZZ&lt;/Parameter&gt; &lt;Parameter 
Name="Name"&gt;XEngine&lt;/Parameter&gt; &lt;Parameter Name="Standard"&gt;X12&lt;/Parameter&gt; &lt;Parameter Name="_ec_dn_guid_"&gt;{CCCD3245-D8CC-4126-8C87-F1FA258BB076}&lt;/Parameter&gt; &lt;Parameter Name="_ec_index"&gt;0&lt;/Parameter&gt; &lt;Parameter Name="ec_error_scope"&gt;Interchange&lt;/Parameter&gt; &lt;/ErrorObjectInfo&gt; &lt;ErrorDataInfo&gt; &lt;Part1/&gt; &lt;ErrData/&gt; &lt;Part3/&gt; &lt;DataXPointer&gt; &lt;StartPos&gt;0&lt;/StartPos&gt; &lt;Size&gt;0&lt;/Size&gt; &lt;/DataXPointer&gt; &lt;/ErrorDataInfo&gt; &lt;/Error&gt; &lt;/InterchangeErrors&gt; &lt;/Interchange&gt;&lt;/AnalyzerResults&gt; ]]&gt;*
    *&lt;/errorDescription&gt;*
    *&lt;errorSeverity&gt;2&lt;/errorSeverity&gt;*
    *&lt;errorDetails&gt;*
    *&lt;parameter name="InterchangeControlStandard_RepeatingSeparator" value="U"/&gt;*
    *&lt;parameter name="InterchangeTrailerControlNumber" value="000001005"/&gt;*
    *&lt;parameter name="InterchangeChildCount" value="1"/&gt;*
    *&lt;parameter name="InterchangeTime" value="1009"/&gt;*
    *&lt;parameter name="InterchangeUsageIndicator" value="P"/&gt;*
    *&lt;parameter name="ErrorScope" value="Interchange"/&gt;*
    *&lt;parameter name="SubelementDelimiter" value="+"/&gt;*
    *&lt;parameter name="RepeatingSeparator" value=""/&gt;*
    *&lt;parameter name="InterchangeSecurityInfo" value=" "/&gt;*
    *&lt;parameter name="InterchangeReceiverQual" value="ZZ"/&gt;*
    *&lt;parameter name="DecimalSeparator" value=""/&gt;*
    *&lt;parameter name="InterchangeAuthorizationInfoQual" value="00"/&gt;*
    *&lt;parameter name="ElementDelimiter" value="~"/&gt;*
    *&lt;parameter name="InterchangeComponentElementSep" value="+"/&gt;*
    *&lt;parameter name="InterchangeControlVersion" value="00200"/&gt;*
    *&lt;parameter name="InterchangeAckRequested" value="0"/&gt;*
    *&lt;parameter name="InterchangeSenderQual" value="ZZ"/&gt;*
    *&lt;parameter name="InterchangeReceiverID" value="Acme "/&gt;*
    *&lt;parameter name="ReleaseCharacter" value=""/&gt;*
    *&lt;parameter name="InterchangeDate" value="090106"/&gt;*
    *&lt;parameter name="SegmentDelimiter" value="'"/&gt;*
    *&lt;parameter name="InterchangeControlNumber" value="000001005"/&gt;*
    *&lt;parameter name="InterchangeAuthorizationInfo" value=" "/&gt;*
    *&lt;parameter name="InterchangeSenderID" value="GlobalChips "/&gt;*
    *&lt;parameter name="InterchangeSecurityInfoQual" value="00"/&gt;*
    *&lt;/errorDetails&gt;*
    *&lt;/Exception&gt;*
    thanks
    sunny

    Hello Prasanna,
    Thanks for your reply. I have checked the parameters against the document protocol parameters. When I compared them, I noticed that some parameters given in the error are not in the B2B UI, and some parameters present in the B2B UI are not shown in the error part of the t1.trc file. Details below.
    Not in the B2B UI:
    &lt;parameter name="InterchangeTrailerControlNumber" value="000001014"/&gt;
    &lt;parameter name="InterchangeChildCount" value="1"/&gt;
    &lt;parameter name="ErrorScope" value="Interchange"/&gt;
    &lt;parameter name="RepeatingSeparator" value=""/&gt;
    &lt;parameter name="InterchangeAckRequested" value="0"/&gt;
    &lt;parameter name="InterchangeControlNumber" value="000001014"/&gt;
    Not shown in the t1.trc file:
    Responsible Agency Code =X
    Replacement Character = 0*7c
    Security Information = " "
    Functional Group Time = system time
    Interchange ecs file = " " Browse
    group ecs file = " " Browse
    ImplementationClass = oracle.tip.adapter.b2b.document.edi.EDIDocumentPlugin
    Functional Group Date = system date
    Application Receiver's Code
    Application Sender's Code
    Tag Delimiter = 0*3d
    Having different values in the B2B UI:
    &lt;parameter name="SubelementDelimiter" value="+"/&gt; , value in the b2b ui is 0*2b
    &lt;parameter name="DecimalSeparator" value=""/&gt; , value in b2b ui is 0*2e
    &lt;parameter name="ElementDelimiter" value="~"/&gt; , value in b2b ui is 0*7e
    &lt;parameter name="InterchangeComponentElementSep" value="+"/&gt; , value in b2b ui is 0*2b
    &lt;parameter name="SegmentDelimiter" value=" ' "/&gt; , value in b2b ui is 0*27
    I changed the values in the B2B UI according to the values shown in the trace file: 0*2b to "+", 0*27 to "'", 0*7e to "~" and 0*2e to " ".
    But that generated another error:
    5097: Invalid delimiter settings
    So I replaced the overridden values with the original values, except the DecimalSeparator, restarted the whole B2B instance and ran the bat files, but the result is still the same; it shows the old error again:
    5082: XEngine error - Guideline look-up failed.
    Can anyone please suggest a solution for this problem? It is very urgent; once this error is resolved I can successfully finish all the remaining scenarios, as the same error persists in the EDI EDIFACT scenario as well.
    Thanx
    Sunny

  • How to interpret a trc file generated in the udump

    Hi
    My alert.log says: see errors in /u01/xyzprod/10.2.0/admin/PROD_xyz/udump/prod_ora_28047.trc
    When I open the .trc file, the whole file is hexadecimal-looking stuff:
    ORACLE_HOME = /u01/xyzprod/10.2.0
    System name:     HP-UX
    Node name:     prd-xyz
    Release:     B.11.31
    Version:     U
    Machine:     ia64
    Instance name: PROD
    Redo thread mounted by this instance: 1
    Oracle process number: 94
    Unix process pid: 28047, image: oracle@prd-xyz
    *** ACTION NAME:() 2009-07-11 09:00:14.292
    *** MODULE NAME:(C:\Documents and Settings\All Users\Start Menu\P) 2009-07-11 09:00:14.292
    *** SERVICE NAME:(PROD) 2009-07-11 09:00:14.292
    *** SESSION ID:(1093.6077) 2009-07-11 09:00:14.292
    psdgbt: bind csid (31) does not match session csid (871)
    psdgbt: session charset is UTF8
    *** 2009-07-11 09:00:14.298
    ksedmp: internal or fatal error
    Current SQL statement for this session:
    begin SELECT Version INTO :Result FROM SYS.PRODUCT_COMPONENT_VERSION WHERE Upper(Product) LIKE '%ORACLE%'; end;
    ----- Call Stack Trace -----
    calling call entry argument values in hex
    location type point (? means dubious value)
    ksedst()+64 call ksedst1() 000000000 ? 000000001 ?
    ksedmp()+2176 call ksedst() 000000000 ?
    C000000000000C9F ?
    4000000003250320 ?
    000000000 ? 000000000 ?
    000000000 ?
    $cold_psdgbtds_debu call ksedmp() 000000001 ?
    g_support()+272 9FFFFFFFFFFEA750 ?
    60000000000B5C78 ?
    9FFFFFFFFFFEAD20 ?
    C00000000000040D ?
    psdgbt()+1680 call $cold_psdgbtds_debu C000000000000D1F ?
    g_support() 0000005D8 ? 00000001F ?
    C00000008B5545F0 ?
    4000000002F2BB50 ?
    ph2gbi()+608 call psdgbt() 9FFFFFFFFFFED850 ?
    9FFFFFFFFFFED7A0 ?
    00000009B ? 000000001 ?
    4000000001BAB1C0 ?
    9FFFFFFFFFFEAE10 ?
    60000000000B5C78 ?
    C000000000000CA1 ?
    ph2dr2()+864 call ph2gbi() 9FFFFFFFFFFEB360 ?
    000020003 ?
    60000000000B5C78 ?
    C000000000000818 ?
    400000000A9F4A30 ?
    00001B85D ?
    9FFFFFFFFFFED7A0 ?
    60000000000B9D88 ?
    ph2drv()+432 call ph2dr2() 9FFFFFFFFFFEB360 ?
    9FFFFFFFFFFEB38C ?
    9FFFFFFFFFFED790 ?
    60000000000B5C78 ?
    C000000000000A9D ?
    4000000002F64230 ?
    00001B81D ? 000000000 ?
    phpsem()+96 call ph2drv() 6000000000032418 ?
    C0000000BE445818 ?
    9FFFFFFFFFFED790 ?
    9FFFFFFFFFFEB300 ?
    60000000000B5C78 ?
    9FFFFFFFFFFEB950 ?
    C000000000000512 ?
    400000000A835FA0 ?
    and this goes on for thousands of lines
    Regards

    Hi
    I feel that some query was executed from Toad, and that session is no longer available to check.
    Unix process pid: 28047, image: oracle@prd-db1
    *** ACTION NAME:() 2009-07-11 09:00:14.292
    *** MODULE NAME:(*C:\Documents and Settings\All Users\Start Menu\P*) 2009-07-11 09:00:14.292
    *** SERVICE NAME:(PROD) 2009-07-11 09:00:14.292
    *** SESSION ID:(1093.6077) 2009-07-11 09:00:14.292
    psdgbt: bind csid (31) does not match session csid (871)
    psdgbt: session charset is UTF8
    Regards

  • Trc files:  can they be deleted?

    Our Oracle 10 installation has hundreds of *.trc files in its /dbdump directory. These files go back about two years. I would like to remove any that are not necessary for the day to day operation of the db.
    Is it safe to remove any or all of these?
    Thanks,
    -=beeky

    Yes, it's very safe to remove them. In our environment we remove all trace files more than 15 days old.
    find /u02/app/oracle/admin/CHIOPSWN/bdump -mtime +15 -print
    lists the files that are older than 15 days.
    find $PWD -mtime +15 -print
    does the same if you are already in that directory.
    find /u02/app/oracle/admin/CHIOPSWN/bdump -mtime +15 -exec rm -r {} \;
    removes the files.
    find $PWD -mtime +15 -exec rm -f {} \;
    does the same if you are in that directory.
    find / -name "*.inst*" -exec ls -ltr {} \; 2>/dev/null
    gives a long listing of matching files.
    Change the locations/PWD accordingly.
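The same 15-day cleanup can be sketched in Python, with a dry-run mode to preview what would be deleted before committing (the directory name is an example, not taken from the post):

```python
import time
from pathlib import Path

def remove_old_traces(bdump, days=15, dry_run=True):
    """Delete (or, when dry_run is True, just list) *.trc files in
    bdump whose modification time is older than `days` days."""
    cutoff = time.time() - days * 86400
    matched = []
    for trc in Path(bdump).glob("*.trc"):
        if trc.stat().st_mtime < cutoff:
            matched.append(trc)
            if not dry_run:
                trc.unlink()
    return matched

# Preview first, then delete:
# remove_old_traces("/u02/app/oracle/admin/CHIOPSWN/bdump")
# remove_old_traces("/u02/app/oracle/admin/CHIOPSWN/bdump", dry_run=False)
```

Defaulting to dry_run=True mirrors running find with -print before re-running it with -exec rm.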

  • Errors in emoms.trc file

    We are using 11g Grid Control and are observing too many of these entries in the emoms.trc file:
    XMLLoader8 E0_4325_79185.xml] ERROR loader.XMLLoader LoadFiles.792 - Error while processing E0_4325_79185.xml:
    java.sql.SQLException: ORA-20801: ECM load failed: Metadata does not exist for target type oracle_database and snapshot type streams_processes_count_item
    ORA-06512: at "SYSMAN.ECM_CT", line 324
    The console works fine, but we want to know what caused these errors. Will this affect the system?
    Thanks,
    dba

    Hi,
    Generally these errors are observed when older versions of agents are talking to this OMS. They will not cause any harm such as an OMS crash, but we suggest upgrading the older agents.
    Observations & Research:
    ===============
    The following bugs speak about these errors
    Bug 13070300 - ORA-20801: ECM LOAD FAILED: METADATA DOES NOT EXIST FOR TARGET TYPE ORACLE_DATAB
    BUG 12924161 - ORA-20801: ECM LOAD FAILED: METADATA DOES NOT EXIST FOR TARGET/SNAPSHOT TYPE
    The following document explains this as well:
    OMS emoms.trc files with errors ORA-20801: ECM load failed: Metadata does not exist (Doc ID 1441322.1)
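    To gauge how often the loader is actually hitting this, the trace file can be searched for the error code. A minimal sketch follows; the trace path is an assumption, and the demo generates its own sample file so it runs as-is:

```shell
#!/bin/sh
# Sketch: count ORA-20801 ECM load failures in an emoms.trc file.
# TRC points at generated demo data here; substitute the real
# emoms.trc path from your OMS sysman log directory.
TRC=$(mktemp)
cat > "$TRC" <<'EOF'
ERROR loader.XMLLoader LoadFiles.792 - Error while processing E0_4325_79185.xml:
java.sql.SQLException: ORA-20801: ECM load failed
INFO  loader.XMLLoader - batch processed
java.sql.SQLException: ORA-20801: ECM load failed
EOF

# -c prints only the number of matching lines
grep -c 'ORA-20801' "$TRC"
```

    With the demo data above this prints 2; on a live OMS, a steadily climbing count confirms the older agents are still uploading.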
    Best Regards,
    Venkat

  • Actively update linked files?

    Is it possible to force Illustrator to actively update linked files as they are saved in Photoshop? I'd like to place a PSD into AI, for Live Trace, then edit the PSD and see the updated Live Trace without clicking in Illustrator.
    AICS 5 or CC

    Yes, that's possible.
    Under Preferences > File Handling & Clipboard, there is a dropdown you can switch to "Automatically".
    https://www.dropbox.com/s/rx6yjktxkjkruqg/update-linked-files-illustrator-6.jpg
    Good luck!

  • Search for "A word or phrase in the file" in .trc files gives no result

    Hi
    When I search through Explorer using "A word or phrase in the file" on .trc files, I get no results.
    Probably because these files are of type "Microsoft Network Monitor Document".
    But I can open them using Notepad, so why can't I search for a word in these files?
    Is there another way to search?
    Normally I would of course use the Visual Admin, but there are a lot of trace files, so it would be nice to be able to do this kind of search.
    br Peter

    Hi Peter,
    This is a known problem, see http://support.microsoft.com/kb/309173
    An alternative is to use other tools which search even faster than the slow MS search does.
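    For example, with any POSIX-style grep (Cygwin, Git Bash, or similar on Windows; an assumption about your toolset), a recursive search across the trace directory looks like this. The demo builds its own sample directory so it runs as-is:

```shell
#!/bin/sh
# Sketch: search every .trc file under a directory for a phrase,
# printing file name and line number for each hit.
# TRACE_DIR and the sample content are demo assumptions.
TRACE_DIR=$(mktemp -d)
printf 'server startup ok\nNo active userstore is set\n' > "$TRACE_DIR/defaulttrace.trc"

# -r recurse, -n show line numbers, -i case-insensitive,
# --include limits the search to .trc files
grep -rni --include='*.trc' 'userstore' "$TRACE_DIR"
```

    This sidesteps the Explorer indexing problem entirely, since grep reads the file contents directly regardless of the registered file type.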
    Hope it helps
    Detlev

  • Java error when running in Translate Utilty with .trc files

    Hello everybody,
    I have Developer Suite 10g with WebUtil 1.6.
    I have launched a form with trace option and Forms created a file forms_3528.trc in my trace folder.
    In order to visualize I tried to convert the .trc file into a .xml or .html file using the Translate Utility.
    My Oracle Developer Home is D:\DevSuiteHome_1
    Here are the steps I followed in a DOS command window:
    D:\>SET CLASSPATH=D:\DevSuiteHome_1\forms\java\frmxlate.jar; D:\DevSuiteHome_1\forms\java\frmall.jar;D:\DevSuiteHome_1
    D:\> CD DevSuiteHome_1\forms\trace
    D:\>java oracle.forms.diagnostics.Xlate datafile=forms_3528.trc outputfile=myfile.xml outputclass=WriteOut
    Here is the result I got:
    Exception in thread "main" java.lang.IndexOutOfBoundsException
    at java.io.FileInputStream.readBytes(Native Method)
    at java.io.FileInputStream.read(FileInputStream.java:194)
    at java.io.DataInputStream.read(DataInputStream.java:224)
    at oracle.forms.diagnostics.ReadDataFile.readStrAttribute(Unknown Source)
    at oracle.forms.diagnostics.ReadDataFile.preRead(Unknown Source)
    at oracle.forms.diagnostics.ReadDataFile.preReadEvent(Unknown Source)
    at oracle.forms.diagnostics.Xlate.process(Unknown Source)
    at oracle.forms.diagnostics.Xlate.main(Unknown Source)
    What does this mean? I'm not at all familiar with java...
    Please let me know if you consider I should post this question on a Java forum not a Forms one.
    Thank you
    daniela

    FYI Update: It appears that the Xlate utility is choking on events 3 (Error messages on the status bar) and/or 1 (Error during open form) that now occur in our custom login form since applying patch 3. Examining the xlate output in debug mode, after processing these events, the next EventID shows a value >20,000--clearly either the file data is corrupted or the input data stream becomes misaligned at that point.
    Therefore it is an issue between the trace and the xlate class, NOT patch 3 per se.
    The error itself has no effect to the user and we can work around the trace translation issue by simply excluding these events.

  • Trc file fills server

    Hello
    I have downloaded and installed SQL Developer 1.1.
    Unfortunately it has caused our server to crash twice in two days.
    Oracle writes a trc file; see below:
    Fri Dec 22 09:47:25 2006
    Errors in file d:\oracle\admin\oracle\udump\oracle_ora_3376.trc:
    ORA-00600: internal error code, arguments: [17182], [0x3480110], [], [], [], [], [], []
    Which went on to fill up the server disk. It also did this yesterday.
    I think it happens when I open the tree view and ask to see the DDL for a view.
    If you need more info please ask.
    Andrew.

    SQL Dev uses different queries and different methods to access the database than other tools, namely plain SQL through JDBC. That being said, neither of those can directly kill a database. There are no special tie-ins that allow low-level access to the database processes or anything like that; it is just plain old SQL. I have no doubt that your database crashed, and no doubt that something SQL Developer ran triggered it, but the crash itself came from the database. If you can figure out which query crashed it, you can run it in SQL*Plus and the database will probably crash from that too.
    There are a couple of threads on here regarding the performance of the tree. Basically, some queries that run to populate the Other Users and Tables lists are very slow under some circumstances, and you are probably running into those. One of those queries could have run long enough to cause a crash on your server, although it seems amazing to me that a long-running query could crash the database.
    There are a couple of hacks to fix this performance problem, but the simplest thing is to revert to an evaluation build of 1.1 or to go back to 1.0.
    Regardless of the above advice, I think that you should open up a service request with oracle. There may be some database parameters to tweak, patches to apply, or they may provide a patch for 1.1 that allows the tree to run more efficiently.
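    Until the underlying bug is fixed, a small watchdog can at least warn before a runaway trace fills the disk. A sketch follows; the dump path and the 1 MB limit are assumptions, and the demo uses a temp directory it fills itself:

```shell
#!/bin/sh
# Sketch: warn when an Oracle dump directory grows past a size limit,
# so a looping ORA-600 trace cannot silently fill the disk.
# UDUMP and LIMIT_KB are demo assumptions -- adjust for your server.
UDUMP=$(mktemp -d)
LIMIT_KB=1024

# Simulate a runaway trace file of about 2 MB.
dd if=/dev/zero of="$UDUMP/oracle_ora_3376.trc" bs=1024 count=2048 2>/dev/null

used_kb=$(du -sk "$UDUMP" | awk '{print $1}')
if [ "$used_kb" -gt "$LIMIT_KB" ]; then
    echo "WARNING: $UDUMP uses ${used_kb}KB (limit ${LIMIT_KB}KB)"
fi
```

    Run from cron against the real udump path this gives early warning; Oracle's max_dump_file_size parameter is the proper server-side cap on individual trace files.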
    Eric

  • JBO-25033: Activating state from File at id 557,452,805 failed?

    Hi,
    I used:
    request.getRequestDispatcher("LoginPage.jsp").forward(request,response);
    to send the user to the login page when the app server was shut down and restarted while the client browser was still open; when the user then tried to do some action on the page (create a new record), they received the following error:
    Error Message: oracle.jbo.JboSerializationException: JBO-25033: Activating state from File at id 557,452,805 failed. at oracle.jbo.server.FileSerializer.processFile(FileSerializer.java:145) at
    ## Detail 0 ## oracle.xml.parser.v2.XMLParseException: Start of root element expected. at oracle.xml.parser.v2.XMLError.flushErrors(XMLError.java:145) at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:292) at
    Any idea on this? I would appreciate it if you could share your thoughts on how to handle this situation.

    Error Message: oracle.jbo.JboSerializationException: JBO-25033: Activating state from File at id 557,452,805 failed. at oracle.jbo.server.FileSerializer.processFile(FileSerializer.java:145) at
    Has the "serialization target" changed from DB to file between your calls? The ID indicates the previous serialization was done to the DB, as for files this should be a hexadecimal string (the tempfile name).
    If you did set the file serializer in both cases, could you verify that in your "tmp" or jbo.tmpdir directory you do have a file of that name? If not, somebody removed the file between the time the AM was checked in and the next time it was checked out.
