Trace file of size 500MB getting created every hour

Hello,
The bdump directory is filling up with cdmp* files, and the mount point is reaching 100% capacity.
Environment: RAC
FCBF0B76:0DA4DD3B 26 738 10401 71 KSXPWAIT: Send compl suppressed and No requests. proc 0xc00000033c005fb8 haswrk 0
FCBF0C7B:0DA4DD42 26 738 10005 2 KSL WAIT END [DFS lock handle] 1413545989/0x54410005 3/0x3 262/0x106 time=412
FCBF0C7B:0DA4DD43 26 738 10005 3 KSL POST RCVD poster=10 loc='kjata: wake up enqueue owner' id1=0 id2=0 name= type=0 fac#=3 facpost=1
FCBF0C7C:0DA4DD44 26 738 10706 5 0x0000000000000000 0x0000000000000021 0x00000000000001B8
FCBF0C7F:0DA4DD45 26 738 10706 4 0x000000000000003A 0x0000000000000003 0x0000000000000107 0x0000000000000000 0x0000000000000005 0x0000000000000000
FCBF0C88:0DA4DD46 26 738 10401 1 KSXPVSND: client 2 tid(2,257,0x4b70726c) buf 0xc000000379bd1398 sz 144
FCBF0CA9:0DA4DD47 26 738 10005 1 KSL WAIT BEG [DFS lock handle] 1413545989/0x54410005 3/0x3 263/0x107
FCBF0DBE:0DA4DD4B 26 738 10401 66 KSXP_SND_CALLBACK: request 0x0x9ffffffffd3a2f20, status 30
FCBF0DBF:0DA4DD4C 26 738 10401 71 KSXPWAIT: Send compl suppressed and No requests. proc 0xc00000033c005fb8 haswrk 0
FCBF0E59:0DA4DD51 26 738 10005 2 KSL WAIT END [DFS lock handle] 1413545989/0x54410005 3/0x3 263/0x107 time=432
FCBF0E59:0DA4DD52 26 738 10005 3 KSL POST RCVD poster=10 loc='kjata: wake up enqueue owner' id1=0 id2=0 name= type=0 fac#=3 facpost=1
FCBF0E5A:0DA4DD53 26 738 10706 5 0x0000000000000000 0x0000000000000021 0x00000000000001DB
FCBF0E5C:0DA4DD54 26 738 10706 4 0x000000000000003A 0x0000000000000003 0x0000000000000108 0x0000000000000000 0x0000000000000005 0x0000000000000000
FCBF0E65:0DA4DD56 26 738 10401 1 KSXPVSND: client 2 tid(2,257,0x4b70726c) buf 0xc000000379bd1398 sz 128
FCBF0E76:0DA4DD57 26 738 10005 1 KSL WAIT BEG [DFS lock handle] 1413545989/0x54410005 3/0x3 264/0x108
FCBF0F30:0DA4DD59 26 738 10401 66 KSXP_SND_CALLBACK: request 0x0x9ffffffffd3a13c8, status 30
FCBF0F31:0DA4DD5A 26 738 10401 71 KSXPWAIT: Send compl suppressed and No requests. proc 0xc00000033c005fb8 haswrk 0
FCBF0FE8:0DA4DD5D 26 738 10005 2 KSL WAIT END [DFS lock handle] 1413545989/0x54410005 3/0x3 264/0x108 time=369
FCBF0FE9:0DA4DD5E 26 738 10005 3 KSL POST RCVD poster=10 loc='kjata: wake up enqueue owner' id1=0 id2=0 name= type=0 fac#=3 facpost=1
FCBF0FE9:0DA4DD5F 26 738 10706 5 0x0000000000000000 0x0000000000000021 0x000000000000018D
FCBF0FED:0DA4DD61 26 738 10706 4 0x000000000000003A 0x0000000000000003 0x0000000000000109 0x0000000000000000 0x0000000000000005 0x0000000000000000
FCBF0FF4:0DA4DD62 26 738 10401 1 KSXPVSND: client 2 tid(2,257,0x4b70726c) buf 0xc000000379bd1398 sz 144
FCBF1006:0DA4DD64 26 738 10005 1 KSL WAIT BEG [DFS lock handle] 1413545989/0x54410005 3/0x3 265/0x109
FCBF10A1:0DA4DD68 26 738 10401 66 KSXP_SND_CALLBACK: request 0x0x9ffffffffd3a36f0, status 30
The issue seems to be a "DFS lock handle" issue.
Your help would be much appreciated.
Thanks
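While the root cause of the DFS lock handle waits is investigated, some housekeeping can stop the mount point from filling. The following is a minimal Python sketch, not an Oracle-supplied tool; the file-name patterns are assumptions, and in RAC some cdmp* entries can be directories that would need recursive handling:

```python
import gzip
import os
import shutil
import time

def compress_old_dumps(bdump_dir, max_age_hours=24):
    """Gzip .trc and cdmp* files in bdump older than max_age_hours.

    Returns the list of file names that were compressed.
    """
    cutoff = time.time() - max_age_hours * 3600
    compressed = []
    for name in sorted(os.listdir(bdump_dir)):
        path = os.path.join(bdump_dir, name)
        if not os.path.isfile(path):
            continue  # cdmp* directories would need their own handling
        if not (name.endswith(".trc") or name.startswith("cdmp")):
            continue
        if os.path.getmtime(path) > cutoff:
            continue  # still fresh; keep it readable for diagnosis
        with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
            shutil.copyfileobj(src, dst)
        os.remove(path)
        compressed.append(name)
    return compressed
```

Run hourly from cron, this keeps recent traces intact for Support while compressing the backlog. It is a stopgap only; the 500 MB/hour generation rate itself still needs the underlying issue fixed.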


Similar Messages

  • SSIS Package: SQL tables need to be refreshed every hour

    Hello,
    I have created a SSIS Package which includes,
    Step 1 : Using Execute SQL Task to truncate all the tables
    Step 2 : Data Flow Tasks to extract data from SharePoint lists and fill the SQL Tables for SSRS Reporting from that.
    Requirement is to refresh SQL Tables every hour. I can do this using SQL Server Agent Job to run SSIS Package.
    But I am not sure whether this is a feasible way to do it, as we are truncating all the tables. There are around 10-15 tables. The database size is also not large.
    Is there any way in SSIS to apply only the changes made in the SharePoint lists to the SQL tables, rather than truncating them and creating new ones?
    Thank you,
    Mittal.

    @Visakh,
    Yes, I am getting the Created, CreatedBy, Modified, ModifiedBy audit field data from all the SharePoint lists.
    So, can we just trust these fields and update the tables accordingly?
    And how can I do this using SSIS? Is there any particular control I can use to compare the date and time and update the SQL tables?
    Thank you,
    Mittal.
    Yes. Provided they have proper data, that should work fine.
    Yes, you can use it inside SSIS.
    You can have logic like below:
    1. In an Execute SQL Task, capture the max date for the source table
    2. Use a Data Flow Task with a source query having a filter based on the date field,
    i.e. like
    WHERE Modified > ?
    and map the parameter to the max date
    3. Add a destination step to populate the data
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
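    The three steps above can be sketched outside SSIS to see the watermark logic end to end. A minimal Python illustration follows; the row shape and column names are made up for the example, and in the real package the filter runs as the parameterized source query (WHERE Modified > ?):

```python
def incremental_load(source_rows, target_rows, key="ID", watermark_col="Modified"):
    """Merge only source rows newer than the target's high-water mark.

    source_rows / target_rows are lists of dicts standing in for the
    SharePoint list and the SQL table.
    """
    # Step 1: capture the max date already loaded (the Execute SQL Task)
    max_date = max((r[watermark_col] for r in target_rows), default=None)
    # Step 2: filter the source on the date field (the Data Flow source query)
    changed = [r for r in source_rows
               if max_date is None or r[watermark_col] > max_date]
    # Step 3: upsert into the destination instead of truncate-and-reload
    by_key = {r[key]: r for r in target_rows}
    for r in changed:
        by_key[r[key]] = r
    return list(by_key.values())
```

    Note the pattern's one assumption, as discussed above: the Modified audit field must be trustworthy, since any row changed without bumping Modified will be missed.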

  • Too many trace files apparently created on behalf of Health Monitor

    Every ~10 minutes I see two pairs of trc+trm files that appear to be nothing more than a nuisance. A nuisance because if you don't get rid of them, then commands such as "ls -lt | head" take too long to complete. Even after setting disablehealth_check=true, the files still keep coming. The only thing I can find is a correspondence between the rows in v$hm_run and these trace files.
    The RDBMS appears to be working fine, and there's nothing in the alert log that would explain their existence. Having said that, I'll go ahead and admit that I don't know that these are false negatives, but it sure looks like they are.
    Below is a sample from two of the four trace files that get created every 10 minutes. Thanks.
    Here is V11106_m001_8138.trc (including the preamble)
    Trace file /opt/oracle/diag/rdbms/v11/V11106/trace/V11106_m001_8138.trc
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /opt/oracle/product/11.1.0.6
    System name: Linux
    Node name: rhel01.dev.method-r.com
    Release: 2.6.18-92.1.13.el5xen
    Version: #1 SMP Thu Sep 4 04:20:55 EDT 2008
    Machine: i686
    Instance name: V11106
    Redo thread mounted by this instance: 1
    Oracle process number: 26
    Unix process pid: 8138, image: [email protected] (m001)
    *** 2009-04-03 00:31:36.650
    *** SESSION ID:(139.36) 2009-04-03 00:31:36.650
    *** CLIENT ID:() 2009-04-03 00:31:36.650
    *** SERVICE NAME:(SYS$BACKGROUND) 2009-04-03 00:31:36.650
    *** MODULE NAME:(MMON_SLAVE) 2009-04-03 00:31:36.650
    *** ACTION NAME:(DDE async action) 2009-04-03 00:31:36.650
    ========= Dump for error ORA 1110 (no incident) ========
    ----- DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----
    Here is V11106_m000_8136.trc (minus the preamble)
    *** 2009-04-03 00:31:36.471
    *** SESSION ID:(139.34) 2009-04-03 00:31:36.471
    *** CLIENT ID:() 2009-04-03 00:31:36.471
    *** SERVICE NAME:(SYS$BACKGROUND) 2009-04-03 00:31:36.471
    *** MODULE NAME:(MMON_SLAVE) 2009-04-03 00:31:36.471
    *** ACTION NAME:(Monitor Tablespace Thresholds) 2009-04-03 00:31:36.471
    DDE rules only execution for: ORA 1110
    ----- START Event Driven Actions Dump ----
    ---- END Event Driven Actions Dump ----
    ----- START DDE Actions Dump -----
    ----- DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----
    Successfully dispatched
    ----- (Action duration in csec: 1) -----
    ----- END DDE Actions Dump -----

    Oracle Database 11g by Sam R. Alapati, Charles Kim seems to be a better resource. Here's a link.
    http://books.google.com/books?id=14ZH0eZV6G8C&pg=PA60&lpg=PA60&dq=oracle+disable+adr&source=bl&ots=brbhVP05RD&sig=WpaASLcGzJHgBB8Q-RqHu0Efy3k&hl=en&ei=AybaSemkNJSwywWSptXuDg&sa=X&oi=book_result&ct=result&resnum=7#PPA81,M1
    When I list hm rows (from x$dbkrun I presume), it shows this detail (which repeats many, many times)
    HM RUN RECORD 1
    RUN_ID 184481
    RUN_NAME HM_RUN_184481
    CHECK_NAME DB Structure Integrity Check
    NAME_ID 2
    MODE 2
    START_TIME 2009-04-05 22:44:43.385054 -05:00
    RESUME_TIME <NULL>
    END_TIME 2009-04-05 22:44:43.718198 -05:00
    MODIFIED_TIME 2009-04-05 22:44:43.718198 -05:00
    TIMEOUT 0
    FLAGS 0
    STATUS 5
    SRC_INCIDENT_ID 0
    NUM_INCIDENTS 0
    ERR_NUMBER 0
    REPORT_FILE <NULL>
    The corresponding data from v$hm_run (and the view definition itself) show that status = 5 means 'COMPLETED'. The interesting thing is that RUN_MODE is 'REACTIVE'. This seems to say that it's not a proactive thing. But if something is reacting, then why is it showing no error (i.e., ERR_NUMBER = 0)?

  • File not getting created in a different server

    My requirement:
    I have written code in the BI system and now need to write an empty file (say a.done) to the directory /interfaces of the PI system.
    I wrote it using OPEN DATASET and CLOSE DATASET; however, the file (a.done) is not getting created on the PI system even though the directory /interfaces exists.
    When I give any directory of the BI system, the file (a.done) is getting created, i.e. the file is created on the same server and not on the different server.
    Is there any function module or any other way for the file to get generated on the PI system?
    Please explain with an example.
    Regards,
    Vish

    Try searching for FTP in SE37, or check out the function module below:
    CALL FUNCTION 'EPS_FTP_MPUT'
      EXPORTING
        RFC_DESTINATION            =
    *   FILE_MASK                  = ' '
    *   LOCAL_DIRECTORY            = ' '
    *   REMOTE_DIRECTORY           = ' '
    *   OVERWRITE_MODE             = ' '
    *   TEXT_MODE                  = ' '
    *   TRANSMISSION_MONITOR       = 'X'
    *   RECORDS_PER_TRANSFER       = 10
    *   MONITOR_TITLE              =
    *   MONITOR_TEXT1              =
    *   MONITOR_TEXT2              =
    *   PROGRESS_TEXT              =
    * IMPORTING
    *   LOCAL_DIRECTORY            =
    *   REMOTE_DIRECTORY           =
    *   LOCAL_SYSTEM_INFO          =
    *   REMOTE_SYSTEM_INFO         =
    * TABLES
    *   FILE_LIST                  =
    * EXCEPTIONS
    *   CONNECTION_FAILED          = 1
    *   INVALID_VERSION            = 2
    *   INVALID_ARGUMENTS          = 3
    *   GET_DIR_LIST_FAILED        = 4
    *   FILE_TRANSFER_FAILED       = 5
    *   STOPPED_BY_USER            = 6
    *   OTHERS                     = 7
    IF SY-SUBRC <> 0.
    * Implement suitable error handling here
    ENDIF.

  • Session gets created for every request.

    Hi ,
    I have two servlets where I set an object in the session in one servlet, say servlet1.java, and get it in another servlet, say servlet2.java.
    When I pass my session object from servlet1 to servlet2, servlet2 is supposed to get the attribute, but it always gives me a null value.
    I found out that a new session is getting created every time I try to access servlet2.
    I tried to print the session object, and it clearly tells me that the object is different from the one I set.
    Can anyone let me know why this is happening and how I can solve it?
    This happens for me on a WebLogic 8.1 setup.
    In my JBoss setup, the same code works fine.
    Thanks in advance.

    Sasikanth,
    Pardon me if I am stating the obvious, but according to your description the problem is with WebLogic. So did you try a WebLogic specific forum?
    Good Luck,
    Avi.

  • Capping dev_ms trace file size

    Hi - I'm wondering if anyone knows a way to help. We have been asked by SAP to run our message server at an elevated trace level (trace level 3 - we set it from SMMS). This writes out a huge amount of data to the dev_ms trace file.
    The default trace file (dev*) size, per the rdisp/TRACE_LOGGING parameter default, is "on, 10m" (i.e. 10 MB).
    That applies to all the dev* trace files, obviously.
    With our elevated dev_ms trace set, it wraps in ~3 minutes, so between dev_ms.old and dev_ms we never have more than ~10 minutes of logs kept at any one time.
    The point of running at this elevated dev_ms trace level is so we can capture (save off) the trace file and send it to SAP the next time our message server crashes.
    Our SAP file system mount point /usr/sap/<SID> is limited in size, and setting rdisp/TRACE_LOGGING to a higher value affects all the dev* files, not just the one file I really care about raising the cap on (dev_ms).
    QUESTION: Does anyone know a way I could keep dev_ms capped at a large value like 100 MB yet keep all the other dev files at the normal 10 MB default? Thanks in advance.

    1.  Increase rdisp/TRACE_LOGGING to 100MB.
    2.  Set (SM51) > Select All Processes > Menu > Process > Trace > Active Components > Uncheck everything and set trace level to 1.
    3.  Menu > Process > Trace > Dispatcher > Change Trace Level > Set to 2
    Wouldn't this essentially just increase dev_ms to 100MB while leaving other dev* trace files to not log anything?
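    If the cap has to stay global, another angle is to archive each wrapped generation of dev_ms before it is overwritten, so more than ~10 minutes of history survive. A minimal Python sketch of that idea (the file names and the inode-change heuristic are assumptions, not an SAP-provided mechanism):

```python
import os
import shutil
import time

def archive_on_wrap(trace_path, archive_dir, poll_seconds=30, max_polls=1):
    """Copy <trace_path>.old to a timestamped file whenever it is replaced.

    Each wrap renames dev_ms to dev_ms.old, so a new inode on the .old
    file signals that a wrap happened since the last poll.
    """
    old_path = trace_path + ".old"
    last_inode = None
    for _ in range(max_polls):
        if os.path.exists(old_path):
            inode = os.stat(old_path).st_ino
            if inode != last_inode:
                stamp = time.strftime("%Y%m%d-%H%M%S")
                dest = os.path.join(archive_dir,
                                    os.path.basename(old_path) + "." + stamp)
                shutil.copy2(old_path, dest)
                last_inode = inode
        time.sleep(poll_seconds)
    return last_inode
```

    Run with a large max_polls (or from a wrapper loop) while waiting for the next crash; the archive directory should sit on a mount point with more room than /usr/sap/<SID>.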

  • How to identify which trace file is your backup controlfile trace in udump

    I have a 10.2.0.3 database on unix.
    I want to setup a job to run a script every night to backup the controlfile as text format.
    such as "alter database backup controlfile to trace".
    How can my script identify which trace file is the one just created for the backup controlfile, and copy that file to a backup disk?
    Thanks a lot!

    I thought it's:
    alter database backup controlfile to trace as 'absolute path of any file where you want the control file in clear text format';
    For example, on Windows:
    alter database backup controlfile to trace as 'c:\temp\create_ORCL_control.sql';
    This way you will know where your job is going to back up the controlfile trace.
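    If the job must run the plain "backup controlfile to trace" form (without the AS clause), the script can still identify the freshly written trace by scanning udump for the newest .trc containing a CREATE CONTROLFILE statement. A minimal Python sketch of that idea (the marker string and scan approach are an assumption, not an Oracle API):

```python
import glob
import os

def newest_controlfile_trace(udump_dir):
    """Return the newest .trc file in udump_dir that contains a
    CREATE CONTROLFILE statement, or None if there is none."""
    candidates = []
    for path in glob.glob(os.path.join(udump_dir, "*.trc")):
        with open(path, errors="replace") as f:
            if "CREATE CONTROLFILE" in f.read():
                candidates.append(path)
    if not candidates:
        return None
    # the trace just written by this session is the most recent match
    return max(candidates, key=os.path.getmtime)
```

    The AS-clause approach above is still simpler when available, since it avoids scanning altogether.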

  • New weblogic cookie gets created - app is protected by SAM J2EE agent

    All
    We have installed J2EE Agent 2.2 on WebLogic App Server (8.1 SP6) fine.
    We get authenticated against Access Manager while accessing a WebLogic app; however, it seems that
    we are going in a loop and a new WebLogic session ID is getting created every time.
    We have the AMfilter as the first filter, etc. We have deployed the SampleApp etc. fine in the past.
    We have checked the AMAgent.properties file, etc.
    The WebLogic app is in the same domain as our SAM server, and the cookie domain is set fine.
    We can see that the SAM cookie is used fine, but it seems like WebLogic thinks it's a new session
    and creates a new session cookie every time.
    Any ideas
    Thanks

    We got some new jar files for the agent and now the agent with the tomcat container is working as expected.

  • Trace files - 10gr2

    I am setting an event trace using '10046 trace name context forever, level 12'. Sometimes trace files under /bdump get generated and sometimes not. Why is that? I use the following to get the P_ID (process ID), and I don't see any files with that name, although I notice that there is a background process with that OS ID. Do I need to use any ALTER SYSTEM commands to generate the trace files after my event trace is done?
    shell10g::/d01/apps/oracle10/admin/dev/bdump> ps -ef | grep 1391
    oracle 5641 9871 0 14:10:49 pts/1 0:00 grep 1391
    oracle 1391 1 0 Oct 18 ? 151:57 ora_s002_dev
    shell10g::/d01/apps/oracle10/admin/dev/bdump>
    Thanks.

    Please see below. The alert log says it created a trace file, but I don't see it. Why is that?
    Alert log
    Details in trace file /d01/apps/oracle10/admin/fdev88/bdump/fdev88_s002_1391.trc
    KGL object name :explain plan for
    SELECT
    AA.BUSINESS_UNIT,
    AA.DEPTID,
    shell10g::/d01/apps/oracle10/admin/fdev88/bdump> ls -lrt
    total 16
    -rw-r--r-- 1 oracle dba 2377 Oct 22 15:11 alert_fdev88.log
    shell10g::/d01/apps/oracle10/admin/fdev88/bdump>

  • Another technical system for SRM is getting created in SLD

    Hello All,
    I have a problem with a duplicate Technical System being created automatically in our SLD. Since there is no business system assigned to it, I am deleting it every day, though it keeps on getting created every day. I understand that there could be a job which does this periodically.
    Because of this, we encounter the NO BUSINESS SYSTEM assigned error all the time. The workaround is to delete that duplicate Technical System in SLD.
    What I observed is that we have three software component versions imported for SAP APPL. It might be funny to ask, but do these multiple SWCVs for SAP APPL have anything to do with this issue? I also imported the SRM Server 7.01 SWCV; there is no other version for this SC.
    Please throw some light and give me some idea.
    Regards,
    Lakshman V.

    I also noticed that there is a difference in the Installation numbers of both the Technical Systems.
    Hi Lakshman,
    Which Technical System has the correct SRM installation number: the existing Technical System, or the new Technical System which is getting created every day?
    Scheduled SLD Data supplier job in SRM collects the data(SAP products, database parameters, hosts, clients, and so on) and send it to the SLD Server.

  • Computers downloading CAB files every hour on the hour

    Hello, I can see from my firewall logs that most every computer on my network tries to download via port 80, Microsoft CAB files starting at midnight and then every hour on the hour, all day.
    The sources seem to be Akamai sites: 64.129.104.174, 64.129.104.173, 64.129.104.150.
    The CAB files are in the form of:
    d5419256-6c9a-4ef5-bfe7-4eca1049d134.devicemetadata-ms
    b3004f87-774a-4cc6-9c51-89264da4ed5a.devicemetadata-ms  
    fa054a55-da98-402c-b079-26e093f5aa2e.devicemetadata-ms  
    I understand these look to be device metadata packages for some piece or pieces of hardware, but I'm wondering if this is a normal behavior, and why every machine is doing this?  Is this a form of Windows update?
    Thanks

    Hi,
    In order to analyze your issue better, if it's possible, please upload the whole Windows Firewall log file to OneDrive, and share the link here. The default path for the log is
    %windir%\system32\logfiles\firewall\pfirewall.log.
    In addition, these files are for your devices. A driver package can install device metadata packages by copying them to the device metadata store. If you install devices or update drivers, the metadata will be downloaded. Make sure your port is open, and
    then update all your drivers manually and check the result.
    If it doesn't help, located to this path:
    C:\Users\YOURNAME\AppData\Local\Microsoft\Device Metadata\dmrccache\en-US
    There you will see subfolders corresponding to your installed devices. Within each sub folder you'll see a folder named 'DeviceInformation'. Open it and let us know which devices has issue.
    Note: The GUID (like d5419256-6c9a-4ef5-bfe7-4eca1049d134) should be same as what you see in the log file.
    Karen Hu
    TechNet Community Support

  • Mail is delivered as soon as it arrives at gmail server instead of every hour as I requested

    I have set my mail preferences to get mail every hour. But mail arrives when it arrives. It can be every few minutes. This has been going on for a while (Not helpful, I know, but bear with me.)
    For the last few weeks I have been getting an exclamation mark next to one or the other of my gmail accounts with errors that range from 'non secure device' to ' too many connections'. Trawling through the gmail forum has led me to follow all of the steps they suggest but the 'too many connections' things won't go away.
    One point they make is that if you are 'getting' mail more frequently than every 10/15 minutes this can cause the problem.
    Now, if I could pinpoint the actual time when this phenomenon of mail appearing when it wants to began, we would have an idea of whether it was connected to the latest Yosemite update. Unfortunately, I can't.
    But I have trawled all through the account settings on google and cannot find any place where I set when gmail sends my mail so I reluctantly conclude that Apple Mail is ignoring my setting of 'check once an hour' and is checking at some randomly selected period.
    Does anyone else have this problem? Does anyone have any idea about how to fix it. I am fed up with going to gmail captcha and telling it again and again to accept this device; with signing in on gmail to the account in question; with this whole stupidity in general. Once I do all of the above, everything seems fine until the next mail arrives when the exclamation mark appears again and I have to go through the whole rigamarole again.
    Thanks for any help.

    Thanks for the detailed procedure, it helped a lot!

  • Unusual TREX trace file getting created

    Dear All
    We have installed TREX, and on the TREX server it is creating a trace file TrexQueueServerAlert_myportalci.trc of more than 30 GB.
    Following is an extract of that file. What can be the reason for this huge trace file?
    Regards
    Buddhike
    [2700] 2008-12-17 16:34:47.158 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [2700] 2008-12-17 16:34:47.158 e Qidx_publi QueueDocStore::hasDocument(01235) : : DocIDMissing
    [5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [4960] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [3772] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [6052] 2008-12-17 16:34:51.111 e Qidx_publi DocStateStore.cpp(00570) : DocStateStore::getDocument(UDIV): udiv: 756, result: 4501
    [6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::hasDocument(01235) : : DocIDMissing
    [6052] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [4960] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [6052] 2008-12-17 16:34:51.111 e Qidx_publi Queue.cpp(04093) : Queue::preprocessMsg: preprocessing doc:  not found
    [5296] 2008-12-17 16:34:51.111 e Qidx_publi DocStateStore.cpp(00570) : DocStateStore::getDocument(UDIV): udiv: 756, result: 4501
    [5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::hasDocument(01235) : : DocIDMissing
    [5296] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [3772] 2008-12-17 16:34:51.111 e Qidx_publi QueueDocStore::getDocID(01189) : : DocNotFound
    [5296] 2008-12-17 16:34:51.111 e Qidx_publi Queue.cpp(04093) : Queue::preprocessMsg: preprocessing doc:  not found
    [3772] 2008-12-17 16:34:51.111 e Qidx_publi DocStateStore.cpp(00570) : DocStateStore::getDocument(UDIV): udiv: 756,

    Dear Michell
    Thanks for your post. How can I change the trace levels in TREX?
    Which trace level should I keep?
    Regards
    Buddhike

  • Agent10g: Size of Management Agent Log and Trace Files get oversize ...

    Hi,
    I have the following problem:
    I installed the EM Agent 10g (v10.2.0.4) on each of my Oracle servers. I did this a long time ago (a few months or a few years, depending on which server it was installed on). Recently, I got a PERL error because the "trace" file of the Agent was too big (the emagent.trc was more than 1 GB)!
    I don't know why. I checked on a particular server on the AGENT_HOME\sysman\config (Windows) for the emd.properties file.
    The following properties are specified in the emd.properties file:
    LogFileMaxSize=4096
    LogFileMaxRolls=4
    TrcFileMaxSize=4096
    TrcFileMaxRolls=4
    This file had never been modified (those properties correspond to the default values). It's the same situation for every Agent10g setup on all of the Oracle servers.
    Any idea ?
    NOTE: The Agent is stopped and started weekly ...
    Thanks
    Yves

    Why don't you truncate the trace file weekly? You can also delete the file; it will be recreated automatically whenever there is a trace.
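    If you script that weekly cleanup, truncating in place is safer than deleting while the agent is running: a deleted file the agent still holds open keeps consuming space on an invisible inode until restart. A minimal Python sketch (the size threshold is an arbitrary example, not an Agent setting):

```python
import os

def truncate_if_oversize(path, max_bytes=100 * 1024 * 1024):
    """Truncate the trace file in place when it exceeds max_bytes.

    Truncating rather than unlinking matters because the writing
    process keeps the file descriptor open. Returns True if truncated.
    """
    if os.path.exists(path) and os.path.getsize(path) > max_bytes:
        with open(path, "r+") as f:
            f.truncate(0)
        return True
    return False
```

    Checking why LogFileMaxSize/TrcFileMaxSize rolling is not taking effect is still the proper fix; this only bounds the damage in the meantime.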

  • Empty files are getting created at receiver FTP server

    Hi Experts,
    I have an Idoc to File scenario where I am sending an XML file to receiver FTP server.
    Scenario is working fine but sometimes an empty file is getting generated at receiver FTP server.
    I have already selected ignore empty file at receiver channel so issue is not within PI system configuration.
    When I checked the message log I can see that almost all the files are getting created successfully without any issues, but
    for some files/messages I can see that there are below error logs.
    "Transmitting the message to endpoint <local> using connection IDoc_AAE_http://sap.com/xi/XI/System failed, due to: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Could not get FTP connection from connection pool (1 connections) within 5,000 milliseconds; increase the number of available connections"
    "Exception caught by adapter framework: Could not get FTP connection from connection pool (1 connections) within 5,000 milliseconds; increase the number of available connections."
    And after that there is again a success entry in the log for the same message, and it creates a file successfully at that timestamp.
    But the third party is sometimes receiving an empty file, which I am not able to find in any trace or log (my file name is SD_timestamp.xml).
    Can you please let me know what is the solution and what adjustments FTP server need to do in order to resolve this issue.
    Thanks in advance.
    Regards,
    Rahul Kulkarni

    The error you are getting that says "Could not get FTP connection from connection pool (1 connections) within 5,000 milliseconds; increase the number of available connections" probably has nothing to do with the empty files problem.
    I second Hareesh Gampa that you should first try "temporary file creation". You might also need to tell the FTP owner that he should only pick up files that are written completely and that comply with a negotiated file name schema. The temp files should, of course, use a different schema. He should not pick up just every file that is written. See here for details:
    http://help.sap.de/saphelp_nw74/helpdata/en/44/6830e67f2a6d12e10000000a1553f6/content.htm
    HTH
    Cheers
    Jens
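    The "temporary file creation" option amounts to the classic write-then-rename pattern: write under a temporary name and rename only once the file is complete, so a poller never sees a partial or empty file. A minimal local-filesystem sketch of the pattern (names are illustrative; the receiver channel performs the equivalent steps on the FTP server):

```python
import os

def write_atomically(final_path, data, tmp_suffix=".part"):
    """Write to a temp name, then rename, so consumers polling the
    directory never observe a partially written final file."""
    tmp_path = final_path + tmp_suffix
    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # ensure the bytes are on disk before rename
    os.rename(tmp_path, final_path)  # atomic on the same filesystem
```

    The negotiated naming convention then becomes: the third party picks up only *.xml and never *.part.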
