Big File to IDOC - performance issue

Hi All,
I am trying to create a scenario where I have a file with approximately 10,000 rows. From each row I create one IDoc and send it to R/3. The interface itself works, but it brings the XI box to its knees for a while and you cannot access it.
The full scenario looks like this:
File -> BPM (for 1:n) -> IDOC
I tried to reduce the workload by splitting the file into smaller ones (500 rows per file), but then the file adapter picks up all the files and processes them in parallel. So this is the new scenario:
BigFile -> XI -> File -> BPM(1:n) -> IDOC
I tried setting the second file sender communication channel to EOIO, but that does not seem to work - either that, or the messages from the queue are processed too fast: while one message is still starting the BPM, the next file message already begins processing.
Do you have any ideas on how to make this more responsive and reduce the performance impact?
thanks in advance.
Dawid
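
For reference, the throttling this scenario needs can also be done by the sender file adapter itself: with File Content Conversion, the Recordsets per Message parameter splits one large file into a series of smaller messages, without a separate file-splitting step. A minimal sketch of the relevant sender channel settings (the structure name Row and all values are illustrative only, not taken from the original interface):

    Message Protocol:        File Content Conversion
    Recordset Structure:     Row,*
    Recordsets per Message:  500
    Row.fieldSeparator:      ,
    Row.endSeparator:        'nl'

Combined with Quality of Service EOIO on the channel, the resulting packages are queued and processed in order rather than in parallel.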

Hi,
Since mappings are processed by the J2EE Engine, the maximum available Java heap may be a limiting factor for the maximum document size the XI mapping service is able to process. Tests have shown that processing of XSLT mappings consumes up to 20 times the source document size (using identity mapping). The maximum available Java heap for 32-bit JVMs is platform-dependent. Using 64-bit JVM platforms is an option here.
Current maximum heap sizes – 32-bit:
OS        Maximum heap (GB)
Linux     2
Windows   1.2 – 1.4
The Java heap is limited by the heap limit of the process (may be limited by address space because operating system code or libraries may also be loaded within the same address space). Also, Java internal memory areas such as the permanent space for loading Java classes must fit into the same address space.
Java VM tuning is one of the most crucial tuning steps, especially for more complex scenarios. For information about setting baseline JVM parameters, see SAP Note 723909. You must also take platform-specific parameters into account (for example, JIT compiler settings). The impact of Garbage Collection (GC) behavior especially may become a critical issue. Overall GC times for the J2EE application should be well below 5%. For more information about GC behavior and settings, see also SAP Note 552522.
Specific to XI is the fact that you sometimes need to process large documents for mapping or when using signatures. This can lead to excessive memory usage on the Java side. Therefore, you must observe Garbage Collection and the available Java heap in order to evaluate performance and prevent OutOfMemory exceptions. Since XI mapping is processed by stateless session beans that are called using a JCo interface, this may lead to a reduction of parallel JCo server threads within the JCo RFC Provider service of a J2EE server node (you can compensate for this by adding J2EE server nodes).
Mudit
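
As a concrete illustration of the baseline tuning above, the heap and GC settings typically look like this (values are illustrative only; the right numbers come from SAP Note 723909 and your platform):

    -Xms1024m -Xmx1024m     (fixed heap, sized within the 32-bit limits above)
    -XX:MaxPermSize=256m    (permanent space for loaded classes)
    -verbose:gc             (log GC activity, so overall GC time can be verified to stay well below 5%)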

Similar Messages

  • File to Idoc - EOF issue.

    Hi,
    I am working on a file-to-IDoc scenario, but I am facing a problem...
    The source file contains ..EOF.. as the last line. There is no need to map it to any field in the IDoc, so I have simply created an element for it in the source data type and I am ignoring it...
    But I am getting this following error message...
        Conversion of file content to XML failed at position 0: java.lang.Exception: ERROR consistency check in recordset structure validation (line no. 43: missing structure(s) in last recordset
      Please help me to resolve this issue...
      Thanks in advance!!!
    Regards,
    Vivek LR

    Hi,
    Are you using content conversion in the file adapter? Check your content conversion parameters - the adapter is not able to convert the file to the XML structure.
    What do you mean by "I have simply created an element for it in the source data type and I am ignoring it..."? I take it you are simply not mapping it in the mapping, right?
    chirag

  • File To IDOC Scenario issue

    Hello Friends,
    I am facing an issue in a File-to-IDOC scenario.
    Sender side: Text file
    It contains data:
    name,surname,7894561230 i.e. phone no.
    My sender data type is also designed the same way.
    I am getting the error below in SXMB_MONI:
      <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    - <!--  Request Message Mapping
      -->
    - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>Application</SAP:Category>
      <SAP:Code area="MAPPING">EXCEPTION_DURING_EXECUTE</SAP:Code>
      <SAP:P1>com/sap/xi/tf/_mm_file_to_idoc_</SAP:P1>
      <SAP:P2>java.lang.NullPointerException</SAP:P2>
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:ApplicationFaultMessage namespace="" />
      <SAP:Stack>java.lang.NullPointerException thrown during application mapping com/sap/xi/tf/_mm_file_to_idoc_:</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>
    In the sender file adapter I am using 'File' as the message protocol.
    Do I have to use 'File Content Conversion'?
    I tested the message mapping; it works fine.
    Kindly suggest how I can resolve the above error.
    Regards,
    Narendra

    It contains data:
    name,surname,7894561230 i.e. phone no.
    java.lang.NullPointerException thrown during application mapping com/sap/xi/tf/_mm_file_to_idoc_
    In the sender file adapter I am using 'File' as the message protocol.
    Do I have to use 'File Content Conversion'?
    Can you tell us in what format your source file is? I mean, is it an XML or a CSV file?
    If it is a CSV file then you need FCC - but then the error should have been thrown by the channel itself, and the message would not have reached the mapping step.
    If your source file really is in CSV format then apply FCC; many blogs and references are readily available on SDN.
    Regards,
    Abhishek.
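
    For a comma-separated line like name,surname,7894561230, a minimal sender-channel FCC sketch looks like this (the recordset name Row is a hypothetical placeholder, not taken from the original interface):

        Message Protocol:     File Content Conversion
        Recordset Structure:  Row,*
        Row.fieldNames:       name,surname,phone
        Row.fieldSeparator:   ,
        Row.endSeparator:     'nl'

    With this in place, the adapter delivers the XML structure the message mapping was tested with, and the NullPointerException at the mapping step should no longer occur.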

  • PI7.1 File to Idoc scenario issue.

    Hi,
    We recently had an upgrade from PI7 to PI7.1.
    When testing a File-to-IDoc scenario, we had the issue below:
    <Trace level="1" type="T">*** START APPLICATION TRACE ***</Trace>
      <Trace level="1" type="T">Error during lookup - com.sap.aii.mapping.lookup.LookupException: Error when calling an adapter by using the communication channel GeneratedReceiverChannel_RFC (Party: , Service: ED2_200, Object ID: a8f14398a9993dccadbe983d401f693a) The channel with object ID a8f14398a9993dccadbe983d401f693a could not be found in the Integration Server Java Cache. Check if the channel exists in the Integration Builder Directory and execute a refresh of the Java Cache.</Trace>
      <Trace level="1" type="T">Error when parsing RFC Response - null</Trace>
      <Trace level="1" type="T">*** END APPLICATION TRACE ***</Trace>
    This was working fine before the upgrade.
    Are any changes required in the scenario?
    Please let me know.
    Thanks,
    Srinivasa

    Do a CPA cache refresh:
    http://hostname:port/CPACache/refresh?mode=full
    Also check whether the object is visible in transaction SXI_CACHE, and refresh SXI_CACHE as well.

  • WILL A BIG INDEX CAUSE A PERFORMANCE ISSUE?

    If a table takes a lot of inserts, its data will grow; if the index on it becomes huge, can that really cause a performance issue?
    Is there a document on Metalink that says that if the index reaches 50% of the data size we have to rebuild it? What are the criteria and thresholds for rebuilding an index?

    A big index by itself won't cause a performance issue. There are other circumstances you should consider for the index.
    First of all, which kind of index are you talking about? There are several kinds of indexes in Oracle. Assuming you mean a regular B*Tree index, you should consider factors such as selectivity and cardinality. If the indexed column has evenly distributed values, the index will be highly selective. If the indexed column is highly skewed, then in order for the index not to become a real bottleneck you should gather histograms, so that selectivity can be calculated at execution time: when a query retrieves a highly selective data range, the index won't slow performance down; otherwise, a full table scan will be considered the better access path.
    Rebuilding an index is an operation performed when the index becomes invalid, or when migrating the index to a new tablespace - not when you suspect the index has become 'fragmented'; in that case you should use the COALESCE command instead. Oracle provides efficient algorithms to keep the index balanced.
    ~ Madrid
    http://hrivera99.blogspot.com/

  • File - XI - Idoc : mapping issue

    Hello gurus,
    I have a mapping issue:
    I have a mapping between FICABillingNotification to FKK_EBS_DOC_TREE
    Some fields are mapped one to one, others a mapped to constants.
    But for some reason I don't understand, not all fields get the right value.
    For example:
    Working:
    Constant 1 --> BEGIN
    Constant LS --> RCVPRT (Receiver Partner Type)
    Not working:
    Constant FILE-->RCVPOR (Receiver Port)
    I had another value the first time I tried it; I have now changed it to FILE, but this value is not used by the mapping.
    Any help would be welcome
    Thanks
    Thomas
    Edited by: Thomas Pary on May 28, 2008 3:36 PM

    This was already done (sorry, I didn't answer that question).
    But OK, now I see that for RCVPOR the value of the constant "FILE" is mapped, and I can see it in the payload in SXMB_MONI after the mapping. But it isn't populated into the IDoc field :-s
    Another problem is that, as mentioned above, some of my values are not mapped:
    <FICAExternalBilling>
    --<DocumentHeader> mapped to EF1KK_EBS_DOC_HEADER
    <BillFromId>25001254</BillFromId> mapped to REF_DOC_NUMBER
    <PostingDate>20080520</PostingDate> mapped to POST_DATE
    <DocumentDate>20080520</DocumentDate> mapped to DOC_DATE
    <OriginTypeId>IV</OriginTypeId> mapped to EXT_DOC_TYPE
    <ObjectType/>
    <ObjectKey/>
    <PendingCommitmentGroupID/>
    <InvoiceReferenceID/>
    <DisputeDocumentReferenceID/>
    --</DocumentHeader>
    The value of OriginTypeId is not mapped, but the values of DocumentDate and PostingDate are.

  • Flat file to idoc mapping issue

    Hi Gurus,
    I have a flat file format on the sender side as below:
    H_ID   TYP_CODE   line_element    Quantity
    5896   STANDARD   1.transmitter   1
    5896   STANDARD   2.xxxxxxxxx     1
    5896   STANDARD   3.yyyyyyyyy     2
    6895   STANDARD   1.aaaaaaaaa     1
    9436   STANDARD   1.bbbbbbbbb     4
    9436   STANDARD   2.ggggggggg     3
    The above file needs to be sent to an IDoc.
    Rows with the same H_ID value should create only one header segment, and under it as many line segments as there are line items for that H_ID value.
    My question is how to suppress the repetitive header values so that only one header segment is created, and how to create as many line_item segments as there are line items.
    Points are assured for responses.
    Thanks in advance,
    Sekhar.

    About the mapping problem, I suggest you look at the following links, which are really helpful for improving your knowledge of mapping:
    /people/sravya.talanki2/blog/2005/08/16/message-mapping-simplified--part-i
    /people/sravya.talanki2/blog/2005/12/08/message-mapping-simplified-150-part-ii
    I also suggest the following links to learn more about mapping:
    Mapping functionality in XI
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9202d890-0201-0010-1588-adb5e89a6638
    SAP Exchange Infrastructure - Graphical_Mapping
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6658bd90-0201-0010-fbb6-afe25fb398d3
    SAP Exchange Infrastructure - Graphical Mapping Exercise
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/fd5ebd90-0201-0010-d697-91374d5b5190
    SAP Exchange Infrastructure - Graphical Mapping - Advanced
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/be05e290-0201-0010-e997-b6e55f9548dd
    SAP Exchange Infrastructure: Mapping Patterns - Understand Context Handling in Message Mapping - Webinar Powerpoint
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f59730fa-0901-0010-df97-c12f071f7d3b
    SAP NetWeaver Exchange Infrastructure Mapping Troubleshooting - Webinar Powerpoint
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e01e9400-9e81-2910-20a5-a862945a5e98
    Mapping Lookups a RFC API
    Mapping lookups - RFC API
    XI 3.0 New Mapping Features
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/8a57d190-0201-0010-9e87-d8f327e1dba7
    I hope these links will be useful.
    Regards,
    Salvatore
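
    As a concrete illustration of the context handling discussed in these links: the header collapsing can be done graphically (sort by H_ID, then SplitByValue with 'Value Changed') or with a small queue-type UDF. A sketch of such a UDF, as written in the graphical mapping editor where ResultList and Container are provided (the function name is made up, and it assumes the rows arrive sorted by H_ID in one context):

        public void collapseHeaderIds(String[] ids, ResultList result, Container container) {
            // Emit each H_ID only once per consecutive group:
            // one value per header segment.
            String previous = null;
            for (int i = 0; i < ids.length; i++) {
                if (!ids[i].equals(previous)) {
                    result.addValue(ids[i]);
                }
                previous = ids[i];
            }
        }

    Feeding this result to the header segment (split into one context per value via SplitByValue if needed) gives one segment per distinct H_ID, while the unchanged H_ID queue drives the line-item segments.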

  • IDoc Performance Issue

    Hi,
    I have a strange issue with IDoc delivery to the R/3 systems.
    We have an IDoc created every 30 minutes and then delivered to the R/3 systems.
    Because of some system issues, 2 IDocs that were created at a 30-minute interval were delivered to the R/3 system at the same time, which should not happen. This has caused a data conflict.
    Is there any mechanism that can be incorporated to avoid dispatching 2 or more IDocs of the same type at the same time?

    Hi
    At IDoc delivery, add a timestamp (the system time) to the IDoc name.
    Thanks
    Venkat Anil

  • Help Required - File to Proxy (Performance Issue)

    Hi All,
    One of my file-to-proxy scenarios is taking 3 to 4 days to execute.
    Basically, XI picks up a file of 2-3 lakh (200,000-300,000) records and pushes it to SAP via an ABAP proxy. On the ABAP side, a BDC call is made to process the data. The whole scenario takes 3 to 4 days to execute.
    The scenario is asynchronous and no BPM is used, as it is a very straightforward scenario. Also, the file can't be split into, say, 10,000-record chunks, because all of these records are interrelated and have to reach the SAP side in a single shot.
    Is there anything that can be done on either the XI or the ABAP side to optimize the scenario?
    Thanks,
    Joe.

    Joe,
    Can you give more details?
    Is this an asynchronous call or a synchronous call?
    Are you using a BPM? Maybe there is something wrong in the way you have designed your interface.
    Proxies are supposed to provide the best performance, and the fact that it is taking such a long time is really strange; if you can give us details of your interface, the reason for this issue can perhaps be found.
    Meanwhile also look into this guide,
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/70ada5ef-0201-0010-1f8b-c935e444b0ad
    Regards
    Bhavesh

  • Download XMl file from SFTP: performance issue

    Hi All,
    I am downloading an XML file of almost 15-20 MB from an SFTP server using WinSCP.
    C# code in an SSIS script component loops over the remote directory to find the most recent file and then downloads it. This process takes almost 20 minutes to run in SSIS.
    Could anyone please suggest an optimized solution?
    A few key notes:
    1> On the SFTP server there will always be a larger number of files, say 50-60.
    2> The file size may grow over time.
    3> I compare file names to find the latest file, because the file name is suffixed with a timestamp, e.g. filename_YYYYMMDD.xml.
    Below is my C# code used in the script component.
    // Requires the WinSCP .NET assembly (WinSCPnet.dll) referenced by the script project,
    // plus: using System; using System.Collections.Generic; using System.Linq; using WinSCP;
    public void Main()
    {
        // Connection settings come from SSIS package variables.
        string hostName = (string)Dts.Variables["HostName"].Value;
        string userName = (string)Dts.Variables["UserName"].Value;
        string password = (string)Dts.Variables["Password"].Value;
        string sshHostKeyFingerprint = (string)Dts.Variables["SshHostKeyFingerprint"].Value;
        string winscpExecutablePath = (string)Dts.Variables["winscpExecutablePath"].Value;
        string localOutPath = (string)Dts.Variables["User::localOutPath"].Value;
        string remoteDirectory = (string)Dts.Variables["User::ftpRemoteDirectory"].Value;

        // Set up session options.
        SessionOptions sessionOptions = new SessionOptions
        {
            Protocol = Protocol.Sftp,
            HostName = hostName,
            UserName = userName,
            Password = password,
            SshHostKeyFingerprint = sshHostKeyFingerprint
        };

        try
        {
            using (Session session = new Session())
            {
                session.ExecutablePath = winscpExecutablePath;
                session.Open(sessionOptions);

                RemoteDirectoryInfo directory = session.ListDirectory(remoteDirectory);

                if (directory.Files.Count <= 0)
                {
                    Dts.Variables["User::isFileExist"].Value = false;
                }
                else
                {
                    Dts.Variables["User::isFileExist"].Value = true;

                    List<string> lstFileNames = new List<string>();
                    for (int i = 0; i < directory.Files.Count; i++)
                    {
                        lstFileNames.Add(directory.Files[i].Name);
                    }

                    // Parse the date out of names like metrics_YYYYMMDD.xml.
                    // (Note: RemoteFileInfo also exposes LastWriteTime, which
                    // would avoid parsing file names at all.)
                    Dictionary<DateTime, int> dictFinal = new Dictionary<DateTime, int>();
                    for (int i = 0; i < lstFileNames.Count; i++)
                    {
                        if (lstFileNames[i].StartsWith("metrics_"))
                        {
                            int year = Convert.ToInt32(lstFileNames[i].Substring(8, 4));
                            int month = Convert.ToInt32(lstFileNames[i].Substring(12, 2));
                            int day = Convert.ToInt32(lstFileNames[i].Substring(14, 2));
                            dictFinal.Add(new DateTime(year, month, day), i);
                        }
                    }

                    // The newest date wins.
                    DateTime latestDate = dictFinal.Keys.OrderByDescending(x => x).First();
                    string latestFileName = lstFileNames[dictFinal[latestDate]];

                    // Download the latest file in binary mode.
                    TransferOptions transferOptions = new TransferOptions();
                    transferOptions.TransferMode = TransferMode.Binary;

                    TransferOperationResult transferResult = session.GetFiles(
                        remoteDirectory + latestFileName, localOutPath + @"\",
                        false, transferOptions);
                    transferResult.Check(); // throws if any transfer failed

                    // Report results.
                    bool fireAgain = false;
                    foreach (TransferEventArgs transfer in transferResult.Transfers)
                    {
                        Dts.Events.FireInformation(0, null,
                            string.Format("Download of {0} succeeded", transfer.FileName),
                            null, 0, ref fireAgain);
                    }
                }
            }
            Dts.TaskResult = (int)DTSExecResult.Success;
        }
        catch (Exception e)
        {
            Dts.Events.FireError(0, null,
                string.Format("Error when using WinSCP to Download files: {0}", e),
                null, 0);
            Dts.TaskResult = (int)DTSExecResult.Failure;
        }
    }

    Hi Rahul,
    Is it possible for you to get the latest file by comparing the CreationTime of these files rather than comparing the file names? If so, you can try the code in Reza’s blog:
    http://www.rad.pasfu.com/index.php?/archives/30-Find-Last-Created-File-in-Special-Directory-SSIS.html 
    Alternatively, maybe you can try a free third party SFTP Task available on the CodePlex:
    http://ssissftp.codeplex.com/ 
    Regards,
    Mike Yin
    TechNet Community Support

  • Performance Issue On Single Database

    Hi,
    I have a performance issue on a single database on a SQL Server that has 32 databases. All other databases appear OK.
    How do I begin investigating this?
    I'm running SQL Server 2008 R2.
    Regards
    Paul

    Hi Paul
    As you said, you are facing the performance issue in only one database. So either nothing big is running on the other databases, or the issue is specific to that one database.
    1. Do a quick health check: blocking, high CPU, low memory.
    2. If all is good, find the worst performing queries on that database (you can find any number of scripts on the web for that).
    3. If you find a specific query that is slow, check its execution plan, find all the indexes it uses and rebuild them (stats will be updated automatically).
    4. In the query, check whether there is any scan you could turn into a seek with an index.
    Let us know if this doesn't help. Also please check the error log to see whether some specific error is logged there.
    Thanks Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/
    Please click the Mark as answer button and vote as helpful
    if this reply solves your problem

  • Performance issues - Log file parallel write

    Hi there,
    For a few months now I have been having big performance issues with my Oracle 11.2.0.1.0 database.
    If I look in Enterprise Manager (under blocking sessions) I see a lot of "log file parallel write" and a lot of "log file sync" waits.
    We have configured an Active Data Guard environment and are using ASM.
    We are not stressing the database with heavy queries or commits or anything, but sometimes during the day this happens at no specific time...
    We have investigated everything (performance to the SAN, heavy queries, Oracle problems, etc.) and we really don't know what to do anymore, so I thought: let's try a post on the forum...
    Perhaps someone has seen similar things?
    Thanks,
    BR
    Mark

    mwevromans wrote:
    See below a tail of the alert log.
    Tue Apr 24 15:12:17 2012
    Thread 1 cannot allocate new log, sequence 194085
    Checkpoint not complete
    Current log# 1 seq# 194084 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
    Current log# 1 seq# 194084 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194085
    LGWR: Standby redo logfile selected for thread 1 sequence 194085 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194085 (LGWR switch)
    Current log# 2 seq# 194085 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194085 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    Tue Apr 24 15:12:21 2012
    Archived Log entry 388061 added for thread 1 sequence 194084 ID 0x90d7aa62 dest 1:
    Tue Apr 24 15:14:09 2012
    Thread 1 cannot allocate new log, sequence 194086
    Checkpoint not complete
    Current log# 2 seq# 194085 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194085 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194086
    LGWR: Standby redo logfile selected for thread 1 sequence 194086 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194086 (LGWR switch)
    Current log# 3 seq# 194086 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
    Current log# 3 seq# 194086 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
    Tue Apr 24 15:14:14 2012
    Archived Log entry 388063 added for thread 1 sequence 194085 ID 0x90d7aa62 dest 1:
    Tue Apr 24 15:16:46 2012
    Thread 1 cannot allocate new log, sequence 194087
    Checkpoint not complete
    Current log# 3 seq# 194086 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
    Current log# 3 seq# 194086 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
    Thread 1 cannot allocate new log, sequence 194087
    Private strand flush not complete
    Current log# 3 seq# 194086 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
    Current log# 3 seq# 194086 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194087
    LGWR: Standby redo logfile selected for thread 1 sequence 194087 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194087 (LGWR switch)
    Current log# 1 seq# 194087 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
    Current log# 1 seq# 194087 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
    Tue Apr 24 15:16:54 2012
    Archived Log entry 388065 added for thread 1 sequence 194086 ID 0x90d7aa62 dest 1:
    Tue Apr 24 15:18:59 2012
    Thread 1 cannot allocate new log, sequence 194088
    Checkpoint not complete
    Current log# 1 seq# 194087 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
    Current log# 1 seq# 194087 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
    Thread 1 cannot allocate new log, sequence 194088
    Private strand flush not complete
    Current log# 1 seq# 194087 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
    Current log# 1 seq# 194087 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194088
    LGWR: Standby redo logfile selected for thread 1 sequence 194088 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194088 (LGWR switch)
    Current log# 2 seq# 194088 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194088 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    Tue Apr 24 15:19:06 2012
    Archived Log entry 388067 added for thread 1 sequence 194087 ID 0x90d7aa62 dest 1:
    Tue Apr 24 15:22:00 2012
    Thread 1 cannot allocate new log, sequence 194089
    Checkpoint not complete
    Current log# 2 seq# 194088 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194088 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    Thread 1 cannot allocate new log, sequence 194089
    Private strand flush not complete
    Current log# 2 seq# 194088 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194088 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194089
    LGWR: Standby redo logfile selected for thread 1 sequence 194089 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194089 (LGWR switch)
    Current log# 3 seq# 194089 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
    Current log# 3 seq# 194089 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
    Tue Apr 24 15:19:06 2012
    Archived Log entry 388069 added for thread 1 sequence 194088 ID 0x90d7aa62 dest 1:
    Hi
    1st switch time ==> Tue Apr 24 15:18:59 2012
    2nd switch time ==> Tue Apr 24 15:19:06 2012
    3rd switch time ==> Tue Apr 24 15:19:06 2012
    Redo log file switches have a significant impact on the performance of the database; frequent log switches may make the database slow. Oracle documentation suggests sizing the redo log files so that log switches happen more like every 15-30 minutes (roughly, depending on the architecture and recovery requirements).
    As I check the alert log, I find that the logs are switching very frequently, which is one reason you are getting the "Checkpoint not complete" message. I have faced this issue many times, and I generally increase the size of the log files and set the archive_lag_time parameter, as I have suggested above. If you want to dig down to the root cause in more detail, the people above can help you more, because I don't have much experience in database tuning; but if you are looking for a workaround, you should go through the steps above.
    Good Luck
    --neeraj

  • Photoshop CC slow in performance on big files

    Hello there!
    I've been using PS CS4 since release and upgraded to CS6 Master Collection last year.
    Since my system broke down some weeks ago (a RAM module failed), I gave Photoshop CC a try. At the same time I moved into new rooms and couldn't get my hands on the DVD of my CS6, resting somewhere at home...
    So I tried CC.
    Right now I'm using it with some big files. File size is between 2 GB and 7.5 GB max (all PSB).
    Photoshop seemed to run fast in the very beginning, but for a few days now it has been so unbelievably slow that I can't work properly.
    I wonder if it is caused by the growing files or some other issue with my machine.
    The files contain a large number of layers and masks, nearly 280 layers in the biggest file (mostly with masks).
    The images are 50 x 70 cm @ 300 dpi.
    When I try to make some brush strokes on a layer mask in the biggest file, it takes 5-20 seconds for the brush to draw... I couldn't figure out why.
    And it doesn't depend on the brush size as much as you might expect... even very small brushes (2-10 px) show this issue from time to time.
    Also, switching masks (gradient maps, selective color or levels) on and off takes ages to be displayed, sometimes more than 3 or 4 seconds.
    The same with panning around in the picture, zooming in and out or moving layers.
    It's nearly impossible to work on these files in time.
    I've never seen this on CS6.
    Now I wonder whether there's something wrong with PS or the OS. But I've never worked with files this big before.
    In March I worked on some 5 GB files with 150-200 layers in CS6, and it worked like a charm.
    SystemSpecs:
    I7 3930k (3,8 GHz)
    Asus P9X79 Deluxe
    64GB DDR3 1600Mhz Kingston HyperX
    GTX 570
    2x Corsair Force GT3 SSD
    Wacom Intuos 5 M Touch (I have some issues with the touch from time to time)
    WIN 7 Ultimate 64
    all systemupdates
    newest drivers
    PS CC
    System and PS are running on the first SSD, scratch is on the second. Both are set to be used by PS.
    79% of the RAM is allocated to PS; the cache level is set to 5 or 6, and history states ("protocol objects") to 70. I also tried different cache tile sizes from 128K to 1024K, but it didn't help a lot.
    When I open the largest file, PS takes 20-23 GB of RAM.
    Any suggestions?
    best,
    moslye

    Is it just slow drawing, or is actual computation (image size, rotate, GBlur, etc.) also slow?
    If the slowdown is drawing, then the most likely culprit would be the video card driver. Update your driver from the GPU maker's website.
    If the computation slows down, then something is interfering with Photoshop. We've seen some third party plugins, and some antivirus software cause slowdowns over time.

  • IDOC creation issue at XI side - File to IDOC

    Hi Everybody,
    I am working on a File to IDOC scenario (XI 3.0).
    We have to create Orders in the R/3 System using this Interface.
    Mapping used is Java.
    I am using 2 classes, Group and SendIdoc, one below the other in the interface mapping:
    First, the Group class groups the records in the file, and then the next class (SendIdoc) creates the IDocs.
    In the production environment, if we drop around 20 files at a time into the source NFS folder, the number of IDocs created is wrong.
    For example, for the first file dropped, 10 IDocs are expected but only 8 are created in SXMB_MONI. However, in the input payload (in SXMB_MONI) the entire text file has been read.
    When we submit the files sequentially, so that one is processed after the other, the number of IDocs created is correct.
    This is a very strange problem.
    I tried changing the polling interval of the channel, but it was of no help.
    Are there any parameters we need to set in the channel or the XI system to overcome this issue?
    Can someone help me out on this?
    Helpful answers will be rewarded points.
    Thanks & Regards,
    RK

    Hi,
    If the files work when you place them sequentially, then there is no problem with the Java code as such. One more thing: using synchronized methods or thread programming in Java maps is not advisable.
    As far as I can tell, something is going missing between the file adapter picking up the records and the input being handed to the Java map...
    Since you need to create multiple IDocs from one file, try QoS EOIO, and if that does not work, try setting Recordsets per Message to 1 or something like that. Let me know.
    Regards,
    Raj
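
    One classic cause of exactly this symptom - correct IDoc counts when files are processed one at a time, missing IDocs under parallel load - is mutable static state inside the mapping classes. A hypothetical sketch of the pitfall (only the class name Group is taken from the question; the real code is not shown in the thread):

        import com.sap.aii.mapping.api.StreamTransformation;
        import com.sap.aii.mapping.api.StreamTransformationException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.util.Map;

        public class Group implements StreamTransformation {

            // BUG: a static field is shared by all mapping calls running in
            // parallel on the same server node; groups from different files
            // overwrite each other, so some IDocs are silently lost.
            private static int groupCount = 0;

            // Safe: an instance field is created fresh for every mapping call.
            private int groupsInThisFile = 0;

            public void setParameter(Map param) {
                // parameters passed by the Integration Engine; unused in this sketch
            }

            public void execute(InputStream in, OutputStream out)
                    throws StreamTransformationException {
                // ... read the file, group the records, and emit one IDoc per
                // group, keeping all counters in instance or local variables ...
            }
        }

    If the real Group or SendIdoc classes keep any state in static members (counters, collected records, shared buffers), moving that state into execute-local variables usually fixes exactly this kind of count mismatch.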

  • Table size is too big Performance issue.

    Hi,
    Let us assume we have a table with about 160 columns. About 120 of these columns are of VARCHAR data type, each sized between about 100 and 3000.
    The table also has about 2 million rows in it. I am not sure whether this counts as a big table.
    Are tables like this a good representation of the data? I am in doubt, as the table is very big and queries might take a long time. We have about 10 indexes on this table.
    What precautions have to be taken when tables like this are involved in the database and are required by the application?
    Database version is Oracle 10.2.0.4.
    I know the question is a bit vague, but I am wondering what needs to be done, and where I should start digging, in case I run into performance issues while selecting or updating the data.
    I also want to know whether there is an ideal size for tables, beyond which they need to be treated differently.
    Thanking you
    Rocky

    Any table with more than about 50 columns should be viewed with suspicion. That doesn't mean that there aren't appropriate uses for tables with 120 or 220 columns but it does mean they are reasonably rare.
    What does bother me about your first paragraph is the number of text columns with sizes up to 3K. This is highly indicative of a bad design. One thing is for sure ... no one is writing a report and printing it on anything smaller than a plotter.
    2M rows is small by almost any definition, so I wouldn't worry about it. Partitioning is an option, but only if partition pruning can be demonstrated to work with your queries, and we haven't seen any of them, nor would we have any idea what you might use as a partition key or which type of partitioning, so any intelligent discussion of this option would require far more information from you.
    There are no precautions that relate to anything you have written. You've told us nothing about security, usage, transaction volumes, or anything else important to such a consideration.
    What needs to be done, going forward, is for someone that understands normalization to look at this table, examine the business rules, examine the purpose to which it will be put, and most importantly the reports and outputs that will be generated against it, and either justify or change the design. Then with an assessment of the table completed ... you need to run SQL and examine the plans generated using DBMS_XPLAN and timing as compared to your Service Level Agreement (SLA) with the system's customers.
