Performance problems with transaction F.13

Hi,
Has anyone applied Note 1410838 (SAPF124: Performance of clearing specific to ledger groups) while not using New G/L?
I get a dump when I run transaction F.13 for some accounts (the ones that have more than 1,000,000 items) because of performance problems, and I found this note. Although the note does not mention New G/L, it talks about ledger groups (clearing specific to ledger groups for G/L accounts). I thought ledger groups only exist in New G/L, so I do not know what the note means by 'ledger group', and whether it is correct to apply it given that I am not using New G/L.
Any idea?
Thanks,
Cecilia

Hi Cecilia,
This note is for ECC 6. You can check the "Affected Releases" section of the OSS note to see whether it applies to a particular version.
Regards,
Gaurav

Similar Messages

  • Fetching materials on the basis of Division creating a performance problem

    Hi Experts,
    I am fetching data from three tables like this:
    SELECT a~matnr a~extwg
           b~charg b~werks b~clabs b~cspem b~cinsm
           c~vfdat
      FROM mara AS a
      JOIN mchb AS b ON a~matnr = b~matnr
      JOIN mch1 AS c ON b~matnr = c~matnr AND b~charg = c~charg
      INTO TABLE itab_main
     WHERE a~spart = p_spart AND b~werks = p_werks
       AND b~lgort = 'FG01'.
    It is creating a performance problem.
    So my question is: is there an index table for MARA (like VRPMA for VBAK),
    so we can get all the materials for a division fast?
    Regards,
    Alpesh
    Edited by: Alpesh on Jun 2, 2009 3:56 PM

    Hi Alpesh,
    How about throwing MARC into the mix? I notice that you are searching by plant, storage location and division. Table MARC has an index on plant named WRK.
    If you join MARA and MARC first to determine all the materials that qualify for the given plant and division combination, you can then access table MCHB with material (MARA-MATNR), plant (MARC-WERKS) and storage location (FG01), i.e. the first 4 fields in the primary key of table MCHB.
    I could not test this theory out because we do not have any data in table MCHB in our system; your select executes quickly here for that reason.
    Please let me know how this panned out.
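    The two-step access path suggested above (resolve the small, indexed tables first, then reach the big stock table only through its leading key fields) can be sketched outside ABAP. A minimal SQLite-in-Python illustration; the tables and values are hypothetical stand-ins for MARA/MARC/MCHB, not the real SAP schema:

```python
import sqlite3

# In-memory stand-ins for MARA (materials), MARC (plant data) and
# MCHB (batch stocks). All names and rows are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mara (matnr TEXT PRIMARY KEY, spart TEXT);
CREATE TABLE marc (matnr TEXT, werks TEXT);
CREATE INDEX marc_wrk ON marc (werks);   -- analogue of MARC index WRK
CREATE TABLE mchb (matnr TEXT, werks TEXT, lgort TEXT, charg TEXT, clabs REAL);
CREATE INDEX mchb_key ON mchb (matnr, werks, lgort);
""")
conn.executemany("INSERT INTO mara VALUES (?, ?)",
                 [("M1", "01"), ("M2", "01"), ("M3", "02")])
conn.executemany("INSERT INTO marc VALUES (?, ?)",
                 [("M1", "P1"), ("M2", "P2"), ("M3", "P1")])
conn.executemany("INSERT INTO mchb VALUES (?, ?, ?, ?, ?)",
                 [("M1", "P1", "FG01", "B1", 5.0),
                  ("M3", "P1", "FG01", "B2", 7.0)])

# Step 1: resolve division + plant on the small tables first ...
# Step 2: ... then access the big stock table via its leading key fields.
rows = conn.execute("""
    SELECT b.matnr, b.charg, b.clabs
    FROM mara AS a
    JOIN marc AS m ON m.matnr = a.matnr
    JOIN mchb AS b ON b.matnr = m.matnr
                  AND b.werks = m.werks
                  AND b.lgort = 'FG01'
    WHERE a.spart = '01' AND m.werks = 'P1'
""").fetchall()
print(rows)  # only M1 qualifies: division 01, plant P1, storage location FG01
```

    The same idea carries over to the ABAP join: letting the database reach the stock table through material/plant/storage location avoids a full scan of it.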

  • RAID performance problem

    Hi! I'm using 2x Western Digital 80GB ATA100 7,200rpm 8MB drives in RAID 0, with a modified BIOS to set the stripe size to 16KB.
    The problem: performance. Sandra 2003 scores 30,000KB/s X( (my brother scores the same on a single 120GB drive on a plain ATA controller). People I've talked to get 42,000. ;(
    Also, the NTFS cluster size is currently 32KB. Before, it was 4KB, but the scores didn't change.
    I'm using the drivers downloaded from Live Update.
    What do you think I should do?
    Can I change the stripe size without losing all the data on the drives? (I would have to delete the array and set it up again.)

    Quote
    Originally posted by maesus
    1. Try a different stripe size [back up your data first]
    2. Use the VIA RAID Performance Patch
    3. Use the PCI Latency Patch
    4. Ask the person who scored 42000 and copy his/her settings over. :D
    Reinstalling Windows and all the programs I have now would take an entire day.
    I've installed both VIA patches. She uses a 32KB stripe size and a 64KB cluster size (that is very aggressive, since I'd lose 2GB).
    Do you know if there is a recent driver from Promise (not from MSI), and where do I download it? (On the Promise support site I don't know which model I should choose.)

  • Performance problem with Java stored procedure

    hi,
    I developed a Java class and stored it in Oracle 8.1.7.
    This class imports several other classes that are also stored in the database.
    It works, but the execution performance is disappointing; it's very slow. I guess that's because of the large number of classes that have to be loaded for my class to execute.
    I tried to increase the size of the Java pool (I set java_pool_size to 70MB in the init.ora), but the performance is not much better.
    Does anyone have an idea how to improve the performance of this class?
    In particular, is there a way to keep the Java objects used by my class permanently in memory?
    Thanks in advance
    bye
    [email protected]

    Hello again,
    I read the documentation of 9i and found some hints about the different TCP/IP socket handling between 9i and a standard JVM.
    There are some parts that give hints, but I couldn't find sample code showing how to handle the differences.
    Could anyone tell me how to handle the differences, or give some links to sample code?
    Thanks
    Detlev

  • Performance Problems with PSE13

    I store my picture files on a WD NAS (EX2) connected via a 1Mbit LAN to a router. After upgrading from PSE12 to PSE13, starting the Editor from the Organizer is unacceptably slow: it takes 3-4 minutes until the Editor is loaded, and 3 minutes more until the picture is loaded and can be edited. This is a problem that did not occur with PSE12. The WLAN connection from the desktop to the router is around 300Mbit as usual, and other programs are working fine.
    Can anyone help me? I'm really not amused, and the only solution for me is to uninstall PSE13 and use the older version.

    Development is aware of this issue. We are currently verifying some changes we have made to circumvent the Hotsync timeout problem. The root cause is actually a bug in Palm Desktop, not Oracle9i Lite. Please contact Oracle Support EMEA for updated information

  • Oracle Express performance problem

    Hi, I am facing performance issues with the following application in Oracle Express, using OFA as the front end. There are 9 dimensions in the cube and one measure. DB size is 18-20GB. The geography and time dimensions are dense; all others are composite. Leaf-level data for one month and one geography is only 50k rows. It is taking more than 1 hour to run the following command:
    allstat
    lmt geography to 'G036'
    lmt time to 'DEC06'
    value=na
    upd
    I have to NA the data before uploading new data. Could someone please suggest a better, faster way to NA the data?
    thanks very much.

    The fastest method would be to rename your geography and time dimension values
    to temp values (to be removed on your next EIF export/import),
    and then add them back and load, i.e.:
    mnt time rename 'DEC06' 'XDEC06'
    mnt geography rename 'G036' 'XG036'
    update
    mnt time add 'DEC06'
    mnt geography add 'G036'
    " No need to NA anything .. just load your data
    You could try deleting the members instead of renaming them, but that caused
    me some problems in the past, so I found the rename to work better.
    Mike

  • Performance problem after restoration

    hi
    When we tested restoring a database with a third-party tool, the restored DB seems slower. Selects are OK, but inserts have become slower, and when I try to do an import it is slower than before. How can I diagnose where exactly the problem is? I am not finding any errors or warnings in the logs or trace files, and exports are as fast as before. (This is Oracle 7.4.3 on Solaris.) We erased the OS and did the disaster recovery by restoring the file systems, software and database, but why has it become so slow? There are 2 more databases on the same server, and they are working fine after this test. What may be the reason?
    with regards
    ramya

    hi
    What I found is that log switching has become much less frequent during the import: before, a log switch happened about every 2 minutes during an import; now it is happening at 14-minute intervals. How can I find where exactly the problem is? If I recreate the redo logs and rollback segments, will that fix it?
    with regards
    ramya

  • Performance problems changing Transaction Attribute

    Hi,
    I am working with WebLogic 7.0. All my EJBs' Transaction Attribute is defined
    as "Required". I would like to change this attribute to "Never" or "Not Supported"
    for some of my query methods.
    Should this change improve the performance of these methods?
    In the tests I have made, instead of improving performance it has decreased.
    I know that WebLogic Server 7.0 has an optimization to improve the performance
    of Remote Method Interface calls within the server, which permits calling methods
    using pass by reference.
    Could the performance reduction previously mentioned, from changing the Transaction
    Attribute of methods to "Never" or "Not Supported", be related to this
    optimization in WebLogic?
    Is this optimization available only when the attribute is set to "Required"?
    Thanks

    Hi David,
    "David Catalán" <[email protected]> wrote in message
    news:3eedd1d5$[email protected]..
    > I am working with weblogic 7.0. All my EJB's Transaction Attribute value is
    > defined as "Required". I would like to change this attribute's value to
    > "Never" or "Not Supported" for some of my query methods.
    > Should this change improve the performance of these methods?
    No, it will not, at least for database operations, as it will be
    equivalent to autocommit mode, which is more expensive because
    the database will have to do rapid-fire one-statement transactions.
    In addition, it will involve container overhead connected with the
    necessity to suspend and resume ongoing TXs.
    > In the tests I have made, instead of improving performance it has decreased.
    No surprise.
    > I know that weblogic Server 7.0 has an optimization to improve the
    > performance of Remote Method Interface calls within the server, which
    > permits calling methods using pass by reference.
    > Could this performance reduction previously mentioned, by changing the
    > Transaction Attribute value of methods to "Never" or "Not Supported", be
    > related to this optimization included in weblogic?
    No, the remote call optimization is done without regard
    to transaction attributes.
    > Is this optimization available only when this attribute is set to "required"?
    No, it's not limited to "Required". It's available whenever an EJB and a
    client are packaged into one jar or ear.
    HTH.
    Regards,
    Slava Imeshev

  • Performance problem after upgrade from 9.2.0.6 to 9.2.0.8

    Hi,
    we upgraded our RDBMS from 9.2.0.6 to 9.2.0.8, and all our batch jobs became slow (plus 3 hours on the normal execution time of our queries).
    Any suggestions?
    Regards.
    tt

    I bet you just upgraded without any modification of parameters and/or the way of gathering statistics. :)
    I know it's very embarrassing and frustrating that Oracle sometimes gets slow right after an upgrade.
    But it's designed to be that way. Not to get slower, but to get faster.
    Enhancement is the only purpose of an upgrade, isn't it?
    But there is always a chance that you hit some negative side effects of that enhancement.
    For instance, the optimizer gets equipped with a new, powerful query transformation algorithm.
    But in very rare cases, it causes some queries to get slower.
    It has always been happening, and I believe it will also happen with 11g. :(
    The unfortunate thing is that there can be amazingly many reasons why Oracle gets slower after an upgrade.
    You need to search Metalink for similar issues and check statistics and execution plans as stated above.

  • Performance problem

    Hello All,
    The query below is part of a trigger. When I execute it directly, it takes a fraction of a second, but when I save data through Oracle Forms, which in turn invokes the trigger and then this query, it takes much longer.
    I put a trace on the session in which I save data through Forms, and with the help of tkprof I can see that this query is what takes the time:
    SELECT A.FPL_ID, A.PZUG_ID, A.PZV_KEY, A.EINBRUCHZEIT, A.STRECKE_ID,
           A.BST_REIHENFOLGE, COUNT(*), B.ZN
      FROM PV_BSTKETTE A, PPL_ZUG B
     WHERE A.PZUG_ID = B.PZUG_ID
     GROUP BY A.FPL_ID, A.PZUG_ID, A.PZV_KEY, A.EINBRUCHZEIT, A.STRECKE_ID,
              A.BST_REIHENFOLGE, B.ZN
    HAVING COUNT(*) > 1
    call     count  cpu   elapsed    disk   query  current  rows
    Parse        1  0.00     0.00       0       0        0     0
    Execute     19  0.00     0.00       0       0        0     0
    Fetch       19  9.74    27.13   10910  274360     1045     0
    total       39  9.74    27.13   10910  274360     1045     0
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: 90 (recursive depth: 1)
    Rows     Row Source Operation
          0  FILTER
    1570350   SORT GROUP BY
    1570331    NESTED LOOPS
     120593     TABLE ACCESS FULL PPL_ZUG
    1570331     TABLE ACCESS BY INDEX ROWID PV_BSTKETTE
    1690905      INDEX RANGE SCAN (object id 42088)
    Indexes on PPL_ZUG:
    INDEX_NAME     UNIQUENESS  COLUMN_NAME
    FK1_PZUG_I     NONUNIQUE   FPL_ID
    PZUG_PK        UNIQUE      PZUG_ID
    PZUG_UK        UNIQUE      FPL_ID
                   UNIQUE      ZN
    Indexes on PV_BSTKETTE:
    INDEX_NAME     UNIQUENESS  COLUMN_NAME
    PZVK_BSF_FK_I  NONUNIQUE   STRECKE_ID
                   NONUNIQUE   BST_REIHENFOLGE
    PZVK_BST_FK_I  NONUNIQUE   BST_ID
    PZVK_PK        UNIQUE      PZVK_ID
    PZVK_PZV_FK_I  NONUNIQUE   PZUG_ID
                   NONUNIQUE   PZV_KEY
    I would appreciate any pointers to reduce the time.
    Thanks, Raj

    call     count cpu      elapsed      disk     query      current    rows
    Parse       1  0.00      0.00        0          0         0          0
    Execute    19  0.00      0.00        0          0         0          0
    Fetch      19  9.74      27.13       10910      274360    1045       0
    total      39  9.74      27.13       10910      274360    1045       0
    call     count cpu       elapsed     disk     query       current   rows
    Parse        1  0.03    0.01         1          0          1         0
    Execute      1  0.00    0.00         0          0          0         0
    Fetch        1  0.67    12.58        1452       14445      55        0
    total        3  0.70    12.59        1453       14445     56         0
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 98
    See the difference in IO between your front end and now from SQL*Plus: your
    query above is getting fired from the forms trigger for each transaction. Why do you need to
    reinvent the wheel?
    SELECT owner
      FROM t
     WHERE owner = 'SYS'
     GROUP BY owner
    call     count  cpu  elapsed  disk  query  current  rows
    Parse        1  0.00     0.25     0      0        0     0
    Execute      1  0.00     0.00     0      0        0     0
    Fetch        2  0.00     0.03    13     15        0     1
    total        4  0.00     0.29    13     15        0     1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 57 (SCOTT)
    Rows  Row Source Operation
       1  SORT GROUP BY NOSORT (cr=15 pr=13 pw=0 time=37817 us)
    3365  TABLE ACCESS FULL OBJ#(134026) (cr=15 pr=13 pw=0 time=38882 us)
    The same query above is called 19 times within a PL/SQL block:
    DECLARE
      l_owner t.owner%TYPE;
    BEGIN
      FOR i IN 1..19
      LOOP
        SELECT owner INTO l_owner
          FROM t
         WHERE owner = 'SYS'
         GROUP BY owner;
      END LOOP;
    END;
    call     count  cpu  elapsed  disk  query  current  rows
    Parse        1  0.01     0.09     0     18        0     0
    Execute      1  0.00     0.00     0      0        0     1
    Fetch        0  0.00     0.00     0      0        0     0
    total        2  0.01     0.09     0     18        0     1
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: 57 (SCOTT)
    Elapsed times include waiting on following events:
    Event waited on                       Times Waited  Max. Wait  Total Waited
    SQL*Net message to client                        1       0.00          0.00
    SQL*Net message from client                      1       5.50          5.50
    SELECT OWNER
      FROM T
     WHERE OWNER = 'SYS'
     GROUP BY OWNER
    call     count  cpu  elapsed  disk  query  current  rows
    Parse        1  0.00     0.00     0      0        0     0
    Execute     19  0.00     0.00     0      0        0     0
    Fetch       19  0.03     0.03     0    285        0    19
    total       39  0.03     0.03     0    285        0    19
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: 57 (SCOTT) (recursive depth: 1)
    Rows   Row Source Operation
       19  SORT GROUP BY NOSORT (cr=285 pr=0 pw=0 time=30790 us)
    63935  TABLE ACCESS FULL T (cr=285 pr=0 pw=0 time=64246 us)
    Now you can see the IO difference. Can't you call this piece of SQL just once? I don't know much about your front-end
    architecture.
    Khurram
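    Khurram's point, that firing the identical statement once per transaction multiplies the IO while firing it once and reusing the result does not, can be sketched generically (SQLite in Python; the table and the 19-iteration loop are illustrative stand-ins, not the original schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (owner TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("SYS",)] * 5 + [("SCOTT",)] * 3)

executions = 0

def owners_with_duplicates():
    """The expensive aggregate -- executed once per call, like the trigger."""
    global executions
    executions += 1
    return conn.execute(
        "SELECT owner FROM t GROUP BY owner HAVING COUNT(*) > 1").fetchall()

# Anti-pattern: re-run the identical statement for each of 19 transactions.
for _ in range(19):
    owners_with_duplicates()
print(executions)          # 19 executions -> 19x the IO

# Better: run it once and reuse the result for every transaction.
executions = 0
cached = owners_with_duplicates()
for _ in range(19):
    result = cached        # no extra round trip to the database
print(executions)          # 1 execution
```

    The absolute numbers are toy-sized, but the ratio is the same one visible in the tkprof output above: 19 fetches with 285 buffer gets versus a single execution.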

  • Performance problems with combobox and datasource

    We have a performance problem when connecting a DataTable object (or similar) to the DataSource property of a ComboBox. Below you find the source code. The SQL statement reads about 40,000 rows, and the result (all 40,000) should be listed in the combobox. The process takes about 30 seconds to finish. Any suggestions?
    Dim ds As New DataSet
    strSQL = "Select * from am.city"
    conn = New Oracle.DataAccess.Client.OracleConnection(Configuration.ConfigurationSettings.AppSettings("conORA"))
    comm = New Oracle.DataAccess.Client.OracleCommand(strSQL)
    da = New Oracle.DataAccess.Client.OracleDataAdapter(strSQL, conn)
    conn.Open()
    da.Fill(ds)
    conn.Close()
    Dim dt As New DataTable
    dt = ds.Tables(0)
    ComboBox1.DataSource = dt
    ComboBox1.ValueMember = dt.Columns("id").ColumnName
    ComboBox1.DisplayMember = dt.Columns("city").ColumnName

    But how long does it take to fill the DataTable?
    I can fill a 40,000-row DataTable in under 4 seconds.
    DataBinding a combo box to that many rows is pretty expensive and not normally recommended.
    David
    Dim strConnection As String = "Data Source=oracle;User ID=scott;Password=tiger;"
    Dim conn As OracleConnection = New OracleConnection(strConnection)
    conn.Open()
    Dim cmd As New OracleCommand("select * from (select * from all_objects union all select * from all_objects) where rownum <= 40000", conn)
    Dim ds As New DataSet()
    Dim da As New OracleDataAdapter(cmd)
    Dim begin As Date = Now
    da.Fill(ds)
    Console.WriteLine(ds.Tables(0).Rows.Count & " rows loaded in " & Now.Subtract(begin).TotalSeconds & " seconds")
    outputs
    40000 rows loaded in 3.734375 seconds

  • Performance Problem with File Adapter using FTP Connection

    Hi All,
    I have a pool of 19 interfaces that send data from R/3 using the RFC Adapter, and these interfaces generate 30 TXT files on a target server. I'm using File Adapters as the receiver communication channel, and it's generating a serious performance problem. In the File Adapter I'm using an FTP connection in "Permanently" connection mode. Does anybody know whether the permanent connection is the cause of the performance problem?
    These interfaces will run once a day with a total of 600 messages.
    We are still using a test server with few messages.

    Hi Regis,
    We also faced the same problem. What's happening is that when the FTP session is initiated by the file adapter, it is done from the XI server, so the memory of the server gets eaten up as well. Why don't you give 'per file transfer' a try?
    If the folder to which you are connecting is within your XI server's network, then you can mount (or map) that drive on the XI server and use it with the NFS protocol of the file adapter, thereby increasing the performance.
    Cheers
    JK

  • Performance trouble - too much swap - not enough RAM

    hello,
    On an HP RX3600 (HP-UX 11.23, 8GB RAM, 2 Itanium CPUs) I have had SAP ECC 6.0 installed since the end of March 2007.
    I have 110-120 users in session.
    I have some performance problems.
    2 weeks ago, my average dialog response time was very good (less than 400ms),
    and the Ø FE Network Time (ms) was near 60 000.
    Now I have poor performance:
    response time near 1000ms and Ø FE Network Time (ms) near 200 000.
    I do not know why this increased, or what "Ø FE Network Time (ms)" means.
    It seems my performance problems are caused by too much swapping.
    I have 24 dialog work processes. The abap/buffersize is 450MB.
    I have saved memory by changing some parameters:
    PGA_AGGREGATE_TARGET -> decreased to 536870912 (500MB)
    DB_CACHE_SIZE -> decreased to 1610612736 (saving another 500MB)
    SHARED_POOL_SIZE -> decreased to 629145600 (600MB)
    JAVA_POOL_SIZE -> decreased to 1048576 (1MB) - not used
    Before I buy another 4GB of RAM, do you have an idea?
    Do you know of an HP-UX bug involving bad memory liberation?
    Thanks for your help
    Laurent

    Hi,
    Finally, HP has today concluded that we need to add additional RAM.
    We are upset with this.
    Meanwhile, we have seen a little improvement in performance after stopping the Java stack.
    Previously I also fought a lot with parameters, but with no result.
    I assume you have set the parameters correctly; no need to do further research on this.
    If you are still experiencing very bad performance, there is no other way: you also need to invest in memory.
    Have you tried the SAP Memory Analyzer?
    I have just downloaded it and still need to install it. You can try this:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/webcontent/uuid/8070021f-5af6-2910-5688-bde8f4fadf31 [original link is broken]
    Regards,
    MSR.
    Note: Points always encourage me to reply !!

  • Ajax4JSF Performance Issues

    My team and I have used Ajax4JSF extensively in our application, and very recently we have been observing performance problems.
    Here is the situation.
    We have the following dropdown defined with the Ajax4JSF onchange event:
    <h:panelGrid columns="3" styleClass="detail" columnClasses="label">
                     <h:outputText value="Manufacturer" />
                     <h:selectOneMenu id="manufList" value="#{manufacturerBean.selectedManufacturer}" > 
                       <f:selectItem itemLabel="New" itemValue="New" />
                  <f:selectItems value="#{manufacturerBean.manufacturerList}" />  
                  <a4j:support action="#{manufacturerBean.loadManufacturerDetails}" event="onchange" reRender="manufName,manufDescription,manufSource,btnSave,btnDelete" />          
               </h:selectOneMenu>           
             </h:panelGrid>
    The performance issue: whenever we pick a different value from the dropdown list, the Ajax4JSF onchange event is fired and the action method "loadManufacturerDetails" is invoked in the backing bean. That is the expected behaviour. But before invoking the onchange action method, it invokes the f:selectItems value binding method, i.e. getManufacturerList in the backing bean, which goes through Spring & Hibernate and hits the database, firing the query to fetch the list of manufacturers every time the onchange event occurs. We were very surprised by the wait time to populate the other fields in the form.
    Any pointers/suggestions will be highly appreciated
    Regards
    Bansi

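    A common remedy for the repeated getManufacturerList round trips described above is to load the list once and cache it in the bean for the view's lifetime. A minimal sketch of that lazy-caching pattern, in Python for illustration only: the real fix would live in the Java backing bean, and load_from_db is a hypothetical stand-in for the Spring/Hibernate call:

```python
class ManufacturerBean:
    """Backing-bean sketch: the value binding hits the DB only once."""

    def __init__(self, load_from_db):
        self._load_from_db = load_from_db   # stand-in for Spring/Hibernate
        self._manufacturer_list = None      # not loaded yet

    @property
    def manufacturer_list(self):
        # Lazy cache: the first access queries the database; every later
        # access (e.g. each Ajax onchange re-render) reuses the result.
        if self._manufacturer_list is None:
            self._manufacturer_list = self._load_from_db()
        return self._manufacturer_list


calls = []
def fake_db_query():
    calls.append(1)                         # count DB round trips
    return ["Acme", "Globex", "Initech"]    # hypothetical manufacturers

bean = ManufacturerBean(fake_db_query)
for _ in range(5):                          # five onchange events
    options = bean.manufacturer_list
print(len(calls), options)
```

    In the JSF case the equivalent is a session- or view-scoped bean whose getter populates a private field on first use, so f:selectItems no longer triggers a query on every request.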

  • Restrict system from doing re-confirmation of all line items in SO

    Hello SDNs,
    When we change the quantity of one line item, the system re-confirms all line items.
    When there are multiple line items, this causes a performance problem. We want to change this so that only the changed line item is re-confirmed. Please suggest how to move ahead.
    Regards
    Hari

    Hi,
    what you are willing to implement is not a supported functionality in the current Yard management.
    If this is not yet the case, I would encourage you to open a SAP Customer message so that your question gets checked and analysed at development support.
    BR
    Alain
