Performance of Transfer Statement

Hi,
  I'm working on an MDMP to Unicode conversion project. My job is primarily to enable custom ABAP code for Unicode syntax.
    I have a generic program which takes the names of tables and fields and the destination path from a file and extracts those table fields to the destination share using the TRANSFER statement.
   This program extracts characters of multiple languages into a single file. In the original program there is a single SELECT - ENDSELECT loop containing a TRANSFER statement.
   To accommodate different languages in the same file, I'm now collating the data belonging to one language into internal tables and writing the content to the file with a different code page each time, opening and appending to the same file. The approach works well, but I'm facing performance issues.
   In the modified version the TRANSFER statement takes more time than the TRANSFER statement in the original program. I have no clue why. Can someone explain this change in the behaviour of the TRANSFER statement?
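   The per-language append described above looks roughly like this (a simplified sketch; the file name, code pages and internal table names are illustrative, not the actual program):
   * Simplified sketch of the per-code-page append; names and code pages are illustrative.
   DATA: dsn            TYPE string VALUE '/tmp/extract.txt',
         gt_lines_lang1 TYPE TABLE OF string,   " e.g. Latin-1 data
         gt_lines_lang2 TYPE TABLE OF string.   " e.g. Shift-JIS data
   FIELD-SYMBOLS: <lv_line> TYPE string.

   * First language block creates the file with its code page.
   OPEN DATASET dsn FOR OUTPUT IN LEGACY TEXT MODE CODE PAGE '1100'.
   LOOP AT gt_lines_lang1 ASSIGNING <lv_line>.
     TRANSFER <lv_line> TO dsn.
   ENDLOOP.
   CLOSE DATASET dsn.

   * The next language block is appended with a different code page.
   OPEN DATASET dsn FOR APPENDING IN LEGACY TEXT MODE CODE PAGE '8000'.
   LOOP AT gt_lines_lang2 ASSIGNING <lv_line>.
     TRANSFER <lv_line> TO dsn.
   ENDLOOP.
   CLOSE DATASET dsn.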

Hi Vijay,
   Let me explain in detail.
  If I have a TRANSFER statement like this, it takes approximately 12 seconds to download nearly 50 MB of content:
   SELECT * FROM (extract_info-table) INTO <tblwa>
     WHERE (where_clause).
     MOVE <tblwa> TO o_string.
     TRANSFER o_string TO file.
   ENDSELECT.
If I code it this way:
  SELECT * INTO TABLE <table>
    FROM (extract_info-table)
    WHERE (where_clause).

  PERFORM download_file TABLES <table>
                        USING file.

  FORM download_file TABLES data_tab
                     USING file.
    LOOP AT data_tab.
      TRANSFER data_tab TO file.
    ENDLOOP.
  ENDFORM.
it takes about 10 times longer (~120 seconds) to download the same content.
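One thing that might be worth checking (a sketch only, not a verified explanation of the 10x difference): the LOOP AT with a header line copies each row into the work area before every TRANSFER, while the original SELECT ... ENDSELECT streamed rows straight into the work area. Looping over the already-selected table with a field symbol keeps the array fetch but avoids the extra per-row copy. <ls_row> and lv_line below are illustrative; the other names are from the code above, and the rows must be character-like, as the Unicode TRANSFER/MOVE already requires.
  * Sketch only - not a verified fix.
  FIELD-SYMBOLS: <ls_row> TYPE any.
  DATA: lv_line TYPE string.

  SELECT * INTO TABLE <table>
    FROM (extract_info-table)
    WHERE (where_clause).

  LOOP AT <table> ASSIGNING <ls_row>.
    lv_line = <ls_row>.              " same conversion as MOVE <tblwa> TO o_string
    TRANSFER lv_line TO file.
  ENDLOOP.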

Similar Messages

  • Transfer State from one WAD to another

    Hello friends.
    I have trouble using the command TRANSFER STATE in BW 7.0.
    I need to transfer the navigation state of one dropdown box from WAD 1 to WAD 2. I already do that, but the problem is that when the second WAD is open, the data providers that exist in the WAD are not being filtered by the characteristic of the dropdown box.
    I'm using a variable in the dropdown box. I'm using the same variable in both WADs (and in both dropdown boxes), but the data providers of WAD 2 are not being filtered when I transfer the status.
    How can I get the data providers filtered with the value of the dropdown box on transfer state?
    Thanks in advance.

    Hi,
    The command TRANSFER_STATE copies the items / data providers with the same logical name to the target web template (if you have chosen to transfer them in the command settings). Meaning that if you have a data provider DP_1 in both web templates, DP_1 in the target web template will have exactly the same state (and therefore also the same filter values, drilldown, and so on).
    So in your case TRANSFER_STATE should be done (if this is really the correct scenario) for the data provider and not for the item, as the dropdown will always reflect the state of the underlying data provider.
    What you want to achieve sounds more like an RRI (jump to another web template while transferring all the filter values, variables, ...).
    Heike

  • Problem saving "space" in ABAP TRANSFER statement

    Hello all,
          I have an internal table that I loop through, writing the data to a file using the TRANSFER statement. Here is an example of my code:
    DATA: l_xml_out TYPE string.
    DATA: BEGIN OF itab_out OCCURS 0,
            record(some length) TYPE c,
          END OF itab_out.
    Here is example itab_out data:
    RECORD1:
    " Bon Appetit "
    RECORD2:
    "Managment Co"
    LOOP AT itab_out
                  INTO l_xml_out.
            TRANSFER l_xml_out TO "somefile".
          ENDLOOP.
    When I transfer to the file, record1 is appended to record2, which is fine, but I am losing the space: I want "Bon Appetit Managment Co" but I get "Bon AppetitManagment Co"! (The space is gone.) How do I preserve the space? I did research on SDN but could not get my problem resolved. Please provide any feedback you can.
    Thanks.
    Mithun

    >
    Mithun Dha wrote:
    > Sorry, I might not have provided you all information. I know concatenate but my internaltable has like 20000 records and we are looping through those records into "string" type data variable and transferring to file. The internal table records I gave in my posting was just an example. Thanks for your quick feedback.
    >
    > Mithun.
    So, can't you concatenate into a string type? Yes, you have to concatenate before the TRANSFER, even for 20,000 records.
    data: l_text(20) type c,
          l_text1(20) type c,
          l_string type string.
    l_text = 'This is text'.
    l_text1 = 'This is text1'.
    CONCATENATE l_text l_text1 into l_string SEPARATED BY space.
    Write the CONCATENATE within the loop and then TRANSFER the string to the dataset.
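    Read that way, the loop could look roughly like this (a sketch; the record length, the dataset name and the single TRANSFER at the end are assumptions about the intended result):
    * Sketch: build the string with explicit separators inside the loop, then TRANSFER.
    * The record length (50) and the dataset name are illustrative assumptions.
    DATA: BEGIN OF itab_out OCCURS 0,
            record(50) TYPE c,
          END OF itab_out.
    DATA: l_xml_out TYPE string,
          lv_file   TYPE string VALUE '/tmp/somefile.xml'.

    OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    LOOP AT itab_out.
      IF l_xml_out IS INITIAL.
        l_xml_out = itab_out-record.
      ELSE.
        " CONCATENATE ignores trailing blanks of C fields, so add the space explicitly.
        CONCATENATE l_xml_out itab_out-record INTO l_xml_out SEPARATED BY space.
      ENDIF.
    ENDLOOP.
    TRANSFER l_xml_out TO lv_file.
    CLOSE DATASET lv_file.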
    Edited by: Sampath Kumar on Nov 10, 2009 8:04 AM

  • How to improve performance of insert statement

    Hi all,
    How to improve performance of insert statement
    I am inserting 100,000 (1 lakh) records into a table and it takes around 20 minutes.
    Please help.
    Thanks in advance.

    I tried :
    SQL> create table test as select * from dba_objects;
    Table created.
    SQL> delete from test;
    3635 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from dba_extents where segment_name='TEST';
    COUNT(*)
    4
    SQL> insert /*+ APPEND */ into test select * from dba_objects;
    3635 rows created.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from dba_extents where segment_name='TEST';
    COUNT(*)
    6
    The direct-path (APPEND) insert loads above the segment's high-water mark and does not reuse the space freed by the DELETE, which is why the extent count grew from 4 to 6.
    Cheers, Bhupinder

  • Perform and Form statements

    Hello,
    Can anyone give examples of using the PERFORM and FORM statements? What do these statements actually do?
    thanks.

    See this sample for PERFORM ... USING ... CHANGING:
    DATA : c1 TYPE i, c2 TYPE i, res TYPE i.
    c1 = 1.
    c2 = 2.
    PERFORM sum USING c1 c2 CHANGING res.
    WRITE:/ res.
    *& Form sum
    form sum using p_c1 p_c2 changing value(p_res).
    p_res = p_c1 + p_c2.
    endform. " sum
    Note the difference between the above and below perform.
    DATA : c1 TYPE i, c2 TYPE i, res TYPE i.
    c1 = 1.
    c2 = 2.
    DATA: subroutinename(3) VALUE 'SUM'.
    PERFORM (subroutinename) IN PROGRAM Y_JJTEST1 USING c1 c2 CHANGING res.
    WRITE:/ res.
    *& Form sum
    form sum using p_c1 p_c2 changing value(p_res).
    p_res = p_c1 + p_c2.
    endform. " sum
    Another sample of a simple PERFORM:
    PERFORM HELP_ME.
    FORM HELP_ME.
    ENDFORM.
    ... TABLES itab1 itab2 ...
    TYPES: BEGIN OF ITAB_TYPE,
             TEXT(50),
             NUMBER TYPE I,
           END OF ITAB_TYPE.
    DATA:  ITAB TYPE STANDARD TABLE OF ITAB_TYPE WITH
                     NON-UNIQUE DEFAULT KEY INITIAL SIZE 100,
           BEGIN OF ITAB_LINE,
             TEXT(50),
             NUMBER TYPE I,
           END OF ITAB_LINE,
           STRUC like T005T.
    PERFORM DISPLAY TABLES ITAB
                    USING  STRUC.
    FORM DISPLAY TABLES PAR_ITAB STRUCTURE ITAB_LINE
                 USING  PAR      like      T005T.
      DATA: LOC_COMPARE LIKE PAR_ITAB-TEXT.
      WRITE: / PAR-LAND1, PAR-LANDX.
      LOOP AT PAR_ITAB WHERE TEXT = LOC_COMPARE.
      ENDLOOP.
    ENDFORM.
    Hope this helps.
    Reward points if this helps u.
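    For comparison, here is a sketch of the same table hand-over without the obsolete TABLES parameter, passing a typed table by reference via USING (names are illustrative, not from the original example):
    TYPES: BEGIN OF ty_line,
             text(50)  TYPE c,
             number    TYPE i,
           END OF ty_line,
           ty_itab TYPE STANDARD TABLE OF ty_line WITH NON-UNIQUE DEFAULT KEY.

    DATA: gt_itab TYPE ty_itab.

    PERFORM display_typed USING gt_itab.

    * USING without VALUE() passes by reference, so no copy of the table is made.
    FORM display_typed USING pt_itab TYPE ty_itab.
      FIELD-SYMBOLS: <ls_line> TYPE ty_line.
      LOOP AT pt_itab ASSIGNING <ls_line>.
        WRITE: / <ls_line>-text, <ls_line>-number.
      ENDLOOP.
    ENDFORM.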

  • Unable to perform call transfer or call park for an outbound call via SIP Trunk (SKYPE)

    We have configured the SIP trunk & SIP profile and can successfully make outbound calls through the SIP trunk (SKYPE). However, we are not able to perform call transfer or call park once the call is connected.
    The scenario is:
    A calls a phone number via the SIP trunk; when the call is established, A performs a call transfer to B. After the call transfer, the call drops and phone B shows the error code "Temp Fail".
    When I select "enable MTP" on the SIP trunk, we are able to call transfer and call park, but it limits the number of call sessions to 1.

    You are probably running into some sort of codec issue, i.e. your phone is G.711 and the trunk is G.729. You will need to transcode the call at some point.

  • Unable to perform call transfer & call park through SIP Trunk (SKYPE)

    The Scenario is:
    I have set up a SIP trunk to SKYPE and we are able to make outbound calls to a number via the SIP trunk.
    After the call is established, when we try to make a call transfer, the call drops and the phone at the other end shows the error "Temp Fail".
    I tried to "enable MTP" on the SIP trunk and we are then able to perform call transfer, but it limits the call sessions to 1.
    Is anyone facing the same issue?

    An MTP is needed to invoke supplementary functions like hold, transfer, etc. Make sure that MTP is checked on the SIP trunk, that an MTP is assigned to the MRGL of the device pool on the SIP trunk, and that it has sufficient resources.
    HTH
    Manish

  • Replace B2BUA with a new one, without losing transfer states

    I'm having some issues with replacing my B2BUA for transfer purposes.
    I have a link between 2 participants using a B2BUA. I want to transfer one of these participants into another B2BUA, and that new B2BUA will replace the previous one. I can accomplish this successfully, without any trouble, by using self-transfer and the B2BUA utilities.
    But when the link is established in the B2BUA between two participants and I create a new B2BUA, the participants can still talk until the new B2BUA is established (this is the issue).
    I don't lose the link of the participant in call leg 1, since I'm using a self-transfer that leaves the call state as incoming. But if I call BeginTerminate() on the 1st B2BUA while establishing the 2nd B2BUA, then I lose the transfer states of the first call,
    and I'm using this event state to tell me when a transfer has completed successfully.
    How can I cancel the stream between the two participants in the first B2BUA as soon as I begin to establish the second B2BUA?

    If either leg of the B2B is terminated, the B2B call is terminated. In a transfer scenario the leg participating in the self-transfer will end before the transfer itself is completed, leading to an OperationFailureException on EndTransfer.
    There is really no way to avoid this, so you should just expect it.
    Here is a snip from one of my test programs:
     private void InitiateSelfTransfer()
     {
         Console.WriteLine("Initiating self-transfer on p2p call...");
         // Copy inCall since we will replace it with the new inCall prior to the completion of the transfer.
         AudioVideoCall p2pOldInCall = (AudioVideoCall)inCall;
         // Pass a reference to this instance in the Context so the new inbound call can reattach to this session.
         p2pOldInCall.ApplicationContext = this;
         p2pOldInCall.TransferStateChanged += new EventHandler<TransferStateChangedEventArgs>(oldInCall_TransferStateChanged);
         CallTransferOptions cto = new CallTransferOptions(CallTransferType.Attended);
         // Execute the transfer.
         Console.WriteLine("Beginning Self-Transfer...");
         try
         {
             p2pOldInCall.BeginTransfer(p2pOldInCall, cto,
                 ar =>
                 {
                     Console.WriteLine("Self-Transfer Completed...");
                     try
                     {
                         p2pOldInCall.EndTransfer(ar);
                         Console.WriteLine("Self-Transfer was successful!");
                         SendTransferNotification(200, "Succeeded at self-transfer");
                     }
                     // In this scenario the BackToBackCall seems to always terminate the call before we are made aware the transfer is complete.
                     // This operation has to execute as Attended, so we simply catch the exception generated; it doesn't seem to cause us any problems.
                     catch (OperationFailureException)
                     {
                         Console.WriteLine("Exception ignored on self-transfer...");
                         SendTransferNotification(487, "Call already terminated...");
                     }
                     catch (RealTimeException ex)
                     {
                         SendTransferNotification(503, "Failed on self-transfer of inbound leg");
                         Console.WriteLine(ex);
                     }
                 },
                 null);
         }
         catch (InvalidOperationException iOpEx)
         {
             Console.WriteLine("Invalid Operation Exception: " + iOpEx.ToString());
         }
     }

  • HT1430 I do a hard reset of my iPhone every so often because I feel it improves performance, but Apple states 'Reset ONLY if the device is not responding'. Forget about my OCD on this, but is there any reason Apple stresses ONLY doing this if not responding?

    I do a hard reset of my iPhone every so often because I feel it improves performance, but Apple states 'Reset ONLY if the device is not responding'. Forget about my OCD on this, but is there any reason Apple stresses ONLY doing this if the device is not responding?

    deggie wrote:
    Because it is more far-reaching than just turning the phone off and back on, it completely takes out anything in volatile memory, etc. I do reset mine if I notice problems such as no email coming in, text message issues, etc. but the same thing could probably be accomplished by just turning the phone off and on in your case.
    Thanks deggie, I think I get it now. Volatile memory (I had to look it up!) is memory that is lost on loss of power, so it's as you say... I'm not achieving anything on a normally functioning iPhone as on-off achieves the same result!... Thanks

  • Performance issue with statement

    This is the same as my other thread but with everything formatted.
    I'm having a lot of issues trying to tune this statement. I have added some new indexes and even moved existing indexes to a 32k tablespace. The execution plan has improved, but when I execute the statement the data never returns. I can see where my bottleneck is, but I'm lost on what else I can do to improve the performance.
    STATEMENT:
    SELECT DISTINCT c.oprclass, a.business_unit, i.descr, a.zsc_load,
                    b.ship_to_cust_id, b.zsc_load_status, f.ld_cnt,
                    b.zsc_mill_release, b.address_seq_num, d.name1,
                    e.address1 || ' - ' || e.city || ', ' || e.state || '  '
                    || e.postal
               FROM ps_zsc_ld a,
                    ps_zsc_ld_seq b,
                    ps_sec_bu_cls c,
                    ps_customer d,
                    ps_set_cntrl_group g,
                    ps_rec_group_rec r,
                    ps_bus_unit_tbl_fs i,
                    (SELECT   business_unit, zsc_load, COUNT (*) AS ld_cnt
                         FROM ps_zsc_ld_seq
                     GROUP BY business_unit, zsc_load) f,
                    (SELECT *
                       FROM ps_cust_address ca
                      WHERE effdt =
                               (SELECT MAX (effdt)
                                  FROM ps_cust_address ca1
                                 WHERE ca.setid = ca1.setid
                                   AND ca.cust_id = ca1.cust_id
                                   AND ca.address_seq_num = ca1.address_seq_num
                                   AND ca1.effdt <= SYSDATE)) e
              WHERE a.business_unit = b.business_unit
                AND a.zsc_load = b.zsc_load
                AND r.recname = 'CUSTOMER'
                AND g.rec_group_id = r.rec_group_id
                AND g.setcntrlvalue = a.business_unit
                AND d.setid = g.setid
                AND b.ship_to_cust_id = d.cust_id
                AND e.setid = g.setid
                AND b.ship_to_cust_id = e.cust_id
                AND b.address_seq_num = e.address_seq_num
                AND a.business_unit = f.business_unit
                AND a.zsc_load = f.zsc_load
                AND a.business_unit = c.business_unit
                AND a.business_unit = i.business_unit;EXECUTION PLAN:
    Plan
    SELECT STATEMENT  CHOOSECost: 1,052  Bytes: 291  Cardinality: 1                                                              
         25 SORT UNIQUE  Cost: 1,052  Bytes: 291  Cardinality: 1                                                         
              24 SORT GROUP BY  Cost: 1,052  Bytes: 291  Cardinality: 1                                                    
                   23 FILTER                                               
                        19 NESTED LOOPS  Cost: 1,027  Bytes: 291  Cardinality: 1                                          
                             17 NESTED LOOPS  Cost: 1,026  Bytes: 279  Cardinality: 1                                     
                                  15 NESTED LOOPS  Cost: 1,025  Bytes: 263  Cardinality: 1                                
                                       12 NESTED LOOPS  Cost: 1,024  Bytes: 227  Cardinality: 1                           
                                            10 NESTED LOOPS  Cost: 1,023  Bytes: 28,542  Cardinality: 134                      
                                                 7 HASH JOIN  Cost: 60  Bytes: 134,101  Cardinality: 803                 
                                                      5 NESTED LOOPS  Cost: 49  Bytes: 5,175  Cardinality: 45            
                                                           3 NESTED LOOPS  Cost: 48  Bytes: 1,230,725  Cardinality: 12,955       
                                                                1 TABLE ACCESS FULL SYSADM.PS_CUST_ADDRESS Cost: 20  Bytes: 3,465  Cardinality: 45 
                                                                2 INDEX RANGE SCAN UNIQUE SYSADM.TEST3 Cost: 1  Bytes: 5,130  Cardinality: 285 
                                                           4 INDEX UNIQUE SCAN UNIQUE SYSADM.PS_REC_GROUP_REC Bytes: 20  Cardinality: 1       
                                                      6 INDEX FAST FULL SCAN NON-UNIQUE SYSADM.PS0CUSTOMER Cost: 10  Bytes: 252,460  Cardinality: 4,855            
                                                 9 TABLE ACCESS BY INDEX ROWID SYSADM.PS_ZSC_LD_SEQ Cost: 2  Bytes: 46  Cardinality: 1                 
                                                      8 INDEX RANGE SCAN UNIQUE SYSADM.TEST7 Cost: 1  Cardinality: 1            
                                            11 INDEX UNIQUE SCAN UNIQUE SYSADM.PS_ZSC_LD Bytes: 14  Cardinality: 1                      
                                       14 TABLE ACCESS BY INDEX ROWID SYSADM.PS_BUS_UNIT_TBL_FS Cost: 2  Bytes: 36  Cardinality: 1                           
                                            13 INDEX UNIQUE SCAN UNIQUE SYSADM.PS_BUS_UNIT_TBL_FS Cardinality: 1                      
                                  16 INDEX FULL SCAN UNIQUE SYSADM.PS_SEC_BU_CLS Cost: 2  Bytes: 96  Cardinality: 6                                
                             18 INDEX RANGE SCAN UNIQUE SYSADM.PS_ZSC_LD_SEQ Cost: 1  Bytes: 12  Cardinality: 1                                     
                        22 SORT AGGREGATE  Bytes: 31  Cardinality: 1                                          
                             21 FIRST ROW  Cost: 2  Bytes: 31  Cardinality: 1                                     
                                   20 INDEX RANGE SCAN (MIN/MAX) UNIQUE SYSADM.PS_CUST_ADDRESS Cost: 2  Cardinality: 5,364
     TRACE INFO:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.22       0.24          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1   1208.24    1179.86         92  221319711          0           0
    total        3   1208.46    1180.11         92  221319711          0           0
    Misses in library cache during parse: 1
    Optimizer mode: CHOOSE
    Parsing user id: 81 
    Rows     Row Source Operation
          0  SORT UNIQUE (cr=0 r=0 w=0 time=0 us)
          0   SORT GROUP BY (cr=0 r=0 w=0 time=0 us)
          0    FILTER  (cr=0 r=0 w=0 time=0 us)
          0     NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0      NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0       NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0        NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0         NESTED LOOPS  (cr=0 r=0 w=0 time=0 us)
          0          HASH JOIN  (cr=0 r=0 w=0 time=0 us)
    2717099           NESTED LOOPS  (cr=221319646 r=92 w=0 time=48747178172 us)
    220447566            NESTED LOOPS  (cr=872143 r=92 w=0 time=10965565169 us)
       4590             TABLE ACCESS FULL OBJ#(15335) (cr=99 r=92 w=0 time=58365 us)
    220447566             INDEX RANGE SCAN OBJ#(2684506) (cr=872044 r=0 w=0 time=2533034831 us)(object id 2684506)
    2717099            INDEX UNIQUE SCAN OBJ#(583764) (cr=220447568 r=0 w=0 time=23792811449 us)(object id 583764)
          0           INDEX FAST FULL SCAN OBJ#(15319) (cr=0 r=0 w=0 time=0 us)(object id 15319)
          0          TABLE ACCESS BY INDEX ROWID OBJ#(735431) (cr=0 r=0 w=0 time=0 us)
          0           INDEX RANGE SCAN OBJ#(2684517) (cr=0 r=0 w=0 time=0 us)(object id 2684517)
          0         INDEX UNIQUE SCAN OBJ#(550855) (cr=0 r=0 w=0 time=0 us)(object id 550855)
          0        TABLE ACCESS BY INDEX ROWID OBJ#(11041) (cr=0 r=0 w=0 time=0 us)
          0         INDEX UNIQUE SCAN OBJ#(582984) (cr=0 r=0 w=0 time=0 us)(object id 582984)
          0       INDEX FULL SCAN OBJ#(583859) (cr=0 r=0 w=0 time=0 us)(object id 583859)
          0      INDEX RANGE SCAN OBJ#(2684186) (cr=0 r=0 w=0 time=0 us)(object id 2684186)
          0     SORT AGGREGATE (cr=0 r=0 w=0 time=0 us)
          0      FIRST ROW  (cr=0 r=0 w=0 time=0 us)
          0       INDEX RANGE SCAN (MIN/MAX) OBJ#(15336) (cr=0 r=0 w=0 time=0 us)(object id 15336)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      db file scattered read                         14        0.00          0.00
      direct path write                            3392        0.00          0.06
      db file sequential read                         8        0.00          0.00

    I had an index on that table, but that still was not where my bottleneck was showing, so I removed it. I have added the index back and it has clearly helped the execution plan.
    PLAN_TABLE_OUTPUT                                                                                                                           
    | Id  | Operation                           |  Name               | Rows  | Bytes | Cost (%CPU)|                                            
    |   0 | SELECT STATEMENT                    |                     |     1 |   291 |  1035   (1)|                                            
    |   1 |  SORT UNIQUE                        |                     |     1 |   291 |  1035   (1)|                                            
    |   2 |   SORT GROUP BY                     |                     |     1 |   291 |  1035   (1)|                                            
    |   3 |    FILTER                           |                     |       |       |            |                                            
    |   4 |     NESTED LOOPS                    |                     |     1 |   291 |  1010   (1)|                                            
    |   5 |      NESTED LOOPS                   |                     |     1 |   279 |  1009   (1)|                                            
    |   6 |       NESTED LOOPS                  |                     |     1 |   243 |  1008   (1)|                                            
    |   7 |        NESTED LOOPS                 |                     |     1 |   227 |  1006   (0)|                                            
    |   8 |         NESTED LOOPS                |                     |   135 | 28755 |  1005   (0)|                                            
    |   9 |          HASH JOIN                  |                     |   805 |   131K|    39   (0)|                                            
    |  10 |           HASH JOIN                 |                     |    45 |  5175 |    28   (0)|                                            
    |  11 |            TABLE ACCESS FULL        | PS_CUST_ADDRESS     |    45 |  3465 |    20   (0)|                                            
    |  12 |            NESTED LOOPS             |                     |  3398 |   126K|     7   (0)|                                            
    |  13 |             INDEX FAST FULL SCAN    | PS_REC_GROUP_REC    |     1 |    20 |     5   (0)|                                            
    |  14 |             INDEX RANGE SCAN        | TEST11              |  3398 | 61164 |     3   (0)|                                            
    |  15 |           INDEX FAST FULL SCAN      | PS0CUSTOMER         |  4855 |   246K|    10   (0)|                                            
    |  16 |          TABLE ACCESS BY INDEX ROWID| PS_ZSC_LD_SEQ       |     1 |    46 |     2   (0)|                                            
    |  17 |           INDEX RANGE SCAN          | PS0ZSC_LD_SEQ       |     1 |       |     1   (0)|                                            
    |  18 |         INDEX UNIQUE SCAN           | PS_ZSC_LD           |     1 |    14 |            |                                            
    |  19 |        INDEX FULL SCAN              | PS_SEC_BU_CLS       |     3 |    48 |     2   (0)|                                            
    |  20 |       TABLE ACCESS BY INDEX ROWID   | PS_BUS_UNIT_TBL_FS  |     1 |    36 |     2  (50)|                                            
    |  21 |        INDEX UNIQUE SCAN            | PS_BUS_UNIT_TBL_FS  |     1 |       |            |                                            
    |  22 |      INDEX RANGE SCAN               | PS_ZSC_LD_SEQ       |     1 |    12 |     1   (0)|                                            
    |  23 |     SORT AGGREGATE                  |                     |     1 |    31 |            |                                            
    |  24 |      FIRST ROW                      |                     |     1 |    31 |     2   (0)|                                            
    |  25 |       INDEX RANGE SCAN (MIN/MAX)    | PS_CUST_ADDRESS     |  5364 |       |     2   (0)|                                            
    ------------------------------------------------------------------------------------------------

  • How to improve Performance of the Statements.

    Hi,
    I am using Oracle 10g. My problem is that when I execute and fetch records from the database it takes a very long time. I have also created statistics, but with no benefit. What do I have to do to improve the performance of SELECT, INSERT, UPDATE and DELETE statements?
    Does it make any difference that I am using Windows XP with 1 GB RAM on the server machine and Windows XP with 512 MB RAM on the client machine?
    Please give me advice on improving performance.
    Thank you!

    > What and where to change parameters and values?
    Well, maybe my previous post was not clear enough, but if you want to keep your job, you shouldn't change anything else in the init parameters, and you shouldn't fall into Compulsive Tuning Disorder.
    Anyone who advises you to change some parameter to some value without any more information shouldn't be listened to.
    Nicolas.

  • 9.2.0.6 performs better if stats are deleted

    I see this behaviour with 90% of queries: deleting the schema stats makes them perform best, in some cases an improvement of 300%.
    The query was written for 9.2.0.6 and we never had 8i.
    I have modified the optimizer settings in init.ora but never got close to the way rule-based performs. What could be causing this?
    I can post the query with 3 scenarios if anyone wants to look.
    Thanks

    Check the explain plans for the queries with and without statistics; that will tell you what's causing the performance difference. You may end up having to rework some of the queries if you plan on switching to the cost-based optimizer. We have applications that have never used statistics; generating them degraded performance, so they were deleted. The application area decided they did not want to spend the time to rework the queries, so I guess they'll do it when rule-based goes away.
    Good luck.

  • Problem in transfer statement.

    Hi folks,
    I want to place the file on the application server using OPEN DATASET.
    My final output internal table gt_output has the fields
                   value, kndnr, perio, skunumber, perio.
    I used statements like:
    data: gv_string(120) type c.
    gv_string = p_file.
       open dataset p_file for output in text mode encoding default.
             loop at gt_output into wa_output.
               transfer gv_string to p_file.
                endloop.
    In the final internal table, the amount field has a currency data type.
    When I execute this, I get the error that wa_output-amount must be a character-type data object (C, N, D or STRING).
    Can you please give me a suggestion?
    regards
    Nag

    Hi Hemam,
    I faced a similar kind of problem a long time back. The solution is quite simple:
    the currency field can be formatted by
    WRITE wa_output-amount TO gv_amount CURRENCY 'AUS' ROUND 3 DECIMALS 1.
    Regards,
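    Applied to the posted loop, that could look roughly like this (a sketch; the component names/types and the currency-key field waers are assumptions, since the full structure of gt_output isn't shown):
    * Sketch: convert the currency amount to a character field before TRANSFER.
    * Component types and the currency key field WAERS are assumptions.
    TYPES: BEGIN OF ty_output,
             amount    TYPE p LENGTH 15 DECIMALS 2,
             kndnr     TYPE c LENGTH 10,
             perio     TYPE n LENGTH 7,
             skunumber TYPE c LENGTH 18,
             waers     TYPE c LENGTH 5,
           END OF ty_output.
    DATA: gt_output TYPE TABLE OF ty_output,
          wa_output TYPE ty_output,
          gv_amount TYPE c LENGTH 20,
          gv_string TYPE string.

    OPEN DATASET p_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    LOOP AT gt_output INTO wa_output.
      " WRITE ... TO produces a character representation of the currency amount.
      WRITE wa_output-amount TO gv_amount CURRENCY wa_output-waers.
      CONCATENATE gv_amount wa_output-kndnr wa_output-perio wa_output-skunumber
                  INTO gv_string SEPARATED BY ';'.
      TRANSFER gv_string TO p_file.
    ENDLOOP.
    CLOSE DATASET p_file.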

  • Why is my transfer statement not working for a string?

    I have a string field that I create via CALL TRANSFORMATION.
    I now want to download that string to my PC.
    I cannot use GUI_DOWNLOAD because it demands a table to download, and I have a string field, so I cannot use a table.
    So I am trying to use the TRANSFER statement.
    It returns sy-subrc = 0, but nothing is downloaded.
    Here is the sample code.
    DATA: lv_result_xml TYPE string.
    * Call to XSLT transformation
      TRY.
          CALL TRANSFORMATION zrcs_xfer_wrty_xslt
                SOURCE batchtime = filetime
                       saletab   = saletab
                RESULT XML lv_result_xml.
        CATCH cx_xslt_exception INTO xslt_error.
          xslt_message = xslt_error->get_text( ).
          RAISE xslt_error.
      ENDTRY.
    *types: begin of ty_xml_line,
    *data(256) type x,
    *end of ty_xml_line.
    *data: wa_xmlrec type ty_xml_line.
      OPEN DATASET zfile FOR OUTPUT IN BINARY MODE.
      TRANSFER lv_result_xml TO zfile. "LENGTH w_size.
    *loop at xml_tab into wa_xmlrec.
    *v_strlen = strlen( wa_xmlrec ).
    *transfer wa_xmlrec to v_serfile length v_strlen1.
    *endloop.
      CLOSE DATASET zfile.

    Looking at your other posts I don't know whether this is still an open issue. One thing you should note however: OPEN DATASET - TRANSFER  - CLOSE DATASET creates files on the application server, not on your workstation.
    Rgds,
    Mark
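    If the target really is the PC (frontend), one workaround for the "GUI_DOWNLOAD needs a table" restriction is to put the string into a one-line table and pass that to cl_gui_frontend_services=>gui_download (a sketch; the target path is illustrative, and lv_result_xml is the string from the posted code):
    * Sketch: download the XML string to the frontend via a one-line string table.
    DATA: lt_xml  TYPE TABLE OF string,
          lv_file TYPE string VALUE 'C:\temp\result.xml'.

    APPEND lv_result_xml TO lt_xml.

    cl_gui_frontend_services=>gui_download(
      EXPORTING
        filename = lv_file
      CHANGING
        data_tab = lt_xml
      EXCEPTIONS
        OTHERS   = 1 ).
    IF sy-subrc <> 0.
      MESSAGE 'Download to the frontend failed' TYPE 'I'.
    ENDIF.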

  • Will Materialized view log reduces the performance of DML statements on the master table

    Hi all,
    I need to refresh an on-demand, fast-refresh materialized view in Oracle 11gR2. For this purpose I created a materialized view log on the (non-partitioned) table, into which records are inserted at a rate of about 5,000/day, as follows:
    CREATE MATERIALIZED VIEW LOG ON NOTES NOLOGGING WITH PRIMARY KEY INCLUDING NEW VALUES;
    This table already has 2 million (20 lakh) records; will adding this MView log reduce DML performance on the table?
    Please guide me on this.

    Having the base table maintain a materialised view log will have an impact on the speed of DML statements - they are doing extra work, which will take extra time. A more sensible question would be to ask whether it will have a significant impact, to which the answer is almost certainly "no".
    5000 records inserted a day is nothing. Adding a view log to the heap really shouldn't cause any trouble at all - but ultimately only your own testing can establish that.
