5 Tables Used - Performance Improvement Help Required

Hello Experts,
I am joining 5 tables in a single query to fetch data; will this degrade performance?
Please find the query below. In the WHERE clause:
1) le.locker_entry_id is the PRIMARY KEY column
2) dc.object_type has an index created on it
3) a function-based index has been created on upper(dt.partner_device_type)
Can any other code improvements be made, or should a view be created?
Please suggest.
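For reference, the indexes from points 2 and 3 would have been created along these lines (the index names here are made up for illustration, not the actual names in the schema):
CREATE INDEX idx_dc_object_type ON digital_compatibility (object_type);
CREATE INDEX idx_dt_partner_device_upper ON device_type (UPPER(partner_device_type));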
SELECT count(*) FROM locker_entry le;
COUNT(*)              
1762                  
SELECT count(*) FROM digital_compatibility dc;
COUNT(*)              
227757    
SELECT count(*) FROM digital_encode_profile dep;
COUNT(*)              
48                    
SELECT count(*) FROM device_type dt;
COUNT(*)              
421                   
SELECT count(*) FROM digital_sku dsku;
COUNT(*)              
26037     
EXPLAIN PLAN FOR
SELECT
  /*+ INDEX(dep, DIGITAL_ENCODE_PROFILE_ID_PK) */
  DISTINCT le.locker_entry_id AS locker_entry_id,
  dep.delivery_type
FROM locker_entry le,
  digital_compatibility dc,
  digital_encode_profile dep,
  device_type dt,
  digital_sku dsku
WHERE le.sku_id                   = dsku.legacy_id
AND le.digital_package_id         = dsku.digital_package_id
AND dsku.digital_sku_id           = dc.object_id
AND dc.encode_profile_id          = dep.digital_encode_profile_id
AND dt.capability_set_id          = dc.capability_set_id
AND le.locker_entry_id           IN (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32)
AND dc.object_type                =:"SYS_B_0"
AND upper(dt.partner_device_type) =:33;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
| Id  | Operation                             | Name                         | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT                      |                              |     1 |   370 |   481   (3)|
|   1 |  HASH UNIQUE                          |                              |     1 |   370 |   481   (3)|
|   2 |   NESTED LOOPS                        |                              |     1 |   370 |   480   (3)|
|   3 |    NESTED LOOPS                       |                              |     1 |   336 |   479   (3)|
|*  4 |     HASH JOIN                         |                              |     1 |   193 |   478   (3)|
|   5 |      MAT_VIEW ACCESS BY INDEX ROWID   | DIGITAL_SKU                  |     1 |    48 |     5   (0)|
|   6 |       NESTED LOOPS                    |                              |    16 |  1392 |     5   (0)|
|   7 |        INLIST ITERATOR                |                              |       |       |            |
|   8 |         TABLE ACCESS BY INDEX ROWID   | LOCKER_ENTRY                 |    32 |  1248 |     1   (0)|
|*  9 |          INDEX RANGE SCAN             | LOCKER_ENTRY_ID_PK           |    32 |       |     1   (0)|
|  10 |        BITMAP CONVERSION TO ROWIDS    |                              |       |       |            |
|  11 |         BITMAP AND                    |                              |       |       |            |
|  12 |          BITMAP CONVERSION FROM ROWIDS|                              |       |       |            |
|* 13 |           INDEX RANGE SCAN            | IDX_DIGITAL_SKU_LEGACY_ID    |     1 |       |     1   (0)|
|  14 |          BITMAP CONVERSION FROM ROWIDS|                              |       |       |            |
|* 15 |           INDEX RANGE SCAN            | IDX_DIGITAL_PACKAGE_ID       |     1 |       |     1   (0)|
|* 16 |      MAT_VIEW ACCESS FULL             | DIGITAL_COMPATIBILITY        |  2098 |   217K|   472   (3)|
|* 17 |     INDEX RANGE SCAN                  | DEVICE_TYPE_IDX              |     1 |   143 |     1   (0)|
|  18 |    MAT_VIEW ACCESS BY INDEX ROWID     | DIGITAL_ENCODE_PROFILE       |     1 |    34 |     1   (0)|
|* 19 |     INDEX UNIQUE SCAN                 | DIGITAL_ENCODE_PROFILE_ID_PK |     1 |       |     1   (0)|
Predicate Information (identified by operation id):
   4 - access("DSKU"."DIGITAL_SKU_ID"=TO_NUMBER("DC"."OBJECT_ID"))
   9 - access("LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:1) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:2) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:3) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:4) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:5) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:6) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:7) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:8) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:9) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:10) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:11) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:12) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:13) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:14) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:15) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:16) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:17) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:18) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:19) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:20) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:21) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:22) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:23) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:24) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:25) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:26) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:27) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:28) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:29) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:30) OR
              "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:31) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:32))
  13 - access("LE"."SKU_ID"="DSKU"."LEGACY_ID")
  15 - access("LE"."DIGITAL_PACKAGE_ID"="DSKU"."DIGITAL_PACKAGE_ID")
  16 - filter("DC"."OBJECT_TYPE"=:SYS_B_0)
  17 - access("DT"."CAPABILITY_SET_ID"="DC"."CAPABILITY_SET_ID" AND
              UPPER("PARTNER_DEVICE_TYPE")=:33)
  19 - access("DC"."ENCODE_PROFILE_ID"="DEP"."DIGITAL_ENCODE_PROFILE_ID")
Note
   - 'PLAN_TABLE' is old version
Trace information
=================  
recursive calls     17
db block gets     0
consistent gets     239
physical reads     61
redo size     0
bytes sent via SQL*Net to client     742
bytes received via SQL*Net from client     1361
SQL*Net roundtrips to/from client     2
sorts (memory)     5
sorts (disk)     0

Linus wrote:
Yes Bravid, I get the point about removing the hint from the query, unless the execution plan changes without the index hint.
My concern is: are there any issues with having 5 tables in a single query, and will this degrade performance?
During certification practice I came across the advice "do NOT use many table joins" somewhere.
That is the reason I am asking you; sorry if I am wrong in any way.
Thanks...

There's nothing inherently wrong with joining lots of tables, and there isn't one specific thing that will degrade performance. You could have a query that joins 2 tables performing very badly and a query with 10 tables that performs very well. A lot of it comes down to the quality of the decisions the optimiser can make. To get the best decisions out of it you need to make sure it has enough information to work with. That means making sure stats are available and relevant for the data you're querying, and removing any hints unless you have a specific reason to use them.
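For example, gathering fresh statistics on the biggest table in your join might look like this (a sketch only; substitute your own owner and table names):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'DIGITAL_COMPATIBILITY',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);   -- also gathers stats for the table's indexes
END;
/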
Earlier this year I had the "joy" of working with a query that had 65 tables in it. It was an INSERT INTO...SELECT FROM and it was a clear example of where a single statement really should have been about 6 or 7 separate statements. The reason in this case was that there were 6 or 7 different sections to the query that had essentially no relationship with each other; they were all outer joined and used lots of analytic functions and case statements to categorise each row and populate the same columns with different values depending on which query block it had originated from.
This is an extreme example, but the point is you have to look at the statement and decide whether it does its job well. My personal preference is to avoid big "generic" SQL statements that cater for lots of different scenarios, because they can easily become over-complicated and difficult to maintain and tune.
HTH
David

Similar Messages

  • What are all the tables used? Experts, help me

    hi
    what are all the tables used for this report?
    Delivery Performance Report - to compare the customer delivery schedules with the deliveries created in SAP to determine whether they are early, on-time, or late. This measurement is done by comparing the customer request date against the goods issue date from the delivery.
    please, can anyone help me?
    thanks in advance

    hi.
    If you know the standard transaction or program for a purchase order, or for any function, you can find the tables affected by that transaction or program using transaction SE49.
    I am working on 4.6c and it exists in this version.
    If it does not exist in your version, then the other way to find the tables affected by a transaction or program is:
    first find the development class for that transaction (like VA01), then go to SE80 and enter that development class. It will show all the tables affected.
    Steps to find the development class, for example for VA01:
    (i) press F1 on any field, then click on Technical Info
    (ii) double click on the program name (SAPMV45B)
    (iii) click on Object Directory Entry in the GOTO menu
    (iv) from here you will get the development class (VA)
    Now in SE80 you will get all the database tables under the object directory entry node.
    Hope this will help.
    Regards,
    vikas

  • Can't we use BINARY SEARCH for a SORTED TABLE type? (Performance Improvement)

    Hi Experts,
    I have declared a sorted table type with a NON-UNIQUE key and want to use BINARY SEARCH in a READ statement. But while using BINARY SEARCH in the READ statement I'm facing an error. The error is:
    "Table LI_MARC is a SORTED TABLE or INDEX TABLE. The BINARY SEARCH
    addition is only allowed for these tables if the key specified is an
    initial part of the table key."
    Please find the details below:
    TYPES: tt_marc TYPE SORTED TABLE OF marc WITH NON-UNIQUE KEY matnr,werks.
    DATA: li_marc TYPE tt_marc.
    READ TABLE li_marc INTO marc WITH KEY matnr = i_mbew-matnr
                                          werks = i_mbew-bwkey BINARY SEARCH.
    To my understanding, there is no need to specify BINARY SEARCH for a SORTED table type. Please let me know whether I can use it.

    Hello,
    there is no need to mention Binary Search for sorted table TYPE.
    Yes, this is because for SORTED TABLEs the binary search algorithm is used by default for READ TABLE. You can still use the BINARY SEARCH addition, but its use is redundant.
    As for your case, you've defined the KEY fields incorrectly: there shouldn't be any comma(s) between the fields.
    TYPES: tt_marc TYPE SORTED TABLE OF marc WITH NON-UNIQUE KEY matnr werks.
    When you define it with commas
    TYPES: tt_marc TYPE SORTED TABLE OF marc WITH NON-UNIQUE KEY matnr, werks.
    the result is something like this:
    TYPES: tt_marc TYPE SORTED TABLE OF marc WITH NON-UNIQUE KEY matnr.
    TYPES: werks.
    Hence you were getting the syntax error!
    BR,
    Suhas
    PS: As for MARC you can use UNIQUE KEY addition because MATNR & WERKS are the key fields in the table.

  • Indexing of table LTAP - Performance of LT27 improved / but LT22 decreased

    Hi all,
    We had a performance issue with LT27, so to overcome it we created an index on the related fields in table LTAP.
    The performance improved a lot. But recently we received a complaint from the client that the performance of LT22 has reduced a lot. Both programs use table LTAP.
    The indexes are totally different: LT22 uses the fields LGNUM, VLTYP, VLPLA and PQUIT, while for LT27 the field used is VLENR. The LT22 program uses the LDB T3L.
    My question is: will the performance of the other programs which use table LTAP be reduced by the creation of the new index for LT27, even though we are using different fields?
    Please guide.
    Thanks,
    Vengal Rao.

    Hi,
    The program used is LDB ( SAPDBT3L )  and the code is
    A)   SELECT (LTAP_FIELDS-FIELDS) FROM LTAP
              INTO CORRESPONDING FIELDS OF TABLE ILTAP PACKAGE SIZE MAX_LTAP
                WHERE LGNUM = T3_LGNUM
                AND ( NLTYP IN INLTYP AND NLPLA IN INLPLA )
                AND PQUIT IN PQUIT
                AND (LTAP_SEL-WHERE_TAB).
              SORT ILTAP.                      "n_586127
              PERFORM ILTAP_AUSWERTEN.
            ENDSELECT.
    B)   The secondary index details:
    The table used is LTAP and the details of the fields in SECONDARY INDEX in the table is given in below sequence.
    1)  LGNUM
    2)  NLTYP
    3)  NLPLA
    4)  PQUIT
    Hope this information will help you to suggest a solution.
    Thanks,
    Vengal Rao

  • What are the main tables used in a vendor performance report?

    What are the main tables used in a vendor performance report?
    How many SELECT statements are in the ABAP reports?

    Refer to these links -
    vendor performance report !!!
    vendor performance report
    vendor performance report
    I need standard vendor performance report
    Regards,
    Amit
    Reward all helpful replies.

  • Pls help me to modify the query for performance improvement

    Hi,
    I have the below initialization
    DECLARE @Active bit =1 ;
    Declare @id int
    SELECT @Active=CASE WHEN id=@id and [Rank] ='Good' then 0 else 1 END  FROM dbo.Students
    I have to change this query in such a way that the conditions id=@id and [Rank]='Good' go into the WHERE clause of the query. In that case, how can I use a CASE statement to retrieve 1 or 0? Can you please help me to modify this initialization?

    I don't understand your query... maybe something like below? Or provide us with sample data and your expected output...
    SELECT *  FROM dbo.students
    where @Active=CASE
    WHEN id=@id and rank ='Good' then 0 else 1 END
    But I doubt you will get a performance improvement here.
    Do you have an index on id?
    If you are looking to get the data for @id with rank = 'Good', then use the query below; make sure you have an index on the (id, rank) combination.
    SELECT *  FROM dbo.students
    where  id=@id
    and rank ='Good' 

  • Performance issue with use of table VRKPA

    Hi,
    Here is the selection logic I am using. Table VRKPA is only used to map between KNA1 and VBRK, since VBRK and KNA1 do not have a direct primary key relationship.
    Please check and let me know why VRKPA is taking time and how I can improve the performance: I am fetching data from KNA1 very easily, while fetching from VRKPA is slow, and I then fetch FKDAT from VBRK.
    The idea behind using these tables is simply to get, for one KUNNR (from KNA1), the relevant entries based on FKDAT (a selection-screen input field). Please suggest.
        SELECT kunnr
               name1
               land1
               regio
               ktokd
               FROM kna1
               INTO TABLE it_kna1
               FOR ALL ENTRIES IN it_knb1
               WHERE kunnr = it_knb1-kunnr
               AND ktokd = '0003'.
        IF sy-subrc = 0.
          SORT it_kna1 BY kunnr.
          DELETE ADJACENT DUPLICATES FROM it_kna1 COMPARING kunnr.
        ENDIF.
      ENDIF.
      IF NOT it_kna1[] IS INITIAL.
        SELECT kunnr
               vbeln
               FROM vrkpa
               INTO TABLE it_vrkpa
               FOR ALL ENTRIES IN it_kna1
               WHERE kunnr = it_kna1-kunnr.
        IF sy-subrc = 0.
          SORT it_vrkpa BY kunnr vbeln.
        ENDIF.
      ENDIF.
      IF NOT it_vrkpa[] IS INITIAL.
        SELECT vbeln
               kunrg
               fkdat
              kkber
               bukrs
               FROM vbrk
               INTO TABLE it_vbrk
               FOR ALL ENTRIES IN it_vrkpa
               WHERE vbeln = it_vrkpa-vbeln.
        IF sy-subrc = 0.
          DELETE it_vbrk WHERE fkdat NOT IN s_indate.
          DELETE it_vbrk WHERE fkdat NOT IN s_chdate.
          DELETE it_vbrk WHERE bukrs NOT IN s_ccode.
          SORT it_vbrk DESCENDING BY vbeln fkdat.
        ENDIF.
      ENDIF.

    Hi,
    Transaction SE11
    Table VRKPA => Display (not Change)
    Click on "Indexes"
    Click on "Create" (if your system is Basis 7.00, then click on the "Create" drop-down icon and choose "Create extension index")
    Choose a name (up to 3 characters, starting with Z)
    Enter a description for the index
    Enter the field names of the index
    Choose "Save" (prompts for transport request)
    Choose "Activate"
    If after "Activate' the status shows "Index exists in database system <...>", then you have nothing more to dotable is very large the activation will not create the index in the database and the status remains "Index does nor exist". In that case:
    - Transaction SE14
    - Table VRKPA -> Edit
    - Choose "Indexes" and select your new index
    - Choose "Create database index"; mark the option "Background"
    - Wait until the job is finished and check in SE11 that the index now exists in the DB
    You don't have to do anything to your program because Oracle should choose the new index automatically. Run an SQL trace to make sure.
    Rgds,
    Mark

  • Performance improve using TEZ/HIVE

    Hi,
    I'm a newbie in HDInsight, so sorry for asking simple questions. I have queries around performance improvement of my Hive query on file data of 90 GB (15 GB * 6).
    We have set the execution engine to Tez. I heard the Avro format improves the speed of execution. Does Tez support the Avro SerDe, or do I need to upload *.jar files to WASB? I'm using the latest version. Any sample query?
    In Tez, will the ORC column format and Avro compression work together when we set the ORC compression level in Hive to Snappy or LZO? Is there any limitation on the number of columns for ORC tables?
    Is there any best compression technique for uploading data files to Blob storage, i.e. compress and then upload? I used *.gz, which compressed the file to 1/4 of its size, and uploaded it to Blob, but the problem is that *.gz is not splittable, so it will always use fewer (single) mappers. Or should I use Avro with Snappy compression? Does the Microsoft Avro Library perform Snappy compression, or is there any compression format that is both splittable and compressed?
    If the data structure of the file changes over time, will there be a need to reload the older data? Can the existing query work without a change in code?
    It has been said that Tez has real-time reporting capability, but when I query the 90 GB file (including GROUP BY and ORDER BY clauses) it takes almost 8 minutes on 20 nodes. Are there any pointers to improve performance further and get the query result in seconds?
    Mahender

    -- Tez is an execution engine; I don't think you need any additional jar file to get the Avro SerDe working on Hive when Tez is used. You can use AvroSerDe, AvroContainerInputFormat & AvroContainerOutputFormat to get Avro working when Tez is used.
    -- I tried creating a table with about 220 columns; although the table was empty, I was able to query from the table. How many columns does your table hold?
    CREATE EXTERNAL TABLE LargColumnTable02(t1 string,.... t220 string)
    PARTITIONED BY(EventDate string)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS ORC
    LOCATION '/data'
    tblproperties("orc.compress"="SNAPPY");
    -- You can refer to the "Getting Avro data into Azure Blob Storage" section of
    http://dennyglee.com/2013/03/12/using-avro-with-hdinsight-on-azure-at-343-industries/
    -- It depends on what data has changed, and whether you are using Hadoop, HBase etc.
    -- You will have to monitor your application and check the node manager logs if there is any pause in execution. It depends on what you are doing; I would suggest opening a support case to investigate further.

  • Altering a table degrades performance - help needed?

    Hi,
    I have a table called test_case which has 60 million records.
    I got a requirement to add a new column with a NOT NULL constraint and a default value of 0.
    When I try to add the new column with those constraints, the schema modification takes 40 minutes:
    ALTER TABLE test_case ADD (tax NUMBER(5) DEFAULT 0 NOT NULL);
    Is there any workaround to modify the schema in less time?
    I would appreciate if anybody help on this.
    Regards
    SP

    You wanted a faster approach; that's why I suggested this one. To my understanding there isn't any method faster than the CTAS method.
    You need to do something like this (if you are in a position to apply the changes; a SQL sketch of steps 2-6 follows after the list):
    1. Create the table using the CTAS method I mentioned:
    CREATE TABLE new_table ( col1,
                             col2,
                             tax NOT NULL )
    AS SELECT col1,
              col2,
              CAST(0 AS NUMBER(5))
       FROM original_table;
    2. Disable the constraints (original table)
    3. Rename Original_Table to Original_Table_Bak
    4. Rename New_Table to Original_Table
    5. Enable the constraints again
    6. Drop Original_Table_Bak
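    A minimal sketch of steps 2-6 (the constraint and table names here are assumptions for illustration):
    -- 2. Disable constraints that reference the original table (name is hypothetical)
    ALTER TABLE child_table DISABLE CONSTRAINT fk_child_original;
    -- 3. and 4. Swap the tables by renaming
    ALTER TABLE original_table RENAME TO original_table_bak;
    ALTER TABLE new_table RENAME TO original_table;
    -- 5. Re-enable the constraint
    ALTER TABLE child_table ENABLE CONSTRAINT fk_child_original;
    -- 6. Drop the backup once the swap has been verified
    DROP TABLE original_table_bak;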

  • For Statement on a table using add.instances - need help

    Hi,
    I made this table in an Acrobat form. The table uses JavaScript to add rows. The code I used is:
    _detail.addInstance(1);
    There are two fields on this table that I want to evaluate: 1) an index named index1 and 2) the value named numAmount.
    The script I am using to evaluate these two fields is as follows:
    if (form1.Page1.Subform1.Capital1.Capital1.detail.index1.rawValue == 2)
    then
    $.rawValue = form1.Page1.Subform1.Capital1.Capital1.detail.index1.numAmount.rawValue
    endif
    My problem is that I need to nest this if statement inside some kind of loop. I want the program to look at my table, determine how many instances the user has added, scroll through the table looking at index1, and if index1 = 2 then return the value in numAmount.
    I just can't get the syntax to work in either FormCalc or JavaScript.
    I am trying to write this code using the calculate method on a fourth field located at form1.#subform[2].Table5.Row5.NumericField1.
    I have attached the file if it helps. The table is on page one of the form and the display field is on page three.
    Any help would be much appreciated!

    This is an Acrobat Forms JavaScripting forum, you might find more help in one of the LiveCycle Designer forums.

  • Help Required: Excel Upload into Oracle Table Using PL/SQL Procedure/Package

    Please help, urgent help needed.
    The requirement is to upload an Excel file into an Oracle table using a PL/SQL procedure/package.
    The constraints are:
    1. The Excel file is on the user's/client PC.
    2. The application is on a remote server (Oracle Forms D2K).
    3. The user accesses the application via a Terminal Server login.
    4. So if the user uses the GET_FILE_NAME() function of D2K to get the Excel file, D2K will try to pick the file up from that remote server (because the user logged in via the Terminal Server option).
    5. We cannot use the UTL_FILE package or an Oracle directory to place the file on the server.
    6. We are using Oracle 8.7.
    So we need some PL/SQL package or function/procedure to upload an Excel file from the user's PC into an Oracle table.
    Please guide me with some code, a PL/SQL package, a hint, or any link.
    Just help to sort this issue out.
    You can also write to me at:
    [email protected], [email protected]

    I also tried to use this.
    But how can I use the SQLLDR command in a stored procedure?
    In SQL*Plus it is successful, but in a stored procedure/package PL/SQL does not recognise OS commands.
    So now my question is: how can I invoke the SQLLDR command from a stored procedure?

  • Help Required: How to Upload an Excel File into an Oracle Table Using a PL/SQL Procedure

    Please help, urgent help needed.
    The requirement is to upload an Excel file into an Oracle table using a PL/SQL procedure/package.
    The constraints are:
    1. The Excel file is on the user's/client PC.
    2. The application is on a remote server (Oracle Forms D2K).
    3. The user accesses the application via a Terminal Server login.
    4. So if the user uses the GET_FILE_NAME() function of D2K to get the Excel file, D2K will try to pick the file up from that remote server (because the user logged in via the Terminal Server option).
    5. We cannot use the UTL_FILE package or an Oracle directory to place the file on the server.
    6. We are using Oracle 8.7.
    So we need some PL/SQL package or function/procedure to upload an Excel file from the user's PC into an Oracle table.
    Please guide me with some code, a PL/SQL package, a hint, or any link.
    Just help to sort this issue out.
    You can also write to me at:
    [email protected], [email protected]

    TEXT_IO is a PL/SQL package available only in Forms (you'll want to post in the Forms forum for more information). It is not available in a stored procedure in the database (where the equivalent package is UTL_FILE).
    If the Terminal Server machine and the database machine do not have access to the file system on the client machine, no application running on either machine will have access to the file. Barring exceptional setups (like the FTP server on the client machine), your applications are not going to have more access to the client machine than the operating system does.
    If you map the client drives from the Terminal Server box, there is the potential for your Forms application to access those files. If you want the files to be accessible to a stored procedure in the database, you'll need to move the files somewhere the database can access them.
    Justin
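    Building on the last point: once the file has been moved somewhere the database can access and saved as CSV (UTL_FILE cannot parse the binary .xls format), a stored procedure could load it roughly like this. This is a sketch only; the directory location 'CSV_DIR', the file name, and the target table are assumptions, and on old releases FOPEN takes a path listed in the utl_file_dir parameter rather than a directory object name.
    DECLARE
      f    UTL_FILE.FILE_TYPE;
      line VARCHAR2(4000);
      p    PLS_INTEGER;
    BEGIN
      f := UTL_FILE.FOPEN('CSV_DIR', 'upload.csv', 'r');
      LOOP
        BEGIN
          UTL_FILE.GET_LINE(f, line);        -- raises NO_DATA_FOUND at end of file
        EXCEPTION
          WHEN NO_DATA_FOUND THEN EXIT;
        END;
        p := INSTR(line, ',');               -- split a two-column CSV line
        INSERT INTO excel_upload (col1, col2)
        VALUES (SUBSTR(line, 1, p - 1), SUBSTR(line, p + 1));
      END LOOP;
      UTL_FILE.FCLOSE(f);
      COMMIT;
    END;
    /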

  • Can I pass a table using field-symbols to a PERFORM

    Can I pass an internal table using a field-symbol via a PERFORM that is stored in another program?
    For example, I want to pass lt_data using a field-symbol. If I can do this, please tell me how to define a field-symbol for a table and how to set up the parameters on the FORM.
       perform TEST_FIELD_SYMBOLS in program zadd_data
                      changing lt_data[]
                       if found.
    Thanks.
    Regards,
    Ryan

    Since in ABAP all FORM parameters are passed by reference, it makes (imho) no difference whether you pass a table directly or via a field-symbol.
    You can pass a REF TO DATA to a FORM and then assign it to a field-symbol, as shown in the following example.
    TYPES: BEGIN OF s_data,
             data TYPE c,
           END OF s_data,
           s_tab TYPE STANDARD TABLE OF s_data.
    TYPES: r_tab TYPE REF TO data.
    START-OF-SELECTION.
      DATA: t_foo TYPE s_tab.
      DATA: ref_foo TYPE r_tab.
      GET REFERENCE OF t_foo INTO ref_foo.
      PERFORM my_form_fs USING ref_foo.
    FORM my_form_fs USING u_ref TYPE r_tab.
      FIELD-SYMBOLS: <fs> TYPE s_tab.
      DATA: w_tab TYPE s_data.
      ASSIGN u_ref->* TO <fs>.
      w_tab-data = 'X'.
      APPEND w_tab TO <fs>.
    ENDFORM.
    This also works for external PERFORMs.
    Best regards,
        Sebastian

  • Will there be a performance improvement with separate tables vs. a single table with multiple partitions?

    Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually bear semantics - read: if data is stored in one table it means something else than the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looking at it on a storage-technology level, table partitions are practically the same as tables. Each partition has got its own delta store and can be loaded and displaced to/from memory independently of the others.
    Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share of your total runtime (which is unlikely), then partitioned tables could have a negative performance impact.
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars

  • Help on Updating master table using SAP NetWeaver 2004s

    Dear BSP-ians,
    I need to edit and modify multiple records and have them updated in the master table using MVC. We are using SAP NetWeaver 2004s.
    We have gone through the sample code already present in the forum; we got it working using Pages with Flow Logic on ECC 6.0, but it does not execute on SAP NetWeaver 2004s.
    Help me with this issue.
    Points will be rewarded for helpful info.
    Gokul.N

    My issue is that the sample code already present in the forums works in ECC 6.0, but we are using SAP NetWeaver 2004s.
