Backup of a set of tables

Hi All,
We need to dump a set of tables from the BACKUP schema to a dump file so that they can be taken offline from the system.
We want to drop those tables once the dump file is created so that we can free some space on the system.
I just want to know whether we can export tables based on their creation date, for example all tables created before 2008. This might cover approximately 300 tables. We don't want to export the entire schema, just a part of it.
Any inputs on this would be highly appreciated.
Regards
Sam

user13271724 wrote:
Dear Sam,
Why don't you just buy new hard drives? Disk space is cheap.
AFAIK there is no export parameter that selects only the tables created in a specific time range.
Just get the creation dates from a data dictionary view and pass the table names to expdp using the TABLES parameter.
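For example, a query along these lines (a sketch, assuming the tables live in the BACKUP schema and you have access to DBA_OBJECTS) lists the candidates; spool the output and paste the names into the TABLES parameter of an expdp parameter file:
-- tables in BACKUP created before 2008 (candidates for export and drop)
SELECT object_name
FROM   dba_objects
WHERE  owner       = 'BACKUP'
AND    object_type = 'TABLE'
AND    created     < DATE '2008-01-01'
ORDER  BY object_name;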
My Oracle Video Tutorials - http://kamranagayev.wordpress.com/oracle-video-tutorials/

Similar Messages

  • LO set up tables deleted while upgrading the source system from 4.6C to ECC 6

    As part of the ECC 6.0 upgrade tasks, our BASIS team deleted all the LO set up tables in the source system (4.6C). After the upgrade, we are not finding any queues in RSA7 or in SMQ1, but we have a lot of deltas and a few full loads running on the BW (3.1) side for these LO's.
    1. What steps do we need to take in order to get the existing deltas and full loads working in BW (3.1)? With the new ECC 6.0 we don't plan to enhance any data sources. So, do we need to set up / fill the tables again for the LO's? If yes, since we have huge data volumes in the source system, it will take a lot of time to fill the tables. Is there any option to fill the tables only for a specific period and enable deltas?
    Please help urgently in terms of providing step-by-step guidelines.
    2. We also have the following delta-enabled DataSources in the existing source system. Please advise how to get these deltas working again after the source system upgrade.
    0PM_OM_OPA_2
    0PM_OM_OPA_1
    0FI_AR_4
    COPA DATA SOURCE
    Thanks in advance.

    BW USA,
    I think the options are given clearly. You will have to reload the data, and I hope you have a DSO (data warehouse layer) on your BW side before the data goes to the cube; otherwise you would have to do a selective delete and reload that data to avoid duplication.
    There is one other way, but I have never tried it. I am sure your Basis team took a backup of the system before the upgrade. Try to find out whether you can restore the delta queue (advice from SAP would be helpful, as many control tables need to be in sync). If this option is not possible, then you would have to do what is already described.
    For any 0FI_* DataSources, you can go by the last available period: reinitialize without data transfer and load the missing whole period. As these are after-image data and you should have a DSO in between, you should not have any problems.
    For CO-PA, whether it is account-based or costing-based, you can easily re-establish the delta and cover the missing periods with a fiscal-period load with the repair option.
    The steps for FI & CO are below.
    1. Reload missing data by period with the FULL LOAD WITH REPAIR option.
    2. Re-init with conditions (if you already had any) without data transfer.
    I think this should fix your issues. I have not used the other two DataSources.
    Good luck,
    Alex (Arthur Samson)

  • How to set the table input in Query template?

    Hi all.
    I need to call BAPI_OBJCL_CHANGE with an import parameter and a table as input. I have done this in BLS: I set the table input in the
    form of XML. In BLS, I get the output (the value gets changed in SAP R/3 as I specified in BLS). But if I set the same XML structure in
    the query template, I don't get the output; the table input parameter does not accept that XML source. How do I set the table input in a query template?
    can anyone help me?
    Regards,
    Hemalatha

    Hema,
    You probably need to XML-encode the data so that it will pass properly, and then xmldecode() it to set the BAPI input value.
    Sam

  • How can I enter multiple sales order numbers into the set up table

    Hi,
    I need to reload about 8,000 sales orders into the set up table, and the SO numbers are very scattered, so I cannot enter them as a range. In transaction OLI7BW the Sales Document field has no multiple selection option, so I would have to enter the documents ONE BY ONE! Could you please tell me if there is a faster way to enter them?
    regards,
    Tansu Aksu @ CHEP

    Try this link:
    http://www.scmexpertonline.com/downloads/SCM_LSMW_StepsOnWeb.doc
    Regards.
    PS: if it helps you, please assign points.

  • Extracting similar data from a dynamic set of tables

    I want to select common fields from an arbitrary set of tables, adding an extra field that holds the table name, and return all of these rows in a single table. I know the steps, but I am getting bogged down in the details (being new to Oracle):
    --Create a temporary table.
    --Generate a dynamic SQL statement to grab the data for each table of interest.
    --Execute that query and store the results in the temporary table.
    --Output the entire temporary table and delete it.
    Here is my latest attempt:
    ================
    Create global temporary table myTempTable (
         Title varchar2(30),
         IdType number(8),
         Count number(8)
    ) on commit delete rows;
    declare
    sql_stmt VARCHAR2(200);
    tname VARCHAR2(30);
    begin
    for tname in (
    select distinct table_name from sys.ALL_TAB_COLS
              where owner='me' and table_name like '%ASSIGNS' )
    loop
    sql_stmt :=
    'insert into myTempTable(Title, IdType, Count) ' ||
         'select '':1'', IdType, count(*) from me.:1 group by IdType';
    EXECUTE IMMEDIATE sql_stmt USING tname;
    end loop;
    select * from myTempTable;
    commit; -- to delete the temp table
    end;
    ================
    The error messages are:
    --expressions have to be of SQL types
    --an INTO clause is expected in this SELECT statement
    What am I missing here?

    (1) First of all, I am executing the whole script in SQL*Plus; that's why I use SQL*Plus commands like the slash operator.
    <br>
    (2) You cannot pass a table name through a bind variable in dynamic SQL (identifiers must be concatenated into the statement text), and a bare SELECT inside PL/SQL needs an INTO clause, hence the ref cursor below:
    SQL> VAR cur refcursor
    SQL> CREATE TABLE mytemptable AS SELECT ename table_name, deptno idtype, sal cnt FROM emp WHERE 1=0
    Table created.
    SQL> DECLARE
       sql_stmt   VARCHAR2 (200);
       tname      VARCHAR2 (30);
    BEGIN
       FOR tname IN (SELECT DISTINCT table_name
                                FROM SYS.all_tab_cols
                               WHERE owner = USER AND table_name LIKE '%EMP')
       LOOP
          sql_stmt :=
                'insert into myTempTable select '''
             || tname.table_name
             || ''' table_name, deptno IdType, count(*) cnt from '
             || tname.table_name
             || ' group by deptno';
          EXECUTE IMMEDIATE sql_stmt;
       END LOOP;
       OPEN :cur FOR
          SELECT *
            FROM mytemptable;
       EXECUTE IMMEDIATE 'drop table myTempTable';
    END;
    PL/SQL procedure successfully completed.
    SQL> PRINT :cur
    TABLE_NAME     IDTYPE        CNT
    EMP                10          3
    EMP                20          5
    EMP                30          6
    3 rows selected.
    Since we DROP the table at the end we don't need an explicit commit; all DDL statements commit implicitly anyway.

  • Not getting set up record in Set up tables

    Hi ,
    I can see revaluation records in MB5B (stock on posting date) for one particular posting date, but I cannot see this record on the BW side; it is not even available at the RSA3 level.
    So what could be the reason it is available in the database table but not filled into the set up tables?
    regards
    raja

    Hi vishwanath,
    This is because you have not deleted the delta queue (the MCEX03 entries in LBWQ).
    You first have to delete the MCEX03 entries; only then should you fill the setup tables.
    Follow these steps:
    1. First go to LBWQ and delete the MCEX03 entries.
    2. Refresh.
    3. Check the DataSources in RSA3 (the DataSources should show zero records).
    4. Fill the setup table.
    5. Then check LBWQ again; it should no longer contain MCEX03 entries.
    6. Check the DataSources in RSA3 (the DataSources should now have data).
    7. Do the replication & extract the data.
    This is the solution to your problem (I have faced it myself).
    Thanks,
    kiran.

  • How to extract data into the set-up table for 2LIS_06_INV LIS structure

    We are using ECC 6.0 and SAP BI NW 2004S. I activated the 2LIS_06_INV (Invoice Verification) structure. Interestingly, I don't see any events under this structure (MC06M_0ITM); my understanding is that the events usually determine what type of data is generated for a given structure.
    I see Invoice Verification when I use the Inventory Management - Perform Setup option when filling the set-up tables. However, when I use this option, I get a message saying "No extraction structure active or no BW connected".
    Can someone list the prerequisites and the steps to load the set-up table for the 2LIS_06_INV structure?
    Thanks,
    Sanjay

    1: RSA5 - activate the DataSource.
    2: LBWE - activate the DataSource again.
    3: SBIW - fill the setup table: 'Settings for Application-Specific DataSources (PI)' -> 'Initialization' -> 'Filling in the Setup Table' -> 'Application-Specific Setup of Statistical Data' -> 'Invoice Verification - Execute Reconstruction'.

  • Error while filling the set up table for 2LIS_11_VAITM

    Dear Experts,
    I am facing an error while filling the set up table for 2LIS_11_VAITM in the source system.
    The error was "Company code for sales org 9000 does not exist (document 1326)".
    When I checked the document in the header table VBAK, the company code was blank for that document.
    The company code was included in the selections for the setup table.
    As far as I know, the error means that a company code should be maintained for the sales document; I think the document was not created properly in the source system.
    Could you please help me to overcome this problem?
    Regards,
    Sunil...

    In the selection screen for filling the setup table, there is a parameter "No. tolerated faulty documents".
    You'll have to enter at least '1' here in your case.
    It might be that there are other erroneous records; in that case it will crash again, and you'll have to increase the value of this parameter.
    (Under "normal" circumstances, all documents in the source should be correct... but apparently it's possible to circumvent this.)

  • Error while filling set up table SD-Billing Documents - Perform Setup

    Hello friends,
    While filling the set up table via SD-Billing Documents - Perform Setup,
    I am getting the error 'TSV_TNEW_PAGE_ALLOC_FAILED'. I am executing this job as a background process. Please suggest a solution.
    Regards
    Nilesh Vakil

    Hi,
    Try some other LO extractor in RSA3 to check whether you get the same error.
    If you don't, then it may not be a system memory issue; check whether there is a user exit written for the extractor. The problem could be in the user exit, where an internal table may overflow and lead to the page allocation dump.
    If this is not the case either, try reducing the packet size.
    Let me know if the issue is resolved.
    Regards,
    Ashwin

  • Error while filling up set up tables in SAP R/3

    Experts,
    I am trying to fill the set up tables for application component 11 (SD) for my LO extraction. It gives me the following error:
    <b>Error determining rate: foreign curr. RMB local curr. AUD date 06/19/2007 (doc. 100025787)</b>
    When I check in RSA3, I have data before 06/19/2007 but nothing is filled after that.
    Does anyone know how to fix this?
    Thanks
    Ashwin

    Hi Ashwin,
    Did you find a solution for this issue (filling the setup tables for application component 11)?
    "Error determining rate: foreign curr. RMB local curr. AUD date 06/19/2007 (doc. 100025787)"
    thanks in advance.
    Bhaskar.

  • How to set dynamic table name in sql query?

    I want to set the table name dynamically via a parameter in a SQL query, like this:
    select * from :tbname
    But running the report gives an error: the report's table name is invalid.
    What can I do? Thanks!

    Hi,
    that's only possible inside a data template with a lexical parameter.
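    For illustration, a minimal sketch (assuming a data template parameter named tbname, referenced lexically with an ampersand rather than as a bind variable):
    select * from &tbname
    The lexical value is substituted into the SQL text before the statement is parsed, which is why it can stand in for an identifier where a bind variable cannot.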
    Regards
    Rainer

  • How to copy a set of tables from one database to another periodically?

    We have a 4-node RAC primary database (10.2.0.2) with a physical standby (10.2.0.2) on our production site. Of late we have noticed that one of the applications (APP2) is causing heavy load on the primary database servers due to large data downloads. Our primary database has 2 schemas,
    1) one being the main schema with all objects, (USER1)
    2) and the other has views that query some set of tables from the main schema. (USER2)
    The application APP2 uses the USER2 views to query and download huge amounts of data periodically. We need to be able to give accurate results to APP2, but at the same time take the load off the database, as APP2 is not our main application.
    We would like to know whether there are any cost-effective options in Oracle to do this, and if so, what the best option is. Does anyone have experience setting up something like this before?
    We have thought of creating another 10.2.0.2 database on a different server and giving it regular updates (like data feeds) from the current database. The current database's data changes quite often, so the feeds would have to run often to keep the data current on the new database. So we are not exactly sure how to go about it. Would a COPY command help?
    Please advise.

    user623066 wrote:
    Our 4 node RAC is already busy with our main application, which has its connections spread across all 4 nodes.
    Our main application's services are the same on all nodes and use all 4 nodes in the same way.
    There are some other utilities that we run from one of the app servers that connect to only 1 of the nodes.
    APP2 uses all 4 servers, which is again controlled by connection pooling and distributes the load.
    Wouldn't separate services be more beneficial here? If APP2 is locked down to one node during normal operation, that ensures that other connections aren't going to be competing for hardware with APP2 on 3 of the 4 nodes. If APP2 is generating less than 25% of the total load, you can let the other applications use whatever hardware resources are left idle on the node APP2 is locked down to.
    By large data downloads, I meant both the increase in network traffic and the CPU load on the database nodes.
    We are already using Resource Manager to limit the resources allocated to USER2, which APP2 uses.
    And we have also limited the large downloads to the early hours of the day, when the traffic from our main application is low.
    But this has still not been optimal for APP2's usage requirements. APP2 also runs queries all through the day, but has a limit on the number of rows downloaded during peak hours.
    Can you explain a bit more about why using Resource Manager hasn't been sufficient? That's normally a pretty good way to prevent one hungry user from drastically affecting everyone else. Perhaps you just need to tweak the configuration here.
    Logical standby seems a good option, but we need to keep our physical standby in place. Is it possible to have a logical standby and a physical standby (of course on separate servers)?
    Sure. You can have as many standby servers of whatever type you'd like.
    Could we use a COPY command to copy the data for the set of tables to a new database? Or is that also a complex option?
    You could, yes. COPY is a SQL*Plus command that has been deprecated for copying data between Oracle databases for quite a while. It only works from SQL*Plus and is only suited to one-time operations (i.e. there is no incremental COPY command). I can just about guarantee that's not what you want here.
    How do materialized views work? Wouldn't they still reside on the main database? Or is it possible to have remote materialized views?
    You probably don't want materialized views, but if you decide to go down that path:
    - You'd create materialized view logs on the base tables to track changes
    - You'd create materialized views on the destination database that select data over a database link back to the source database
    - You'd put those materialized views into one or more refresh groups that are scheduled to refresh periodically
    - During a refresh, assuming incremental refreshes, the materialized view logs are read and applied on the destination system to bring the materialized views up to date.
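    A minimal sketch of that setup (table, link, and refresh-group names are hypothetical):
    -- On the source database: track changes to the base table
    CREATE MATERIALIZED VIEW LOG ON user1.orders WITH PRIMARY KEY;
    -- On the destination database: an incrementally refreshable copy
    -- (src_db is a database link back to the source)
    CREATE MATERIALIZED VIEW orders_mv
      REFRESH FAST
      AS SELECT * FROM user1.orders@src_db;
    -- Put the materialized view in a refresh group that refreshes every 15 minutes
    BEGIN
      DBMS_REFRESH.MAKE(
        name      => 'app2_grp',
        list      => 'orders_mv',
        next_date => SYSDATE,
        interval  => 'SYSDATE + 15/1440');
    END;
    /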
    Justin

  • JOIN ON 2 different sets of tables depending on the result of the first set

    I have a query that returns results. I want to join this query to
    2 different sets of tables depending on whether the first set has a result or not:
    if the first set doesn't return any records, then check the second set.
    SELECT 
    peo.email_address,
    r.segment1 requistion_num,
    to_char(l.line_num) line_num,
    v.vendor_name supplier, 
    p.CONCATENATED_SEGMENTS category,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
    pe.full_name requestor,
    l.item_description,
    pr.segment1 project_num,
    t.task_number,
    c.segment1,
    c.segment2
    FROM po_requisition_headers_all r,
         po_requisition_lines_all l,  
    (SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date  FROM
    (SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
    OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
    FROM po_req_distributions_all pod) WHERE rn = 1) d,
    gl_code_combinations c,
    POR_CATEGORY_LOV_V p,
    per_people_v7 pe,
    PA_PROJECTS_ALL pr,
    PA_TASKS_ALL_V t,
    ap_vendors_v v
    WHERE  d.creation_date >= nvl(to_date(:DATE_LAST_CHECKED,
    'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
    AND
    l.requisition_header_id = r.requisition_header_id
    AND l.requisition_line_id = d.requisition_line_id
    AND d.code_combination_id = c.code_combination_id
    AND r.APPS_SOURCE_CODE = 'POR'
    AND l.category_id = p.category_id
    AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
    AND l.to_person_id = pe.person_id
    AND pr.project_id(+) = d.project_id
    AND t.project_id(+) = d.project_id
    AND t.task_id(+) = d.task_id
    AND v.vendor_id(+) = l.vendor_id
    and r.requisition_header_id in(
    SELECT requisition_header_id FROM po_requisition_lines_all pl                    
    GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
    group by
    peo.email_address,
    r.REQUISITION_HEADER_ID,
    r.segment1 ,
    to_char(l.line_num) ,
    v.vendor_name, 
    p.CONCATENATED_SEGMENTS ,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
    pe.full_name ,
    l.item_description,
    c.segment1,
    c.segment2,
    pr.segment1 ,
    t.task_number
    <b>I want to join this query with this first set </b>
    SELECT b.NAME, c.segment1 CO, c.segment2 CC,
              a.org_information2 Commodity_mgr,
              b.organization_id, p.email_address
         FROM hr_organization_information a, hr_all_organization_units b, pay_cost_allocation_keyflex c, per_people_v7 p
        WHERE a.org_information_context = 'Financial Approver Information'
          AND a.organization_id = b.organization_id
           AND b.COST_ALLOCATION_KEYFLEX_ID = c.COST_ALLOCATION_KEYFLEX_ID
           and a.ORG_INFORMATION2 = p.person_id
          AND NVL (b.date_to, SYSDATE + 1) >= SYSDATE
          AND b.date_from <= SYSDATE;
    <b>If this doesn't return any result then I need to join the query with the 2nd set</b>
    select lookup_code, meaning, v.attribute1 company, v.attribute2 cc,
                decode(v.attribute3,null,null,p1.employee_number || '-' || p1.full_name) sbu_controller,
                decode(v.attribute4,null,null,p2.employee_number || '-' || p2.full_name) commodity_mgr
                from fnd_lookup_values_vl v,
                per_people_v7 p1, per_people_v7 p2
                where lookup_type = 'BIO_FIN_APPROVER_INFO'
                  and v.attribute3 = p1.person_id(+)
                and v.attribute4 = p2.person_id(+)
                order by lookup_code
    How do I do it?

    I have hard-coded the 2 join sets into one using UNION ALL, but if a record exists in both sets, how would I differentiate between the 2 sets?
    COUNT(*) will only give the total records.
    Suppose there are 14 records in total:
    the first set gives 12 records,
    the second set gives 4 records.
    But I want only 14 records, which could be 12 from set 1 and 2 from set 2, since set 1 and set 2 can have common records.
    SELECT 
    peo.email_address,
    r.segment1 requistion_num,
    to_char(l.line_num) line_num,
    v.vendor_name supplier, 
    p.CONCATENATED_SEGMENTS category,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
    pe.full_name requestor,
    l.item_description,
    pr.segment1 project_num,
    t.task_number,
    c.segment1,
    c.segment2
    FROM po_requisition_headers_all r,
         po_requisition_lines_all l,  
    (SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date  FROM
    (SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
    OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
    FROM po_req_distributions_all pod) WHERE rn = 1) d,
    gl_code_combinations c,
    POR_CATEGORY_LOV_V p,
    per_people_v7 pe,
    PA_PROJECTS_ALL pr,
    PA_TASKS_ALL_V t,
    ap_vendors_v v
    WHERE  d.creation_date >= nvl(to_date(:DATE_LAST_CHECKED,
    'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
    AND
    l.requisition_header_id = r.requisition_header_id
    AND l.requisition_line_id = d.requisition_line_id
    AND d.code_combination_id = c.code_combination_id
    AND r.APPS_SOURCE_CODE = 'POR'
    AND l.category_id = p.category_id
    AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
    AND l.to_person_id = pe.person_id
    AND pr.project_id(+) = d.project_id
    AND t.project_id(+) = d.project_id
    AND t.task_id(+) = d.task_id
    AND v.vendor_id(+) = l.vendor_id
    and r.requisition_header_id in(
    SELECT requisition_header_id FROM po_requisition_lines_all pl                    
    GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
    group by
    peo.email_address,
    r.REQUISITION_HEADER_ID,
    r.segment1 ,
    to_char(l.line_num) ,
    v.vendor_name, 
    p.CONCATENATED_SEGMENTS ,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
    pe.full_name ,
    l.item_description,
    c.segment1,
    c.segment2,
    pr.segment1 ,
    t.task_number
    UNION ALL
    SELECT 
    r.segment1 requistion_num,
    to_char(l.line_num) line_num,
    v.vendor_name supplier, 
    p.CONCATENATED_SEGMENTS category,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
    pe.full_name requestor,
    l.item_description,
    pr.segment1 project_num,
    t.task_number,
    c.segment1,
    c.segment2
    FROM po_requisition_headers_all r,
         po_requisition_lines_all l,  
    (SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date  FROM
    (SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
    OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
    FROM po_req_distributions_all pod) WHERE rn = 1) d,
    gl_code_combinations c,
    POR_CATEGORY_LOV_V p,
    per_people_v7 pe,
    PA_PROJECTS_ALL pr,
    PA_TASKS_ALL_V t,
    ap_vendors_v v,
    fnd_lookup_values_vl flv,
    per_people_v7 p1,
    per_people_v7 p2
    WHERE  d.creation_date >= nvl(to_date('11-APR-2008',
    'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
    AND
    l.requisition_header_id = r.requisition_header_id
    AND l.requisition_line_id = d.requisition_line_id
    AND d.code_combination_id = c.code_combination_id
    AND r.APPS_SOURCE_CODE = 'POR'
    AND l.org_id = 141
    AND l.category_id = p.category_id
    AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
    AND l.to_person_id = pe.person_id
    AND pr.project_id(+) = d.project_id
    AND t.project_id(+) = d.project_id
    AND t.task_id(+) = d.task_id
    AND v.vendor_id(+) = l.vendor_id
    AND flv.attribute1=c.segment1
    AND flv.attribute2=c.segment2
    AND flv.lookup_type = 'BIO_FIN_APPROVER_INFO'
    and flv.attribute3 = p1.person_id(+)
    and flv.attribute4 = p2.person_id(+)
    and r.requisition_header_id in(
    SELECT requisition_header_id FROM po_requisition_lines_all pl                    
    GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
    group by
    r.REQUISITION_HEADER_ID,
    r.segment1 ,
    to_char(l.line_num) ,
    v.vendor_name, 
    p.CONCATENATED_SEGMENTS ,
    to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
    TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
    pe.full_name ,
    l.item_description,
    c.segment1,
    c.segment2,
    pr.segment1 ,
    t.task_number
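    One way to tell the branches apart (a sketch with hypothetical single-column sets standing in for the two queries above) is to tag each UNION ALL branch with a literal source column and keep a set-2 row only when it is not already in set 1:
    SELECT 'SET1' AS src, s1.req_id
    FROM   set1_view s1
    UNION ALL
    SELECT 'SET2' AS src, s2.req_id
    FROM   set2_view s2
    WHERE  NOT EXISTS (SELECT 1 FROM set1_view s1 WHERE s1.req_id = s2.req_id);
    The src column then tells you which set each of the 14 rows came from.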

  • Dataset from two sets of tables based on a condition

    I have two queries that return the same columns from two different sets of tables (the column mapping has been taken care of). The return type is an out ref cursor (P_SUPPLY_REORDER).
    Query 1-SO
    select
    so.SMO_NO,
    so.SPLY_ORD_DT,
    so.fk_CUST_ID as CUST_ID,
    so.CUST_PO_NO,
    so.ATTENTION_NAME,
    sum(sol.SPLY_ORD_QTY) as SPLY_ORD_QTY,
    --sum(sol.sply_shp_qty),
    so.ST_NAME,
    so.ADDR_LN_2,
    so.ADDR_LN_3,--sta.SHIP_TO_ADDRESS_LINE_3,
    so.CITY_NAME,
    so.ST_TERR_CD,
    so.ZIP_CD,
    so.SPCL_SHP_INSTR_TXT,
    so.SHP_CNFRM_DT
    ,XCOM_ORDER_NO
    from
    supply_order so,
    supply_order_line sol,
    XCOM_ORDER_HEADER
    where
    so.FK_CUST_ID =in_cust_id
    and so.pc_ord_no = sol.fk_pc_ord_no and
    XCOM_ORDER_HEADER.FK_PC_ORD_NO = so.PC_ORD_NO
    group by so.SMO_NO, so.SPLY_ORD_DT, so.fk_CUST_ID,
    so.CUST_PO_NO, so.ATTENTION_NAME,
    so.ST_NAME, so.ADDR_LN_2, so.ADDR_LN_3, so.CITY_NAME, so.ST_TERR_CD,
    so.ZIP_CD, so.SPCL_SHP_INSTR_TXT, so.SHP_CNFRM_DT, XCOM_ORDER_NO;
    Query-2 Xcom
    select
    null as sMO_NO,
    xso.created_date as SPLY_ORD_DT,
    xso.fk_cust_id as cust_id,
    cust.cust_po_no as cust_PO_NO
    ,(sta.SHIP_TO_ATTN_FIRST_NAME||''||sta.SHIP_TO_ATTN_LAST_NAME) as attention_name,
    xsol.CARTONS_ORDERED as SPLY_ORD_QTY,
    --sum(sol.sply_shp_qty),
    sta.SHIP_TO_ADDRESS_LINE_1 as ST_NAME,
    sta.SHIP_TO_ADDRESS_LINE_2 as ADDR_LN_2,
    --sta.SHIP_TO_ADDRESS_LINE_3,
    NULL as ADDR_LN_3,
    sta.ship_to_city as CITY_NAME,
    sta.SHIP_TO_STATE as ST_TERR_CD,
    sta.SHIP_TO_POSTAL_CODE as ZIP_CD,
    sta.SHIPPING_INSTRUCTIONS as SPCL_SHP_INSTR_TXT,
    null  as SHP_CNFRM_DT,
    xso.XCOM_ORDER_NO as XCOM_ORDER_NO
    from
    XCOM_ORDER_HEADER xso,
    XCOM_ORDER_LINES xsol,
    customer cust,
    ship_to_address sta
    where
    cust.cust_id = xso.fk_cust_id and
    sta.fk_cust_id = xso.fk_cust_id
    and xso.FK_CUST_ID =in_cust_id
    and xso.FK_PC_ORD_NO is null
    and xso.xcom_order_no = xsol.fk_xcom_order_no;
    Now the requirement is:
    One of four conditions is possible for each Supply Reorder Number:
    •     Both table queries return no records
    -     Populate all the P_SUPPLY_REORDER output fields with nulls
    •     SUPPLY_ORDER returns a record, but XCOM_ORDER_HEADER returns no records
    -     Populate output fields with values from the join of SUPPLY_ORDER and SUPPLY_ORDER_LINE.
    •     SUPPLY_ORDER returns no records, but XCOM_ORDER_HEADER returns one record
    -     Populate output fields with values from the join of XCOM_ORDER_HEADER and XCOM_ORDER_LINES.
    •     SUPPLY_ORDER returns a record, and XCOM_ORDER_HEADER returns a record; find the latest order by comparing max(SPLY_ORD_DT)
    from SUPPLY_ORDER with max(CREATED_DATE) from XCOM_ORDER_HEADER.
    -     If the latest order is in SUPPLY_ORDER, then populate output fields with values from the join of SUPPLY_ORDER and SUPPLY_ORDER_LINE.
    -     If order dates are equal from both join results, then populate output fields with values from the join of SUPPLY_ORDER and SUPPLY_ORDER_LINE.
    -     If the latest order is in XCOM_ORDER_HEADER, then populate output fields with values from the join of XCOM_ORDER_HEADER and XCOM_ORDER_LINES.
    The question is how we can switch between the queries to pull the respective dataset based on these conditions (checking which table join returns a row and, if both return one, which has the latest order), with all of this logic in a single SQL statement returned as an OUT ref cursor.
    Your help will be really appreciated. Thanks much in advance.

    It would be easier if you supplied a dataset we can work with. And I assume the queries are called within another cursor which returns in_cust_id, right?
    The best/fastest way would be to say goodbye to nested cursors.
    You need:
    the in_cust_id query;
    SUPPLY_ORDER extended with a new field: max(SPLY_ORD_DT) over (partition by cust_id) max_sply_ord_dt;
    XCOM_ORDER_HEADER extended with a new field: max(CREATED_DATE) over (partition by cust_id) max_created_date.
    Then you build a 3-level SELECT:
    the innermost level joins all 3 sources,
    the second level determines the rule,
    the outermost level applies the rule to each field:
    SELECT
    -- case statement to each field:
    CASE rule_id
    WHEN 1 THEN null
    WHEN 2 THEN s_smo_no
    WHEN 3 .... END  SMO_NO,
    SELECT
    -- check all rules:
    case
    --• Both table queries return no records
    when s_in_cust_id IS NULL and h_in_cust_id IS NULL then 1
    --• SUPPLY_ORDER returns a record, but XCOM_ORDER_HEADER returns no records
    when s_in_cust_id IS NOT NULL and h_in_cust_id IS NULL then 2
    --• SUPPLY_ORDER returns no records, but XCOM_ORDER_HEADER returns one record
    when s_in_cust_id IS NULL and h_in_cust_id IS NOT NULL then 3
    ELSE 4 end rule_id,
    i.*
    (SELECT c.in_cust_id,
                 s.*, (with alias of course eg. s_...)
                h.*, (with alias of course  e.g. h_...)
    from  in_cust_id_query c
      LEFT JOIN SUPPLY_ORDER s
       on (c.in_cust_id = s.in_cust_id)
    LEFT JOIN XCOM_ORDER_HEADER h
      on (c.in_cust_id = h.in_cust_id)) i;
    I hope you get it.
    Of course you can do what you probably currently do: open both cursors one by one and compare the results. But when you have 10000 in_cust_ids you start 20000 queries, and you have a lot of code.
    The method I'm showing needs these rules coded as well, but you'll have ONE single query, and that will be as fast as Oracle can be. And I believe it's easier to read too. (But not simple!)
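    As a compact, self-contained illustration of the pattern (all table and column names hypothetical):
    -- innermost level joins the sources, second level derives the rule,
    -- outermost level applies the rule to each field
    SELECT CASE rule_id
             WHEN 1 THEN NULL
             WHEN 2 THEN s_ord_dt
             WHEN 3 THEN h_ord_dt
             ELSE GREATEST(s_ord_dt, h_ord_dt)  -- rule 4: take the later order
           END AS ord_dt
    FROM (SELECT CASE
                   WHEN s_cust_id IS NULL     AND h_cust_id IS NULL     THEN 1
                   WHEN s_cust_id IS NOT NULL AND h_cust_id IS NULL     THEN 2
                   WHEN s_cust_id IS NULL     AND h_cust_id IS NOT NULL THEN 3
                   ELSE 4
                 END AS rule_id,
                 i.*
          FROM (SELECT c.cust_id,
                       s.cust_id s_cust_id, s.ord_dt s_ord_dt,
                       h.cust_id h_cust_id, h.ord_dt h_ord_dt
                FROM customers c
                     LEFT JOIN supply_orders s ON s.cust_id = c.cust_id
                     LEFT JOIN xcom_orders   h ON h.cust_id = c.cust_id) i);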
    If you need more help, try to supply an example we can work on.
    -- andy

  • How to set the tables parameter to a BAPI using adaptive RFC model

    Hi,
    I have a BAPI ZBAPI that has a tables parameter ZPernr of type ZTable with two fields, pernr and name.
    I used the Adaptive RFC model and created model classes for the BAPI.
    My question is: how do I set a list of pernr and name values on the tables parameter in Web Dynpro for Java?
    I have the following methods available
    ZBAPI input = new ZBAPI();
    1.ZTable table = new ZTable();
    table.setPernr(pernr);
    table.setname(name);
    input.addZPernr(table);
    how do I set the table parameter for multiple pernr and name values?
    2.
    add all the pernr and name values to a list and set that list on the input in the following way:
    input.setZPernr(list), where list is of type com.sap.aii.proxy.framework.core.AbstractList
    Which of the two methods mentioned is correct?
    Please let me know the solution
    Thanks
    Bala Duvvuri

    Hi Bala,
    In your code:
    List pernrList = new ArrayList();
    pernrList should be declared as the type of the model class HROBJECT (the structure name in the model; e.g. the node PERFWARN is of type Zhrecmfm_002_Perfrat_Warning_Input).
    Try this:
    HROBJECT  pernrList = new HROBJECT();
    pernrList.setPernr(pernr);
    input.setPernr_List(pernrList);
    // call BAPI execute method
    If you want to pass a list of pernrs, you can do it an alternate way:
    IPrivate<viewname>.I<nodeame>Node prNode = wdContext.node<node>();
    IPrivate<viewname>.I<nodeame>Element prEle;
    for (int i = 0; i < table.size(); i++) {
    HROBJECT pernrList = new HROBJECT();
    pernrList.setPernr(pernr); // set the pernr from the current record
    prEle = prNode.create<nodeame>Element(pernrList);
    prNode.addElement(prEle);
    }
    // call BAPI execute method
    Once I execute the BAPI, how do I iterate through the second table parameter PERNR_PERFRAT to get pernr and the performance rating text?
    Say your node name is  PERNR_PERFRAT
    for (int i = 0; i < wdContext.nodePERNR_PERFRAT().size(); i++) {
    wdContext.nodePERNR_PERFRAT().getPERNR_PERFRATElementAt(i).getPernr();
    wdContext.nodePERNR_PERFRAT().getPERNR_PERFRATElementAt(i).getPerfrat();
    }
    Regards,
    Siva
