Query on DP96 and table COVP

Hi, I'm using table COVP and running transaction DP96 to bill its costs (create a debit memo). In which table can I find whether the entries from table COVP (document number) have already been billed or not? Thanks.
Edited by: Alvin Rosales on Jul 6, 2009 7:43 AM

If the user has been granted select on the view the user does not need to have any privileges on the underlying table(s).
That's kinda the idea. Views can be used to hide certain information from the user.

Similar Messages

  • Performance issue in BI due to direct query on BKPF and BSEG tables

    Hi,
    We had a requirement that the FI document number field should be extracted into BI.
    The following code was written; it has the correct logic, but the performance is bad.
    It fetched just 100 records in more than 4-5 hours.
    The reason is that there is a direct query on the BSEG and BKPF tables (without a WHERE clause).
    Is there any way to improve this code, such as adding the GJAHR field in the WHERE clause? I don't want to change the logic.
    The following is the code:
    WHEN '0CO_OM_CCA_9'." Data Source
        TYPES:BEGIN OF ty_bkpf,
        belnr TYPE bkpf-belnr,
        xblnr TYPE bkpf-xblnr,
        bktxt TYPE bkpf-bktxt,
        awkey TYPE bkpf-awkey,
        bukrs TYPE bkpf-bukrs,
        gjahr TYPE bkpf-gjahr,
        AWTYP TYPE bkpf-AWTYP,
        END OF ty_bkpf.
        TYPES : BEGIN OF ty_bseg1,
        lifnr TYPE bseg-lifnr,
        belnr TYPE bseg-belnr,
        bukrs TYPE bseg-bukrs,
        gjahr TYPE bseg-gjahr,
        END OF ty_bseg1.
        DATA: it_bkpf TYPE STANDARD TABLE OF ty_bkpf,
        wa_bkpf TYPE ty_bkpf,
        it_bseg1 TYPE STANDARD TABLE OF ty_bseg1,
        wa_bseg1 TYPE ty_bseg1,
        l_s_icctrcsta1 TYPE icctrcsta1.
        "Extract structure for Datasoure 0co_om_cca_9.
        DATA: l_awkey TYPE bkpf-awkey.
        DATA: l_gjahr1 TYPE gjahr.
        DATA: len TYPE i,
        l_cnt TYPE i.
        l_cnt = 10.
        tables : covp.
        data : ref_no(20).
        SELECT lifnr
        belnr
        bukrs
        gjahr
        FROM bseg
        INTO TABLE it_bseg1.
        DELETE ADJACENT DUPLICATES FROM it_bseg1 COMPARING belnr gjahr .
        SELECT belnr
        xblnr
        bktxt
        awkey
        bukrs
        gjahr
        AWTYP
        FROM bkpf
        INTO TABLE it_bkpf.
        IF sy-subrc EQ 0.
          CLEAR: l_s_icctrcsta1,
          wa_bkpf,
          l_awkey,
          wa_bseg1.
          LOOP AT c_t_data INTO l_s_icctrcsta1.
            MOVE l_s_icctrcsta1-fiscper(4) TO l_gjahr1.
            SELECT SINGLE aworg awtyp INTO CORRESPONDING FIELDS OF covp FROM covp
              WHERE belnr = l_s_icctrcsta1-belnr.
            IF sy-subrc = 0.
              IF covp-aworg IS INITIAL.
                CONCATENATE l_s_icctrcsta1-refbn '%' INTO ref_no.
                READ TABLE it_bkpf INTO wa_bkpf
                  WITH KEY awkey(10) = l_s_icctrcsta1-refbn
                           awtyp     = covp-awtyp
                           gjahr     = l_gjahr1.
                IF sy-subrc EQ 0.
                  MOVE wa_bkpf-belnr TO l_s_icctrcsta1-zzbelnr.
                  MOVE wa_bkpf-xblnr TO l_s_icctrcsta1-zzxblnr.
                  MOVE wa_bkpf-bktxt TO l_s_icctrcsta1-zzbktxt.
                  MODIFY c_t_data FROM l_s_icctrcsta1.
                  READ TABLE it_bseg1 INTO wa_bseg1
                    WITH KEY belnr = wa_bkpf-belnr
                             bukrs = wa_bkpf-bukrs
                             gjahr = wa_bkpf-gjahr.
                  IF sy-subrc EQ 0.
                    MOVE wa_bseg1-lifnr TO l_s_icctrcsta1-lifnr.
                    MODIFY c_t_data FROM l_s_icctrcsta1.
                    CLEAR: l_s_icctrcsta1, wa_bseg1, l_gjahr1.
                  ENDIF.
                ENDIF.
              ELSE. " AWORG is not blank
                CONCATENATE l_s_icctrcsta1-refbn covp-aworg INTO ref_no.
                READ TABLE it_bkpf INTO wa_bkpf
                  WITH KEY awkey(20) = ref_no
                           awtyp     = covp-awtyp
                           gjahr     = l_gjahr1.
                IF sy-subrc EQ 0.
                  MOVE wa_bkpf-belnr TO l_s_icctrcsta1-zzbelnr.
                  MOVE wa_bkpf-xblnr TO l_s_icctrcsta1-zzxblnr.
                  MOVE wa_bkpf-bktxt TO l_s_icctrcsta1-zzbktxt.
                  MODIFY c_t_data FROM l_s_icctrcsta1.
                  READ TABLE it_bseg1 INTO wa_bseg1
                    WITH KEY belnr = wa_bkpf-belnr
                             bukrs = wa_bkpf-bukrs
                             gjahr = wa_bkpf-gjahr.
                  IF sy-subrc EQ 0.
                    MOVE wa_bseg1-lifnr TO l_s_icctrcsta1-lifnr.
                    MODIFY c_t_data FROM l_s_icctrcsta1.
                    CLEAR: l_s_icctrcsta1, wa_bseg1, l_gjahr1.
                  ENDIF.
                ENDIF.
              ENDIF.
            ENDIF.
            CLEAR: l_s_icctrcsta1.
            CLEAR: covp, ref_no.
          ENDLOOP.
        ENDIF.

    Hello Amruta,
    I was just looking at your coding:
    LOOP AT c_t_data INTO l_s_icctrcsta1.
    MOVE l_s_icctrcsta1-fiscper(4) TO l_gjahr1.
    select single AWORG AWTYP INTO CORRESPONDING FIELDS OF COVP FROM COVP
    WHERE belnr = l_s_icctrcsta1-belnr.
    if sy-subrc = 0.
    if COVP-AWORG is initial.
    concatenate l_s_icctrcsta1-refbn '%' into ref_no.
    READ TABLE it_bkpf INTO wa_bkpf WITH KEY awkey(10) =
    l_s_icctrcsta1-refbn
    awtyp = COVP-AWTYP
    gjahr = l_gjahr1.
    Here you are interested in those BKPF records that are related to the contents of c_t_data internal table.
    I guess that this table does not contain millions of entries. Am I right?
    If yes, then the first step would be to pre-select COVP entries:
    select BELNR AWORG AWTYP into table lt_covp from COVP
    for all entries in c_t_data
    where belnr = c_t_data-belnr.
    sort lt_covp by belnr.
    Once having this data ready, you build an internal table for BKPF selection:
    LOOP AT c_t_data INTO l_s_icctrcsta1.
      clear ls_bkpf_sel.
      ls_bkpf_sel-awkey(10) = l_s_icctrcsta1-refbn.
      read table lt_covp with key belnr = l_s_icctrcsta1-belnr binary search.
      if sy-subrc = 0.
        ls_bkpf_sel-awtyp = lt_covp-awtyp.
      endif.
      ls_bkpf_sel-gjahr = l_s_icctrcsta1-fiscper(4).
      insert ls_bkpf_sel into table lt_bkpf_sel.
    ENDLOOP.
    Now you have all necessary info to read BKPF:
    SELECT
    belnr
    xblnr
    bktxt
    awkey
    bukrs
    gjahr
    AWTYP
    FROM bkpf
    INTO TABLE it_bkpf
    for all entries in lt_bkpf_sel
    WHERE
      awkey = lt_bkpf_sel-awkey and
      awtyp = lt_bkpf_sel-awtyp and
      gjahr = lt_bkpf_sel-gjahr.
    Then you can access BSEG with the bukrs, belnr and gjahr from the selected BKPF entries. This will be fast.
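    A minimal sketch of that last step (assuming it_bkpf and it_bseg1 are declared as in the original code above, and remembering that FOR ALL ENTRIES must not be run against an empty driver table):
    IF it_bkpf IS NOT INITIAL.
      SELECT lifnr belnr bukrs gjahr
        FROM bseg
        INTO TABLE it_bseg1
        FOR ALL ENTRIES IN it_bkpf
        WHERE bukrs = it_bkpf-bukrs
          AND belnr = it_bkpf-belnr
          AND gjahr = it_bkpf-gjahr.
    ENDIF.
    This replaces the unrestricted SELECT on BSEG with one that only touches the documents already identified in BKPF.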
    Moreover I would even try to make a join on DB level. But first try this solution.
    Regards,
      Yuri

  • Infoset query logical database and transparent table

    Hi!
    We have an infoset with the data source logical database=PNP.
    We get some fields from the infotype 0768, P0768-PERNR, P0768-BEGDA, etc.
    Now we need to add another table to make a join between infotype 0768 and table T5F99SE.
    For instance, in infotype 0768 I have one record with the fields PERNR and BEGDA, and in T5F99SE I have 3 records related to that single record of infotype 0768; the fields of the table are PERNR, BEGDA, ACTDT and ADDAT.
    The fields values in the example can be:
    Infotype 0768: PERNR=00101800, BEGDA=20110401, DICOT=20, BACHE=1200
    Table T5F99SE: record 1 PERNR=00101800, BEGDA=20110401, ACTDT=20110401, ADDAT=PB    E
                   record 2 PERNR=00101800, BEGDA=20110101, ACTDT=20110405, ADDAT=PC    E01
                   record 3 PERNR=00101800, BEGDA=20110401, ACTDT=20110409, ADDAT=PA    E
    The result we want to get with the infoset query is:
    PERNR     BEGDA     DICOT  BACHE  ADDAT
    00101800  20110101  20     1200   PB    E
    00101800  20110101  20     1200   PC    E01
    00101800  20110101  20     1200   PA    E
    I would like to get the fields of the infotype and some fields of the table T5F99SE.
    Is it possible to do this with ABAP, modifying an infoset that already exists and adding the fields of the transparent table?
    What should I do?
    Kind regards,
    Julian.

    My guess is that it would not be possible to include a transparent table into the LDBs PNP and PNPCE. Would need input from a technical expert there.
    However, instead of using the LDB, why don't you explore just using a direct table join? You may need to join PA0000, PA0001, PA0002 along with PA0768 and your other tables. An infoset can then be created on this table join.
    To go to the mode where you can create the table join, in your infoset transactions, choose 'Table join' instead of 'LDB'.
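    For illustration, a rough sketch of the kind of join such an infoset would effectively perform. The PA0768 and T5F99SE field names (DICOT, BACHE, ADDAT) and the join condition on PERNR/BEGDA are taken from the post, so treat them as assumptions:
    TYPES: BEGIN OF ty_result,
             pernr TYPE pa0768-pernr,
             begda TYPE pa0768-begda,
             dicot TYPE pa0768-dicot,   " field name taken from the post
             bache TYPE pa0768-bache,   " field name taken from the post
             addat TYPE t5f99se-addat,  " field name taken from the post
           END OF ty_result.
    DATA lt_result TYPE STANDARD TABLE OF ty_result.
    SELECT p~pernr p~begda p~dicot p~bache t~addat
      INTO CORRESPONDING FIELDS OF TABLE lt_result
      FROM pa0768 AS p
      INNER JOIN t5f99se AS t
        ON  t~pernr = p~pernr
        AND t~begda = p~begda.
    Given the example output in the question, joining on PERNR alone (dropping the BEGDA condition) may actually be what is needed.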

  • Slow query due to large table and full table scan

    Hi,
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
    We have a few queries which take a lot of time to execute. Not always, though; it seems that when the load is high the queries tend
    to take much longer. The average time may be 1 or 2 seconds, but the maximum can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
    This is an example query:
    SELECT table1.column, table2.column, table3.column
    FROM table1
    JOIN table2 ON table1.table2Id = table2.id
    LEFT JOIN table3 ON table2.table3id = table3.id
    WHERE table1.id IN (
      SELECT id
      FROM (
        SELECT a.*, rownum rnum
        FROM (
          SELECT table1.id
          FROM table1, table2, table3
          WHERE table1.table2id = table2.id
            AND table2.table3id IS NULL OR table2.table3id = :table3IdParameter
        ) a
        WHERE rownum <= :end
      )
      WHERE rnum >= :start
    )
    Table1 and table2 are the large tables in this example. This query starts two full table scans on those tables.
    Can we avoid this? We have, what we think are, the correct indexes.
    /best regards, Håkan

    >
    Hi Håkan - welcome to the forum.
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
    We have a few queries which take a lot of time to execute. Not always though, it that seems when load is high the queries tend
    to take much longer. Average time may be 1 or 2 seconds, but maxtime can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
    This is an example query:
    Firstly, please read the forum FAQ - top right of page.
    Please format your SQL using tags [code /code].
    In order to help us to help you.
    Please post table structures - relevant (i.e. joined, FK, PK fields) only - in the following form - note the use of code tags - so we can just run the table create script:
    CREATE TABLE table1 (
      Field1  Type1,
      Field2  Type2,
      FieldN  TypeN
    );
    Then give us some table data - not 100's of records - just enough, in the form:
    INSERT INTO Table1 VALUES(Field1, Field2.... FieldN);
    Please post the EXPLAIN PLAN - again with tags.
    HTH,
    Paul...

  • Query on Creating and Populating I$ table on different condition

    Hi,
    I have a query on creating and populating the I$ table under different conditions. Under which conditions is the I$ table created? The conditions are mentioned below:
    1)*source and staging area* are on same server(i.e target is on another server)
    2)*staging area and Target* are on same server(i.e source is on another server)
    3)*source,staging area and Target* are on *3 different* server
    4)source,staging area and Target are on same server
    Thanks

    Your question is not entirely clear to me, but I will try my best to answer it.
    In all of the above scenarios the I$ table will be created.
    If staging is the same as the target (one database, one user), then all temp tables will be created under this user.
    If staging is different from the target (one database, two users A and B), then all temp tables will be created under user A (let's say) and the data will be inserted into the target table that is present under user B.
    If staging is different from the target (two databases, two users A1 and A2 - not a recommended architecture), then all temp tables will be created under user A1 (database A1) and the data will be inserted into the target table that is present under user A2 (database A2).
    If source, staging and target are all under one database, then no LKM is required; an IKM is sufficient to load the data into the target. Specifically for this you can see one example given by Craig.
    http://s3.amazonaws.com/Ora/ODI-Simple_SELECT_and_INSERT-interface.swf
    Thanks.

  • Execute query in background and write it's content in transparent table

    Hi,
    Is there an easy way to execute a query in the background and write its contents into a transparent table?
    Thanks,

    Hello,
    Yes, you can do this in so many ways:
    Go to RSCRM_REPORT -> select your query -> click on the Extract button and set the parameters (table or file), then execute.
    Create an APD in RSANWB, select your query, and store the result in an ODS table or cube.
    Create a reporting agent and store the result.
    Regards,
    Pavan.

  • Query results and table contents does not match

    Hi Experts,
    This is regarding a simple SELECT query that is not working.
    POSNR has a conversion exit at the data element level.
    After using the conversion exit, I am putting a select on table PRTE by passing the converted POSNR.
    CODE given below:
    CALL FUNCTION 'CONVERSION_EXIT_ABPSP_OUTPUT'
      EXPORTING
        input  = w_anla-posnr
      IMPORTING
        output = l_posnr.
    SELECT SINGLE pstrt pende
      FROM prte INTO w_prte
      WHERE posnr = l_posnr.
    The results fetched by the query (the two dates PSTRT and PENDE) differ from the table entry.
    Kindly guide me on the same.
    Thanks in advance..
    Deepak
    Edited by: Deepak  KM on Sep 25, 2008 10:00 AM

    Hi, in the FM use the same variable for input and output, either
    w_anla-posnr or
    l_posnr.

  • Odd Behaviour: Query and Table

    I have odd occurrences within my code and need an explanation
    to one and help with the other.
    First of all, I have a query within which I am using the group attribute, and then I am using a counter to control the behaviour of the output within a table. The problem is that when I use the currentrow attribute with my query, which returns seven fields with one of them being used for the grouping, the currentrow counter returns 1 through 6 for the non-grouped fields; yet when I use a counter to count the rows and use a modulus operator to create a conditional statement, I get the right result with MOD 7 rather than MOD 6. Why is ColdFusion counting seven fields (when I use the MOD operator) even though currentrow counts 6? (Code is attached and numbered 1.)
    Secondly, I am outputting the results of my query using two different tables, as the information for the header is not easily included within my query. So I have two queries, one displaying the header and the other the query results.
    Now, because they are related, I wanted to align the two tables one on top of the other (this was successful). My problem is that although I define the same table width, and subsequently the same width for the respective fields, in both tables (the table width is 980 pixels, with the first column being assigned 182 pixels and the rest of the columns 136 pixels each), the output shows that the tables and the columns are not perfectly aligned and they do not have the same size. Why is this when I have hard-coded the width of the columns? How do I get the two tables to align perfectly? (Code for the tables is numbered 2.) TIA

    Go to SQ03. Check the Authorizations for both the user groups. Compare and based on the differences find out what is causing the error.
    SQ03 --> Enter user group ..> click Assign infoset to user group...
    Assign tht infoset to user BB.

  • Query on ABAP List Viewer and Table Control?

    Hi all,
    I was trying to solve the exercises in these areas, but was unable to do so, as some of the concepts were not clear to me and I'm new to this field.
    So can anyone help me out by giving me notes or attachments on the ABAP List Viewer (ALV) and Table Control?
    My ID: [email protected]
    Waiting for a reply...
    A New Entrant in ABAP.
    Message was edited by:
            saikumar b

    Hi saikumar,
    I have just started working with ABAP too. All the links I know:
    http://www.erpgenie.com/abap/controls/alvgrid.htm
    http://www.abapfans.hpg.ig.com.br/links.htm
    http://abap4.tripod.com/index..html
    http://paginas.terra.com.br/educacao/abap/
    http://www.sdn.sap.com/
    http://www.sap-img.com/
    http://www.planetsap.com/Tips_and_Tricks.htm
    http://www.abap4.com.br/
    http://www.erpgenie.com/sap/abap/index.htm
    Good luck
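    Since the question is about getting started with ALV, here is a minimal sketch of a classic ALV grid call using the standard function module REUSE_ALV_GRID_DISPLAY. The program name and the use of the SFLIGHT demo table are purely illustrative:
    REPORT z_alv_demo.
    DATA lt_sflight TYPE STANDARD TABLE OF sflight.
    START-OF-SELECTION.
      " Fetch a few demo rows.
      SELECT * FROM sflight INTO TABLE lt_sflight UP TO 20 ROWS.
      " Display them in an ALV grid; the field catalog is derived
      " automatically from the DDIC structure SFLIGHT.
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
        EXPORTING
          i_callback_program = sy-repid
          i_structure_name   = 'SFLIGHT'
        TABLES
          t_outtab           = lt_sflight
        EXCEPTIONS
          program_error      = 1
          OTHERS             = 2.
      IF sy-subrc <> 0.
        MESSAGE 'ALV display failed' TYPE 'I'.
      ENDIF.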

  • Using case when statement in the select query to create physical table

    Hello,
    I have a requirement wherein I have to execute a CASE WHEN statement with a session variable while creating a physical table using a select query. Let me explain with an example.
    I have a physical table based on a select statement with one column.
    SELECT 'VALUEOF(NQ_SESSION.NAME_PARAMETER)' AS NAME_PARAMETER FROM DUAL. Let me call this table as the NAME_PARAMETER table.
    I also have a customer table.
    In my dashboard that has two pages, Page 1 contains a table with the customer table with column navigation to my second dashboard page.
    In my second dashboard page I created a dashboard report based on NAME_PARAMETER table and a prompt based on customer table that sets the NAME_ PARAMETER request variable.
    EXECUTION
    When I click on a particular customer, the prompt sets the variable NAME_PARAMETER and the NAME_PARAMETER table shows the appropriate customer.
    Everything works as expected!
    Now I created another table, called NAME_PARAMETER1, with a little modification to the earlier table. The query is as follows:
    SELECT CASE WHEN 'VALUEOF(NQ_SESSION.NAME_PARAMETER)'='Customer 1' THEN 'TEST_MART1' ELSE TEST_MART2' END AS NAME_PARAMETER
    FROM DUAL
    Now I pull in this table into the second dashboard page along with the NAME_PARAMETER table report.
    Surprisingly, the NAME_PARAMETER table report executes as is, but the other report based on the NAME_PARAMETER1 table fails with the following error.
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 16001] ODBC error state: S1000 code: 1756 message: [Oracle][ODBC][Ora]ORA-01756: quoted string not properly terminated. [nQSError: 16014] SQL statement preparation failed. (HY000)
    SQL Issued: SET VARIABLE NAME_PARAMETER='Novartis';SELECT NAME_PARAMETER.NAME_PARAMETER saw_0 FROM POC_ONE_DOT_TWO ORDER BY saw_0
    If anyone has any explanation to this error and how we can achieve the same, please help.
    Thanks.

    Hello,
    Updates :) sorry, the error was a stupid one. I resolved it and got stuck at my next step.
    I am creating a physical table using a select query, but I am trying to obtain the name of the table dynamically.
    Here is what I am trying to do. The select query of the physical table is as follows:
    SELECT CUSTOMER_ID AS CUSTOMER_ID, CUSTOMER_NAME AS CUSTOMER_NAME FROM 'VALUEOF(NQ_SESSION.SCHEMA_NAME)'.CUSTOMER.
    The idea behind this is to obtain the data from the same table from different schemas dynamically, based on a session variable. Please let me know if there is a way to achieve this; if not, please let me know if this can be achieved by any other method in OBIEE.
    Thanks.

  • Best practice for a same query against 2 different tables

    Hello all,
    I want to extract info about tablespaces storage, both permanent and temporary. For that I use 2 different cursors that do exactly the same query but against a different table (dba_data_files and dba_temp_files).
    CURSOR permanentTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_data_files
    WHERE tablespace_name = tablespaceName;
    CURSOR temporaryTBSStorageInfo (tablespaceName VARCHAR2) IS
    SELECT file_name, bytes, autoextensible, maxbytes, increment_by
    FROM dba_temp_files
    WHERE tablespace_name = tablespaceName;
    First, I'm bothered that I have to use 2 cursors to execute the same query against 2 different tables. Is there another way around this?
    Then I fetch the results of these cursors in 2 different loops, because I didn't find a way to call the cursors dynamically. I am looking for best practice here, knowing that I will do the same parsing on the results of the 2 cursors.
    Thank you,

    Hi
    Check whether the below query is helpful or not
    select fs.tablespace_name "Tablespace",
           fs.tempspace       "Temp MB",
           df.totalspace      "Total MB"
    from   (select tablespace_name,
                   round(sum(bytes) / 1048576) TotalSpace
            from   dba_data_files
            group  by tablespace_name) df,
           (select tablespace_name,
                   round(sum(bytes) / 1048576) tempSpace
            from   dba_temp_files
            group  by tablespace_name) fs
    where  df.tablespace_name = fs.tablespace_name;
    Thanks

  • What is difference between using interface as source and table as source?

    I am working on a batch flow which needs several steps to populate data from source to target. For example, I need 5 interfaces to finish the final data loading. I can either use an interface or a temporary table as the source and target for interfaces 2, 3, and 4. It looks like both cases will use tables, whether I use an interface or a temporary table. So my question is whether there is any difference between these two (using an interface as the source or using a temporary table as the source)?
    Thanks

    If you use a table as the source for the intermediate process, it will create a physical temporary table in your work repository (depending on your choice) and populate the data into that table. If you use an interface as the source, it will just create a subquery instead of a temporary table.
    Thanks
    nidhi

  • Mapping a BW query to a Z table in SAP ECC

    Hi BW Gurus,
    I have a requirement as below and I have a solution, but I want to know other alternatives for the same, so please let me know the alternatives.
    A BI query would be created from an existing APO BI cube.
    A Z program would be created in SAP
    A Z table would be created in SAP.
    The Z program would map the fields in the BI cube vis-a-vis the fields in the Z table.
    The Z program would execute the BI query through an RFC connection, pull the data and store the data in the Z table using the mapping done previously.
    This execution would be an ad hoc one - not a periodic job.
    Though I have specified about executing the query through RFC option, we are open to any possible suggestion. All we need is to bring the data in the APO BI cube to the SAP Z table.
    Please let me know for any other possible alternatives.
    Thanks,
    Shailaja

    Hi Shailaja
    Just a thought on this issue. It's quite simple; there is no need to write a huge ABAP program.
    Step 1: Create an APD in RSANWB, take the query as input and a direct-access DSO as its target, and in the mapping take the fields you require and filter them through the filter option in the APD.
    Step 2: Create an OHB (open hub / InfoSpoke) which will transfer this data to your R/3 table.
    Step 3: Place the step 1 and step 2 processes together in a process chain, and in the start process use an event as the triggering medium.
    Step 4: Trigger the cross-system event through the "BP_RAISE_EVENT" FM, i.e. develop a program in such a way that the moment the user runs it in R/3, it automatically triggers the event in the BW system and performs step 1 and step 2 (see the sketch after this reply).
    The challenge here is: 1) if you need delta data, then you have to implement a BAdI at the InfoSpoke level.
    In your case this challenge does not apply, as you specified it is an ad hoc run.
    Hope this clears it up a little!
    Thanks
    K M R
    "Impossible Means I M Possible"
    Winners Don't Do Different things,They Do things Differently...!.
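    A minimal sketch of what the triggering report on the R/3 side could look like. Note that the standard function module for raising a background event is usually BP_EVENT_RAISE (the reply above calls it BP_RAISE_EVENT), and the RFC destination 'BWCLNT100' and event name 'Z_BW_APD_TRIGGER' are purely illustrative assumptions:
    REPORT z_trigger_bw_event.
    " Raise an event in the BW system via RFC so that the process chain
    " (whose start process waits for this event) begins to run.
    CALL FUNCTION 'BP_EVENT_RAISE'
      DESTINATION 'BWCLNT100'                       " assumed RFC destination to BW
      EXPORTING
        eventid                = 'Z_BW_APD_TRIGGER' " assumed event name defined in BW
      EXCEPTIONS
        bad_eventid            = 1
        eventid_does_not_exist = 2
        eventid_missing        = 3
        raise_failed           = 4
        communication_failure  = 5
        system_failure         = 6
        OTHERS                 = 7.
    IF sy-subrc <> 0.
      MESSAGE 'Could not raise the BW event' TYPE 'E'.
    ENDIF.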

  • Problem with Cast and table operator

    Hi,
    I am using the TABLE and CAST operators to fetch the data from a collection using a ref cursor, as follows:
    OPEN test_cur for
    SELECT first_name, Last_name from table(cast( V_Test_collection as Test_Array));
    This works, but when I tried to fetch the data using the following query and ref cursor, I received ORA-00604 and ORA-01003.
    I used a string containing the query, as follows:
    V_SQL := 'SELECT first_name, Last_name from table(cast( V_Test_collection as Test_Array))';
    OPEN test_cur for V_SQL;
    I want to pass this V_SQL variable to another procedure that will use it to apply some business logic. Please help me figure out how to resolve this problem.

    Hi, the current procedure is as follows, but as per a new requirement I have to pass the processed data from this procedure to a common procedure that accepts a string containing the query (say IN_QUERY) and applies the business logic by fetching the data from that query (IN_QUERY).
    CREATE OR REPLACE PROCEDURE GET_USER_INFO_PRC(USER_ID          IN NUMBER,
                                                  TEST_CUR         OUT COMMOM_PKG.OUT_REF_CUR,
                                                  O_ERROR_CODE     OUT NUMBER,
                                                  O_ERROR_LOCATION OUT VARCHAR2,
                                                  O_ERROR_MESSAGE  OUT VARCHAR2) IS
      V_TEST_COLLECTION COMMOM_PKG.WI_INS_SEARCH_ARR_OBJ := WI_INS_SEARCH_ARR_OBJ();
    BEGIN
      -- Some business logic is applied on V_TEST_COLLECTION to fetch the data in this block
      OPEN TEST_CUR FOR
        SELECT FIRST_NAME, LAST_NAME
        FROM TABLE(CAST(V_TEST_COLLECTION AS TEST_ARRAY));
      -- Exception handler
    END;

  • Storage Location Wise Stock Value field and Table

    Hi ,
    Please let me know the field and table for the storage-location-wise stock value.
    Regards
    Suresh

    Hi Suresh,
    As per my understanding it is not possible from a single table.
    If this is required for a customized Z report, then use this logic; it may give the correct information.
    Use the combination of MARD and MBEW: from MBEW you can get the value per base unit of measure, then multiply that value by the storage location stock (see the sketch at the end of this reply).
    It may be useful to you; in the meantime I will try to find some other option.
    Please revert if you have any query.
    Regards
    Durga Sana
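    As a rough illustration of the logic described above - a sketch only; it assumes that the valuation area equals the plant and uses MBEW-SALK3 / MBEW-LBKUM as the value per base unit of measure, which you would need to verify for your valuation setup (split valuation is ignored here):
    TYPES: BEGIN OF ty_sloc_value,
             matnr TYPE mard-matnr,
             werks TYPE mard-werks,
             lgort TYPE mard-lgort,
             labst TYPE mard-labst,      " unrestricted-use storage location stock
             salk3 TYPE mbew-salk3,      " total stock value of the material
             lbkum TYPE mbew-lbkum,      " total valuated stock
             value TYPE p DECIMALS 2,    " calculated storage location value
           END OF ty_sloc_value.
    DATA: lt_sloc TYPE STANDARD TABLE OF ty_sloc_value,
          ls_sloc TYPE ty_sloc_value.
    " Read storage location stock together with the valuation data.
    " Assumption: valuation area (MBEW-BWKEY) = plant (MARD-WERKS).
    SELECT d~matnr d~werks d~lgort d~labst b~salk3 b~lbkum
      INTO CORRESPONDING FIELDS OF TABLE lt_sloc
      FROM mard AS d
      INNER JOIN mbew AS b
        ON  b~matnr = d~matnr
        AND b~bwkey = d~werks.
    LOOP AT lt_sloc INTO ls_sloc.
      IF ls_sloc-lbkum > 0.
        " value per base unit * storage location stock
        ls_sloc-value = ls_sloc-salk3 / ls_sloc-lbkum * ls_sloc-labst.
      ENDIF.
      MODIFY lt_sloc FROM ls_sloc.
    ENDLOOP.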
