Query based on main table and audit table

Hi,
I have created auditing on some tables. Values may or may not change; when they do change, the change should be stored in the audit table.
I want to be able to get the values in table a as of any point in time, like slowly changing dimensions in a data warehouse.
I am trying to write a query based on table a and aud_a that returns point-in-time values, i.e. the values as they were at any date in the past.
Something like
SELECT *
FROM a (table_name)
WHERE effective_from <= $DATE_TO_QUERY
AND effective_to > $DATE_TO_QUERY
How can I write this kind of query?
Please help. The structures of table a, the audit table aud_a, and the trigger aud_tg_a are given below.
Main table a:
create table a
( val1 number,
val2 number,
updated_by varchar2(30),
date_updated date);
Audit table aud_a:
create table aud_a
( "AUDIT_SEQ" NUMBER,
"AUDIT_TRAN_ID" NUMBER,
"AUDIT_PROG_ID" VARCHAR2(30 BYTE),
"AUDIT_TERMINAL" VARCHAR2(16 BYTE),
"AUDIT_REASON" VARCHAR2(30 BYTE),
"AUDIT_ACTION" CHAR(1 BYTE),
"AUDIT_ACTION_BY" VARCHAR2(20 BYTE),
"AUDIT_ACTION_DT" DATE,
val1 number,
val2 number,
updated_by varchar2(30),
date_updated date);
Trigger on table a to populate aud_a:
CREATE OR REPLACE TRIGGER aud_tg_a
AFTER INSERT OR DELETE OR UPDATE ON a
FOR EACH ROW
DECLARE
  v_time_now DATE;
  v_terminal VARCHAR2(16);
  v_tran_id  NUMBER;
  v_prog_id  VARCHAR2(30);
  v_reason   VARCHAR2(30);
BEGIN
  v_time_now := SYSDATE;
  v_terminal := USERENV('TERMINAL');
  v_tran_id  := 1;
  v_prog_id  := 'test';
  v_reason   := 'AUDIT';
  IF inserting THEN
    -- record the newly inserted values
    INSERT INTO aud_a
      (audit_seq, audit_tran_id, audit_prog_id, audit_reason, audit_terminal,
       audit_action_by, audit_action_dt, audit_action,
       val1, val2, updated_by, date_updated)
    VALUES
      (s_audit_no.nextval, v_tran_id, v_prog_id, v_reason, v_terminal,
       USER, v_time_now, 'I',
       :new.val1, :new.val2, :new.updated_by, :new.date_updated);
  ELSIF deleting THEN
    -- record the values of the row being deleted
    INSERT INTO aud_a
      (audit_seq, audit_tran_id, audit_prog_id, audit_reason, audit_terminal,
       audit_action_by, audit_action_dt, audit_action,
       val1, val2, updated_by, date_updated)
    VALUES
      (s_audit_no.nextval, v_tran_id, v_prog_id, v_reason, v_terminal,
       USER, v_time_now, 'D',
       :old.val1, :old.val2, :old.updated_by, :old.date_updated);
  ELSIF updating THEN
    -- record the values as they stand after the update
    INSERT INTO aud_a
      (audit_seq, audit_tran_id, audit_prog_id, audit_reason, audit_terminal,
       audit_action_by, audit_action_dt, audit_action,
       val1, val2, updated_by, date_updated)
    VALUES
      (s_audit_no.nextval, v_tran_id, v_prog_id, v_reason, v_terminal,
       USER, v_time_now, 'U',
       :new.val1, :new.val2, :new.updated_by, :new.date_updated);
  END IF;
END;
/
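A minimal sketch of the kind of point-in-time query being asked for, assuming each logical row in a can be identified by some key column (called id below, which is not in the sample DDL) and that each audit row holds the values that became current at AUDIT_ACTION_DT ('I' and 'U' rows store the :new values, 'D' rows mark a deletion):
-- Reconstruct table a as of :date_to_query from aud_a.
-- id is a hypothetical key column; effective_from/effective_to are derived
-- from the next audit row for the same key, not stored.
SELECT id, val1, val2, updated_by, date_updated
FROM (
  SELECT id, val1, val2, updated_by, date_updated,
         audit_action,
         audit_action_dt AS effective_from,
         LEAD(audit_action_dt) OVER (PARTITION BY id
                                     ORDER BY audit_action_dt, audit_seq) AS effective_to
  FROM   aud_a
)
WHERE  effective_from <= :date_to_query
AND    (effective_to > :date_to_query OR effective_to IS NULL)
AND    audit_action <> 'D';
Rows whose 'D' audit record falls before :date_to_query drop out automatically, because the delete row terminates the previous version and is itself filtered out.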
-------------------------

Hi hoek,
I am not able to use Oracle's built-in audit functionality because I need to trap the changes in particular tables and then rebuild the query if required.
Thanks for your suggestion though.
Regards,
Milind

Similar Messages

  • Performance of my query based on cube ? and ods?

    Hi all,
    How do I identify whether my query performs better based on a cube or on an ODS? I have a requirement for a one-time flat-file extraction, and the number of records is small. I need to work out whether my query will be faster on the cube or on the ODS.
    Can anyone let me know how to measure the performance of my query on a cube versus an ODS, and how to find out which one will be faster? I need to explain the whole process of either loading the data directly into the ODS and reporting from there, or loading it directly into the cube and reporting from the cube.
    Thanks,
    haritha

    Hi,
    An ODS is two-dimensional (flat), so avoid reporting on an ODS.
    A cube is multidimensional; for analysis purposes, reporting should be done on the cube.
    Records in an ODS are overwritten, whereas in a cube records are aggregated.
    You can also compress the cube, which improves query performance, so data retrieval from a cube is faster.
    Thanks

  • How to layout a subreport based on main report and another subreport!

    Hello all,
    I have an issue with capturing a value at run time and looping through groups in Crystal Reports. My project is complicated (the main report and subreport use stored procedures with parameters as the data source, and a lot of formula fields are passed between the main report and the subreport), but I will try to explain it as a simple scenario as follows:
    School 1
    Main report layout: (Group by Class)
    ===============================================
    Class A
    Sub report: (this result comes from the stored procedure at run time and has only two rows)
    Boy : 200                  
    Girl : 150
    ================================================
    Class B
    Sub report
    Boy: 50
    Girl: 100
    =================================================
    Class C
    Sub report 
    Boy: 259
    Girl: 80
    The layout above is what I have already done on the reports. My question is how to produce another report that uses those values and has the following layout:
    School Summary
    Class  |  Boys  |  Girls  |  Student
    A        |   200    |  150   |     350
    B        |     50    |  100   |     150
    C        |   260    |    80   |     340
    Please give me some ideas on how to solve the above problem. I know that Crystal Reports can use shared variables via formula fields. How do I capture the specific value at run time? Is it possible to make this work in Crystal Reports, or do I have to write a stored procedure in the backend? Thanks in advance.

    Hi,
    Looking at your report layout, for each class group you have a boys count and a girls count coming from the subreport.
    I think you placed the subreport in the Class group and linked it on Class. If that is the case, you already have all the information in the subreport itself.
    In the subreport, create a running total with shared variables and bring this value into the main report to display at the end of the report.
    E.g., to get Class from the subreport to the main report:
    Whileprintingrecords;
    Shared stringvar class;
    class:=classchar(13);
    Place this formula in your sub report group and create one more formula
    Whileprintingrecords;
    Shared stringvar class;
    class;
    Place this formula in your main report's report footer; this will bring your classes into the main report. Follow the same process for the boys and girls numbers.
    Thanks,
    Sastry

  • Powerpivot filter query based on another table's visible results?

    Excel 2010 x32 on Win 7 x64
    I have multiple tables coming into Powerpivot via SQL connection. They have some relationships pre-defined from the source.
    I need to reduce the amount of data I'm bringing in for my testing. One of the tables has great granularity, containing every event in the database. One field in this data is "Event Type".
    A separate table has a short list of the event types of interest.
    I'd like to filter the first table's data pull (SQL refresh) to only include the event types that are listed (and visible) in the second table, in addition to an existing date range filter that is already in place. Ultimately my goal is to widen
    the date range I can pull in before hitting Excel's memory limits, by eliminating the events I don't care about.
    Currently I'm using a SQL query to pull in the granular data;
    SELECT
      [Fact RawData].*
    FROM
      [Fact RawData]
    WHERE
      [Fact RawData].[Event Date] >= N'2014-06-01T00:00:00'  
    How would I adjust this to also say "only where [Fact RawData].[Event Type] IN {a column in a data pull that is already in powerpivot}"
    and how will that work under a "refresh all" scenario, where I would need the event table to update before this SQL is executed each time?
    Many thanks!
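    A hedged sketch of the WHERE clause being asked about, assuming the short list of event types also exists as a table on the same SQL Server (dbo.EventTypeFilter below is a made-up name; the source query cannot reference a PowerPivot table directly):
    SELECT
      [Fact RawData].*
    FROM
      [Fact RawData]
    WHERE
      [Fact RawData].[Event Date] >= N'2014-06-01T00:00:00'
      -- keep only the event types listed in the server-side filter table
      AND [Fact RawData].[Event Type] IN
          (SELECT f.[Event Type] FROM dbo.EventTypeFilter AS f)
    Under "Refresh All" this runs against whatever is in the filter table at that moment, so the list would have to be maintained on the SQL side rather than inside PowerPivot.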

    If I understand correctly, these articles indicate that I can apply filters during data import, but I'm not clear how rows can be filtered during import based on another PowerPivot table's results.
    From the first link:
    "For data feeds, you can only change the columns that are imported. You can't filter rows by values unless the source of the data feed is a report, and the report is parameterized."
    So I guess the clarification of my original question is: how do I create a parameterized report based on the data in another PowerPivot table, and also ensure that the parameterized report is executed /after/ the source PowerPivot table is refreshed, so that the proper row filtering is applied?
    Simplified example:
    Table 1 = list of all physicians who have ever had a patient in a large hospital system. A filter applied when bringing results into PowerPivot limits them to physicians from a target physician group, clinical specialty, or other filter based on reporting needs.
    Table 2 = anonymized records for all patients, with the physician listed in each record. Filtered by time period when bringing into PowerPivot.
    I could bring back the whole patient table, but it is so large that Excel runs out of resources unless my time period is tiny. If I can limit the returned rows from Table 2 based on the current list of physicians shown in Table 1, then I will have a much smaller data set and can expand the time period filter to be more meaningful and make sure all the target records are brought back, without having to run multiple subsets of physicians or time and then merge/remove duplicate records.
    Thank you for any advice/URLs/etc.

  • Snapshot query based on local table...

    Can anyone help me determine the proper syntax for a snapshot query that will pull records based on a previously synced table? In other words, I want to get a subset of data into table2 based on what was just synced into table1, where the table1 snapshot contained a WHERE clause with a :parameter.
    I thought the query would look something like this:
    select * from SERVER_SCHEMA.table2
    where
    column1 In (select column1 from &WTG_SCHEMA..table1);
    but I get an error when I try to sync. table1 has a weight of 10 and table2 has a weight of 20.
    TIA,
    Scott

    Use the snapshot SQL you used for the first table within the snapshot SQL for the 2nd table,
    i.e.:
    1st table snapshot:
    select * from employees e where e.salary > 1
    2nd table snapshot
    select * from departments d where d.dept_id IN (
    select distinct e.dept_id from employees e where e.salary > 1)
    OR
    select d.*
    from departments d, employees e
    where d.dept_id = e.dept_id
    and e.salary > 1

  • There are more than 1000 users using query based on sales group and sales ?

    Hi all,
    For a project, the sales group field needs to be protected with an authorization object. But if we make the object authorization-relevant and move it to production, all of the other 1000 users are affected by this change, since they don't have access to it.
    Can anyone let me know how to handle this?
    Let me explain in detail: there are different markets such as India, China, Hong Kong and South East Asia.
    For a particular project, only the South East Asia people need authorization on sales group, but there are 500 users in South East Asia, so once the change is moved it will affect the existing users too.
    So can anyone advise me how to handle this?
    Thanks

    Hi Pooja,
    Create roles according to the authorizations you have to give to the users.
    Create user groups for the different users depending on the countries they belong to; as you mentioned, the users belong to India, China, Hong Kong and South East Asia.
    E.g., create one role and give the authorization object to the South East Asia users. If you want to give a particular authorization to only one user among the 500 users, create two roles with different authorizations and assign them accordingly.
    Regards,
    Chitra

  • Query to display PO number and wip_entity_name

    Hi,
    I have a requirement to add the following to an existing query (based on standard tables):
    SELECT
         MMT.INVENTORY_ITEM_ID          DT_INVENTORY_ITEM_ID
    ,     MMT.ORGANIZATION_ID          DT_ORGANIZATION_ID
    ,     MMT.TRANSACTION_DATE          DT_TRANSACTION_DATE
    ,     MMT.TRANSACTION_SOURCE_ID     DT_VOUCHER_NO
    ,     NVL(MMT.TRANSACTION_COST,0)     DT_TRANSACTION_COST
    ,     MTT.TRANSACTION_TYPE_NAME     DT_TRANSACTION_TYPE_NAME
    ,     NVL(MMT.NEW_COST,0)          DT_NEW_COST
    ,     DECODE((MMT.PRIMARY_QUANTITY/ABS(MMT.PRIMARY_QUANTITY)),1,MMT.PRIMARY_QUANTITY,0)
                             DT_QTY_IN
    ,     DECODE((MMT.PRIMARY_QUANTITY/ABS(MMT.PRIMARY_QUANTITY)),-1,MMT.PRIMARY_QUANTITY,0) * -1
                             DT_QTY_OUT
    FROM
         MTL_MATERIAL_TRANSACTIONS      MMT
    ,     MTL_TRANSACTION_TYPES           MTT
    WHERE
         MMT.PRIMARY_QUANTITY <> 0
    AND      MMT.TRANSACTION_TYPE_ID  NOT IN (101,120,80)
    /*   ABOVE CONDITION TO EXCLUDE OPENING BALANCE TRANSACTIONS
         AND AVERAGE COST UPDATE */
    AND      MMT.SUBINVENTORY_CODE <> 'OM'
    AND     MMT.TRANSACTION_TYPE_ID = MTT.TRANSACTION_TYPE_ID
    AND      MMT.TRANSACTION_DATE BETWEEN :P_START_DATE AND :P_END_DATE
    Whenever MTT.TRANSACTION_TYPE_NAME = 'PO Receipt', the query should display the PO number;
    similarly, when MTT.TRANSACTION_TYPE_NAME = 'WIP Component Issue', it should display the WIP_ENTITY_NAME.
    For :P_START_DATE and :P_END_DATE I have used 01-jan-2011 and 28-feb-2011.
    Once I add these conditions to the query, it returns fewer rows (2014 rows, whereas the first query used to return 2357 rows).
    This is the modified query:
    SELECT
    MMT.INVENTORY_ITEM_ID DT_INVENTORY_ITEM_ID,
    case when MTT.TRANSACTION_TYPE_NAME='WIP component issue' then wip.WIP_ENTITY_NAME
    when MTT.TRANSACTION_TYPE_NAME='WIP Component Return' then wip.WIP_ENTITY_NAME
    when MTT.TRANSACTION_TYPE_NAME='NP-Material Request' then wip.WIP_ENTITY_NAME
    end job_number
    , MMT.ORGANIZATION_ID DT_ORGANIZATION_ID
    , MMT.TRANSACTION_DATE DT_TRANSACTION_DATE
    , MMT.TRANSACTION_SOURCE_ID DT_VOUCHER_NO
    , NVL(MMT.TRANSACTION_COST,0) DT_TRANSACTION_COST
    , MTT.TRANSACTION_TYPE_NAME DT_TRANSACTION_TYPE_NAME
    , NVL(MMT.NEW_COST,0) DT_NEW_COST
    , DECODE((MMT.PRIMARY_QUANTITY/ABS(MMT.PRIMARY_QUANTITY)),1,MMT.PRIMARY_QUANTITY,0)
    DT_QTY_IN
    , DECODE((MMT.PRIMARY_QUANTITY/ABS(MMT.PRIMARY_QUANTITY)),-1,MMT.PRIMARY_QUANTITY,0) * -1
    DT_QTY_OUT
    FROM
    MTL_MATERIAL_TRANSACTIONS MMT
    , MTL_TRANSACTION_TYPES MTT
    ,WIP_ENTITIES WIP
    WHERE
    MMT.PRIMARY_QUANTITY <> 0
    AND MMT.TRANSACTION_TYPE_ID NOT IN (101,120,80)
    AND MMT.TRANSACTION_SOURCE_ID = WIP.WIP_ENTITY_ID(+)
    /* ABOVE CONDITION TO EXCLUDE OPENING BALANCE TRANSACTIONS
       AND AVERAGE COST UPDATE */
    AND MMT.SUBINVENTORY_CODE <> 'OM'
    AND MMT.TRANSACTION_TYPE_ID = MTT.TRANSACTION_TYPE_ID
    AND MMT.TRANSACTION_DATE BETWEEN :P_START_DATE AND :P_END_DATE
    union
    SELECT
    MMT.INVENTORY_ITEM_ID DT_INVENTORY_ITEM_ID,
    case
    when MTT.TRANSACTION_TYPE_NAME='PO Receipt' then PHA.SEGMENT1 end job_number
    , MMT.ORGANIZATION_ID DT_ORGANIZATION_ID
    , MMT.TRANSACTION_DATE DT_TRANSACTION_DATE
    , MMT.TRANSACTION_SOURCE_ID DT_VOUCHER_NO
    , NVL(MMT.TRANSACTION_COST,0) DT_TRANSACTION_COST
    , MTT.TRANSACTION_TYPE_NAME DT_TRANSACTION_TYPE_NAME
    , NVL(MMT.NEW_COST,0) DT_NEW_COST
    , DECODE((MMT.PRIMARY_QUANTITY/ABS(MMT.PRIMARY_QUANTITY)),1,MMT.PRIMARY_QUANTITY,0)
    DT_QTY_IN
    , DECODE((MMT.PRIMARY_QUANTITY/ABS(MMT.PRIMARY_QUANTITY)),-1,MMT.PRIMARY_QUANTITY,0) * -1
    DT_QTY_OUT
    FROM
    MTL_MATERIAL_TRANSACTIONS MMT
    , MTL_TRANSACTION_TYPES MTT
    ,PO_HEADERS_ALL PHA
    ,PO_LINES_ALL POL
    WHERE
    MMT.PRIMARY_QUANTITY <> 0
    AND MMT.TRANSACTION_TYPE_ID NOT IN (101,120,80)
    AND MMT.TRANSACTION_SOURCE_ID=POL.PO_LINE_ID
    AND PHA.PO_HEADER_ID=POL.PO_LINE_ID
    /* ABOVE CONDITION TO EXCLUDE OPENING BALANCE TRANSACTIONS
       AND AVERAGE COST UPDATE */
    AND MMT.SUBINVENTORY_CODE <> 'OM'
    AND MMT.TRANSACTION_TYPE_ID = MTT.TRANSACTION_TYPE_ID
    AND MMT.TRANSACTION_DATE BETWEEN :P_START_DATE AND :P_END_DATE
    Kindly guide me,
    thanking you in advance

    Hi,
    There may be nothing wrong with your query: transaction types such as WIP Assembly Completion, WIP Negative Component Issue and WIP Negative Component Return could also be present in the material transactions table for this period, and rows with those types may be the ones dropping out of the modified query. Please check them.
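    A quick way to act on that suggestion (a rough sketch reusing the tables, filters and bind variables from the original query) is to count rows per transaction type within the period, and compare the counts with what the modified query returns for each type:
    SELECT MTT.TRANSACTION_TYPE_NAME, COUNT(*) ROW_COUNT
    FROM   MTL_MATERIAL_TRANSACTIONS MMT,
           MTL_TRANSACTION_TYPES     MTT
    WHERE  MMT.PRIMARY_QUANTITY <> 0
    AND    MMT.TRANSACTION_TYPE_ID NOT IN (101,120,80)
    AND    MMT.SUBINVENTORY_CODE <> 'OM'
    AND    MMT.TRANSACTION_TYPE_ID = MTT.TRANSACTION_TYPE_ID
    AND    MMT.TRANSACTION_DATE BETWEEN :P_START_DATE AND :P_END_DATE
    GROUP BY MTT.TRANSACTION_TYPE_NAME
    ORDER BY MTT.TRANSACTION_TYPE_NAME
    Any type whose rows disappear after the extra joins are added points to where the missing 343 rows are going.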

  • Documents in Query-based Taxonomy are not Autoindexing

    Hello experts!
    I am on EP6.14 with Trex 6.  I have a repository with a query-based taxonomy on it and is also part of a regular search index.  I have autoclassification on. 
    On each document I have a couple of properties that are used to match documents in queries in my taxonomy.  On the initial set-up and load, everything worked great.  Documents went right where they were supposed to.  But sometime after we released this to the business, the documents no longer automatically index.  If I change a document property or set up a new one, it finds the category but leaves them as "suggested."  You then have to go in and accept those suggestions to get the document to the right taxonomy.  In short, it acts as if autoclassification is off.  I have noticed that the Classification index does not submit a new document if I change a property but the search index does.  Not sure if that is applicable but I would think both should do that since it is a change.
    I did some research and found that synchronization issues or new folders could cause this problem but I have synchronized with no success.  Also, state management is not on for the repository.
    Is there any reason why autoclassification would stop on its own?  What can I do to get it back on?
    Thanks-
    J
    Many points given for helpful answers!

    Hi Frederick-
    Thanks for your response-
    We have a single CM DB repository.  There is a search-only index at the root folder and a classification index on a folder below the root.  Normally we don't overlap, but we wanted only the folders in the lower folder to be part of the taxonomy.  I suppose we could just rule out the other documents by not having them match any queries, but this way it was guaranteed.
    The search and classification indexes are working (they aren't throwing errors), but like I said, I would think that both would have to push the document through the queue, and I'm only seeing the search index do that.  If I go and accept the recommended categories, then the document goes into the queue for the Classification index.
    J

  • How to optimize query that returns data based on one matching and one missing field joining two tables

    Hi all,
    Here is what I am trying to do. I have two tables, A and B. Both have a fiscal year and a school ID column. I want to return all data from table B where the school IDs match but the fiscal year from A is not in B. I wrote the two queries below, but each took 2 minutes to process through the 30,000 records in table B. I need to optimize this query.
    1) select 1 from table A inner join table B
    on A.SchoolID=B.SchoolID where A.Year not in (select distinct Year from table B)
    2) select distinct Year from Table A where SchoolID in (select distinct SchoolID from table B)
    and Year not in (select distinct Year from table B)

    Faraz81,
    query execution time will depend not only on your data volume and structure but also on your system resources.
    You should post your execution plans and the DDL to generate the data structures so we can take a better look, but one thing you could try right away is to store the results of the subquery in a table variable and use it instead.
    You'll also benefit from the creation of:
    1. An index for the B.SchoolID column.
    2. Statistics for the Year column in table B.
    You can also try to change the physical algorithm used to join A to B by using query hints (HASH, MERGE, LOOP) and see how they perform. For example:
    select 1 from table A inner HASH join table B
    on A.SchoolID=B.SchoolID where A.Year not in (select distinct Year from table B)
    As the query optimizer generally chooses the best plan, this might not be a good idea, but then again, without further information it's going to be hard to help you.
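    A minimal sketch of the table-variable idea above, assuming SQL Server and keeping the table and column names from the question (A, B, SchoolID, Year); the index name is illustrative:
    -- Materialize the distinct years of B once instead of re-evaluating the subquery.
    DECLARE @YearsInB TABLE ([Year] INT PRIMARY KEY);
    INSERT INTO @YearsInB ([Year])
    SELECT DISTINCT [Year] FROM B;
    SELECT 1
    FROM   A
           INNER JOIN B ON A.SchoolID = B.SchoolID
    WHERE  A.[Year] NOT IN (SELECT [Year] FROM @YearsInB);
    -- Supporting index for the join column, as suggested above.
    CREATE INDEX IX_B_SchoolID ON B (SchoolID);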

  • How to populate the main table and the lookup's at the same time

    Hi ,
       What I have with me are the XML files that contain the data from the material master, and an Excel sheet that describes the mapping. It basically tells me which field of the main table maps to which field of which segment in the IDoc, and also the name of the table and the field in the R/3 system.
    I wish to use this info to populate the data in the material repository.
    - How can I populate the data in the lookup tables at the same time as I am populating the main table? I only have the XMLs that correspond to the main table; I don't have separate data for the lookup tables.
    - Can I use the standard maps available for import in the business content of the material repository in MDM?
    - If the answer to the second question is NO, then I think I can create the maps and save them for future use.
    Regards
    Deepak Singh

    Hi, Deepak
    >>> - How can I populate the data in the lookup table at the same time when I am populating the main table ? I have only the XML's that correspond to the main table . I don't have seperate data for the lookup tables .
    I don't think you can populate both the main table and all fields of the lookup tables at the same time, i.e. using the same map. You can consider two options to upload all the information you have:
    1) If your XML file contains data you would like to upload to lookup tables, you can upload it to the MDM lookup tables with several maps, using the same XML and choosing the different sections of that XML that correspond to the different MDM lookup tables.
    2) Also, you can upload the main table simultaneously with lookup table entries (using the same map), but in this case the new lookup table entries will only contain the display field values that you mapped. To do this you should use the 'Add' value mapping functionality for the fields that you mapped to lookup tables.
    >>>- Can I use the standard maps available for import in the business content of material repository in MDM ?
    1) If you have the material master repository delivered by SAP and you use XML files whose structure corresponds to the SAP-predelivered XSD schemas, then you can certainly use these maps.
    2) If your repository is based on the SAP-predelivered one but you have changed it, you should adjust these maps to account for the differences in repository structure and XML file structure.
    3) If you created your repository from scratch, you should consider making your own import maps.
    Regards,
    Vadim Kalabin

  • Linking Access tables, creating a query with using both Access and Oracle

    Hello,
    I am using Oracle SQL Developer version 3.0.04.34. I need to create a script/procedure that uses both Access tables and Oracle tables together. There is an option in SQL Developer to copy the Access tables into Oracle, but that doesn't help me, because when the Access tables are updated
    the copies are not. How can I create linked Access tables in Oracle and write a query that uses both Access and Oracle tables together?
    I will appreciate if you guys help me. I look forward to hearing from you guys.
    Thanks,
    Pinar

    Pinar,
    to be able to query MS Access tables in Oracle you need an additional product, the Oracle Database Gateway for ODBC. It allows you to link any foreign database into an Oracle database using a suitable ODBC driver. You can then access the MS Access tables through a database link based on the Database Gateway for ODBC. This will also allow you to join local Oracle and remote MS Access tables from your Oracle database.
    There's a note on My Oracle Support which gives you more details: Document 233876.1, Options for Connecting to Foreign Data Stores and Non-Oracle Databases (for example DB2, SQL*Server, Sybase, Informix, Teradata, MySQL).
    And there's also a dedicated Forum: Heterogeneous Connectivity
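    A rough sketch of what this looks like once the Gateway for ODBC is installed and an ODBC DSN for the Access database has been configured (the link name, DSN, credentials and table names below are placeholders):
    -- Database link through the gateway; 'access_dsn' must match the gateway's
    -- ODBC/HS configuration and its tnsnames.ora entry.
    CREATE DATABASE LINK access_link
      CONNECT TO "admin" IDENTIFIED BY "password"
      USING 'access_dsn';
    -- Join a local Oracle table with a table stored in MS Access.
    SELECT o.order_id, c."CustomerName"
    FROM   orders o,
           "Customers"@access_link c
    WHERE  o.customer_id = c."CustomerID";
    Because the Access tables are read through the link at query time, updates made on the Access side are visible immediately, unlike a one-off copy.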

  • Link between AP and GL main tables in 11i

    Hello,
    Can anyone please provide the link between AP and GL main tables.
    I need a join between the AP_Invoices_All, AP_INVOICE_DISTRIBUTIONS_ALL, GL_JE_LINES, GL_JE_HEADERS.
    Actually, I found the query below. I want to confirm whether it is correct, and whether the outer joins are necessary and placed correctly.
    Thank you
    select *
    from gl_je_headers gjh,
    gl_je_lines gjl,
    ap_invoices_all aia,
    po_vendors pv,
    ap_invoice_distributions_all aida
    where gjh.je_header_id = gjl.je_header_id
    and aida.invoice_id(+) = gjl.reference_2
    and aida.distribution_line_number(+) = gjl.reference_3
    and aida.invoice_id = aia.invoice_id(+)
    and gjh.je_source = 'Payables'
    --and gjh.je_category = 'Purchase Invoices'

    Hello,
    Any help?
    I have also found out that, in 11i, the REFERENCE_2 column in the GL_JE_LINES table holds the INVOICE_ID.
    So does that mean we can join the GL tables and the AP invoice tables using this condition? I don't know if that is right. Please help.
    select * from
    ap_invoices_all aia, gl_je_lines gjl
    where aia.invoice_id = gjl.reference_2
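    One detail to watch: REFERENCE_2 and REFERENCE_3 on GL_JE_LINES are character columns while INVOICE_ID is numeric, so an explicit conversion keeps the join clean. A sketch of the same idea with the conversion and the Payables source filter applied (column aliases are illustrative):
    SELECT aia.invoice_num, gjh.name journal_name, gjl.je_line_num
    FROM   ap_invoices_all aia,
           gl_je_lines     gjl,
           gl_je_headers   gjh
    WHERE  gjl.je_header_id = gjh.je_header_id
    AND    gjh.je_source    = 'Payables'
    AND    gjl.reference_2  = TO_CHAR(aia.invoice_id)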

  • View link between view based on entity(table) and view based on stored proc

    I've created a view based on a stored procedure. I need to link this view to a view based on an entity which is based on a table.
    I can create the view without issue, but when I attempt to run the application module that contains the relationship I get this error:
    (oracle.jbo.InvalidOperException) JBO-26016: Cannot set user query to view "SalesentityModuleApiView2" because it is a destination in a view link
    One thing that may be notable about this is that this view started out based on a database view. I later overrode the select related methods using the example here:
    http://download-east.oracle.com/docs/html/B25947_01/bcadvvo008.htm
    Any ideas? I will gladly post some code if someone will let me know what might help diagnose this.

    Hi,
    I solved my problem by adding a transient field and changing its value (true | false) in the setter of the field that can be changed, so I can tell which row was updated. What exactly do you mean when you say it is not updateable? I'm using a VO with no entity, and I can add and remove rows; in fact I think it's a good solution, because I don't want to write to the database immediately.

  • Can anybody tell me about SD Functionality and the main tables used in SD

    Hi,
    Can anybody tell me about SD Functionality and the main tables used in SD.
    Thanks,
    bsv.

    Hi
    SD FLOW
    SD Flow Cycle:
    INQUIRY ( VA11)
    |
    QUOTATION (VA21)
    |
    PURCHASE ORDER (ME21)
    |
    ORDER CONFIRMATION (VA01)
    |
    PICKING LIST – (VL36)
    |
    PACKING LIST - (VL02, VL01)
    |
    SHIPPING – (VT01)
    |
    INVOICE – (VF21, VF01)
    |
    AR
    Enquiry - the customer enquires about the products and services sold by the company - VA11
    Quotation - the company gives a quotation for the products and services to the customer
    Sales Order - the customer gives a purchase order to the company, against which a sales order is raised for the customer in SAP.
    VBAK: Sales Document(Header Data) (VBELN)
    VBAP: Sales Document(Item Data) (VBELN,POSNR,MATNR,ARKTX,CHARG)
          Enquiry, Quotation and Sales Order are differentiated based on the document
          type (VBTYP field) in the VBAK/VBAP tables (for Enquiry VBTYP = 'A',
          for Quotation 'B' and for Order 'C').
    Delivery(Picking, Packing, Post Goods Issue and Shipment)->
    The company sends the material after picking it from the godown (warehouse), packing it in a handling unit (box) and issuing the goods.
    LIKP: Delivery Table (Header Data)(VBELN,LFART,KUNNR,WADAT,INCO1)
    LIPS: Delivery Table (Item Data)(VBELN,POSNR,WERKS,LGORT,MATNR,VGBEL)
          (LIPS-VGBEL = VBAK-VBELN, LIPS-VGPOS = VBAP-POSNR)
    Billing - the company then bills the customer for those deliveries,
    and in FI an accounting document is created against this billing.
    VBRK: Billing Table(Header Data)(VBELN,FKART,BELNR)
    VBRP: Billing Table(Item Data)(VBELN,POSNR,FKIMG,NETWR,VGBEL,VGPOS)
          (VBRP-AUBEL = VBAK-VBELN, VBRP-VGBEL = LIKP-VBELN)
          Apart from these tables there are a lot of other tables starting with
          'V', but we use the following tables frequently.
    other tables and imp fields
    VBUK: All Sales Documents status & Admn. Data(Header)(VBELN,VBTYP)
          VBTYP = 'C' (Sales Order), VBTYP = 'J' (Delivery), VBTYP = 'M' (Invoice)
    VBUP: Sales Documents status & Admn. Data(Item)(VBELN,POSNR)
    VBEP: Sales Doc. Schedule Lines Data(VBELN,POSNR,EDATU,WMENG)
    VBKD: To get sales related Business data like Payment terms etc.(VBELN,ZTERM)
    VBFA: sales document flow data(VBELV,VBELN,POSNV,VBTYP)
    VBPA: Partner functions Data(VBELN,PARVW,KUNNR,LIFNR)
    VEDA: Contract Data(VBELN,VPOSN)
    VEDAPO: Contract Data(VBELN,VPOSN)
    KONA:  Rebate Agreements (KNUMA,VKORG,VTWEG,SPART)
    VBRL:  SD Document: Invoice List(VBELN,POSNR,VBELN_VF,NETWR,KUNAG)
    VKDFS: SD Index: Billing Indicator(FKTYP,VBELN,FKART,VKORG)
    VBSK:  Collective Processing for a Sales Document Header(SAMMG,SMART)
    VBSS:  Collective Processing: Sales Documents(SAMMG,VBELN,SORTF)
    VRKPA: Sales Index: Bills by Partner Functions(VBELN,BELNR,KUNDE,PARVW)
    VRPMA: SD Index: Billing Items per Material(MATNR,VBELN,BELNR,KUNNR)
    TVLKT: Delivery Type: Texts(LFART,VTEXT)
    KNA1: Customer Master-General(KUNNR,NAME1,LAND1)
    KNB1: Customer Master(Company Code)(KUNNR,BUKRS,PERNR)
    KNC1: Customer Master Data (Transaction Figures)(KUNNR,BUKRS,GJAHR)
    KNVK: Customer Master Contact Partner(PARNR,KUNNR,NAME1)
    KNVV: Customer Master sales data(KUNNR,VKORG,VTWEG,KDGRP)
    KNBK: Customer Bank Details(KUNNR,BANKS,BANKL,BANKN)
    KNVH: Customer Hierarchy (HITYP,KUNNR,VKORG,VTWEG,SPART)
    KNVP: Customer Master Partner Functions(KUNNR,PARVW,KUNN2)
    KNVS: Customer Shipment data(KUNNR,VSTEL,TRANS)
    KNVI: Customer Tax data(KUNNR,ALAND,TATYP)
    LFA1: Vendor Master-General (LIFNR,NAME1,ORT01)
    LFB1: Vendor Master(Company Code)(LIFNR,BUKRS,PERNR)
    LFC1: Vendor Master (Transaction Figures)(LIFNR,BUKRS,GJAHR)
    MARA: Material Master-General (MATNR,MTART,MATKL)
    MARC: Material Master-Plant data(MATNR,WERKS,EKGRP)
    MARD: Material Master- St.Location Data(MATNR,WERKS,LGORT,LABST)
    EBEW:  Sales Order Stock Valuation(MATNR,VBELN,BWKEY,BWTAR)
    TVKO:  Sales Organizations(VKORG)
    TVTW:  Distribution Channel(VTWEG)
    TSPA:  Divisions(SPART)
    TVKOV: Distribution Channels for S.Orgn(VKORG,VTWEG)
    TVKOS: Divisions for S.Orgn(VKORG,SPART)
    TVTA:  Sales Areas(VKORG,VTWEG,SPART)
    TVBUR: Sales Offices(VKBUR,ADRNR)
    TVKBT: Sales Office Texts(VKBUR,SPRAS,BEZEI)
    TVKBZ: Sales Office Assign.to Sales Area(VKORG,VTWEG,VKBUR)
    TVKGR: Sales Group(VKGRP)
    TVGRT: Sales Group Texts(VKGRP,SPRAS,BEZEI)
    TVBVK: Sales Group to Sales office(VKBUR,VKGRP)
    TVKWZ: Plants Assign.to S.Orgn(WERKS,VKORG)
    T171T: Sales District Texts(BZIRK,BZTXT,SPRAS)
    TVLA:  Loading Points(LSTEL)
    TVST:  Shipping Points (VSTEL)
    TVSWZ: Shipping Point to Plant(VSTEL,WERKS)
    TVPT:  Item Categories (PSTYV)
    TINC:  Customer Incoterms(INCO1)
    T077D: Customer Account Group (KTOKD)
    T001W: Plants (WERKS)
    T001L: Storage Locations (LGORT)
    T499S: Locations(WERKS,STAND,KTEXT)
    TWLAD: To get address of Storage Location and Plant(LGORT,ADRNR)
    TVAK:  Sales Document (Order) Types (AUART)
    TVAU:  Sales Documents: Order Reasons (AUGRU)
    TVFK:  Billing Document Types (FKART)
    TVLK:  Delivery Types(LFART)
    TVSB:  Shipping Conditions (VSBED)
    TTDS:  Transportation Points(TPLST)
    TVKT:  Account Assignment Groups (KTGRD)
    KONV:  Condition Types (Pricing)(KNUMV,KSCHL,KWETR)
    ADRC:  To get Addresses of Partners(ADDRNUMBER,NAME1)
    VBBE:  Sales Requirements: Individual records(VBELN,POSNR,MATNR)
    VBBS:  Sales Requirement totals Record(MATNR,WERKS,LGORT,CHARG)
    VBKA:  Sales Activities Data(VBELN,KTAAR)
    VBPV:  Sales Document Product Proposal(VTWEG,MATNR,KUNNR,CHARG)
    T682:  Access Sequences (KOZGF)
    T682T: Access Sequence Texts (KOZGF,VTXTM)
    T683:  Pricing Procedures (KALSM)
    T683T: Pricing Procedures Texts(KALSM,KAPPL,SPRAS,VTEXT)
    T685:  Pricing Condition Types (KSCHL)
    T685T: Condition Type Texts(KSCHL,SPRAS,KAPPL,VTEXT)
    KONH:  Conditions (Header)(KNUMH,KAPPL,KSCHL)
    KONP:  Conditions (Item)(KNUMH,KOPOS,KAPPL,KSCHL)
    KONV:  Conditions (Transaction Data)(KNUMV,KSCHL,KBERT,KWERT)
    KOND:  Conditions (KNUMD,ZUSKO,KSCHL)
    for sd go through the links
    http://www.sapgenie.com/abap/tables_sd.htm
    Please check this SD online documents.
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/CAARCSD/CAARCSD.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/MYSAP/SR_SD.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCBMTWFMSD/BCBMTWFMSD.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/LOLISPLN/LOLISPLN.pdf
    Also please check this SD links as well.
    http://help.sap.com/saphelp_47x200/helpdata/en/92/df293581dc1f79e10000009b38f889/frameset.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/dd/55f33e545a11d1a7020000e829fd11/frameset.htm
    http://www.sap-basis-abap.com/sapsd.htm
    http://www.sap-img.com/sap-sd.htm
    http://www.sapgenie.com/abap/tables_sd.htm
    http://searchsap.techtarget.com/featuredTopic/0,290042,sid21_gci961718,00.html
    http://www.sapbrain.com/TUTORIALS/FUNCTIONAL/SD_tutorial.html
    All help ebooks are in PDF format here
    http://www.easymarketplace.de/online-pdfs.php
    Reward points if useful
    Regards
    Anji

  • Table Link between Main Invoice and TDS standard invoice generated

    Hi all,
    I want to know the table that links the main invoice and the corresponding TDS (withholding tax) standard invoices that are generated when payment is made to the authority.
    I want to get the list of all main invoices whose TDS-generated invoice payment has been made. The main invoices I fetch from ap_invoices_v.
    Please suggest
    Regards,
    SK

    Hello,
    The following query (11.5.10.2) will retrieve the invoices that originated the withholding tax invoices in a certain period. Please change the parameters accordingly.
    SELECT
    supplier.vendor_name Supplier_Name ,
    supplier.segment1 Supplier_Number ,
    supplier.num_1099 Supplier_NIF ,
    supplier_site.vendor_site_code Site_Code ,
    i.invoice_date Invoice_Date ,
    i.invoice_currency_code Invoice_Currency_Code ,
    i.exchange_rate Invoice_Exchange_Rate ,
    c.name Invoice_Currency_Name ,
    i.invoice_num Invoice_Number ,
    i.invoice_amount Invoice_Amount ,
    i.base_amount Invoice_Base_Amount ,
    i.description Invoice_Description ,
    d.accrual_posted_flag Accrual_Posted_Flag ,
    d.accounting_date Accounting_Date ,
    nvl(d.awt_gross_amount, i.invoice_amount) Amount_Subject_To_Tax ,
    d.amount Tax_Amount ,
    d.base_amount Tax_Base_Amount ,
    d.description Line_Description ,
    d.awt_flag AWT_Flag ,
    n.name Dist_Tax_Name ,
    t.tax_rate Dist_Tax_Rate ,
    t.rate_type Dist_Rate_Type ,
    t.certificate_number Dist_Certificate_Number ,
    t.certificate_type Dist_Certificate_Type ,
    t.start_date Dist_Cert_Start_Date ,
    t.end_date Dist_Cert_End_Date ,
    t.priority Dist_Cert_Priority
    FROM
    ap_invoice_distributions_all d ,
    ap_invoices_all i ,
    fnd_currencies_vl c ,
    po_vendors supplier ,
    po_vendor_sites_all supplier_site ,
    ap_awt_tax_rates_all t ,
    ap_tax_codes_all n
    WHERE d.invoice_id = i.invoice_id
    AND i.vendor_id = supplier_site.vendor_id
    AND i.vendor_site_id = supplier_site.vendor_site_id
    AND i.vendor_id = supplier.vendor_id
    AND i.invoice_currency_code = c.currency_code
    AND d.awt_tax_rate_id = t.tax_rate_id (+)
    AND d.tax_code_id = n.tax_id
    AND d.line_type_lookup_code = 'AWT'
    AND d.set_of_books_id = <your set_of_books_id>
    AND d.accounting_date >= '1-aug-09'
    AND d.accounting_date <= '1-aug-09'
    Hope this helps,
    Octavio
