Materialized view to ensure data integrity over multiple tables
Hello,
I have a problem that I am unable to solve, partly because of my poor SQL skills. I have three tables and I am using one sequence to enter data into all of them.
What I am trying to do is to create a materialized view (complete or fast, whichever) with the REFRESH ON COMMIT option to check that each table contains data that is unique compared to the others.
I am posting the code so you can follow along:
CREATE TABLE table_1 (ID NUMBER PRIMARY KEY);
CREATE TABLE table_2 (ID NUMBER PRIMARY KEY);
CREATE TABLE table_3 (ID NUMBER PRIMARY KEY);
INSERT INTO table_1 VALUES (1);
INSERT INTO table_1 VALUES (2);
INSERT INTO table_2 VALUES (3);
INSERT INTO table_2 VALUES (4);
INSERT INTO table_3 VALUES (5);
INSERT INTO table_3 VALUES (6);
I want to create a materialized view that returns output only when the same value appears in two different tables. I got this far.
CREATE MATERIALIZED view mv_test
REFRESH COMPLETE ON COMMIT
AS
SELECT count(1) ROW_COUNT
FROM dual
WHERE EXISTS (
SELECT a.id
FROM table_1 a
WHERE a.id IN(
SELECT b.id
FROM table_2 b))
OR EXISTS (
SELECT a.id
FROM table_1 a
WHERE a.id IN
(SELECT c.id
FROM table_3 c))
OR EXISTS (
SELECT b.id
FROM table_2 b
WHERE b.id IN
(SELECT c.id
FROM table_3 c));
ALTER MATERIALIZED VIEW mv_test
ADD CONSTRAINT cs_mv_test
CHECK (row_count = 0) DEFERRABLE;
If my logic is correct, the query returns 0 when there are no duplicates. If the same value appeared in two different tables, it would return 1 and the constraint would throw an error.
However, I cannot create this with the ON COMMIT option. When I try to compile it, I get:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
I went through the documentation and tried creating materialized view logs, etc.
I know that one of the mistakes is that I am referencing the DUAL table, and I am not sure whether I can use EXISTS.
Unfortunately, my SQL wisdom ends here. I need help rewriting the SQL so that it works as a materialized view with the REFRESH ON COMMIT option. Please help!
I know that since I am using a sequence there is little chance that the same value will get into two different tables, but I would like to perform some kind of check.
Thank you in advance.
> I know that since I am using a sequence there is little chance that the same value will get into two different tables, but I would like to perform some kind of check.

If you are certain that you control all the inputs to the tables and you are definitely using one sequence to insert into all three of them, then there is physically no possible way you will get duplicate values across tables.
Writing something to check if this is the case would almost be like writing something to verify that 1+1 really does equal 2 in 100% of cases.
If you must, however, consider something similar to the following, which may perform better:
select coalesce(t1.id, t2.id, t3.id) as dup_id
from table_1 t1
full outer join table_2 t2 on (t1.id = t2.id)
full outer join table_3 t3 on (t3.id = coalesce(t1.id, t2.id))
where (t1.id is not null and t2.id is not null)
   or (t1.id is not null and t3.id is not null)
   or (t2.id is not null and t3.id is not null);
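Another way to express the same check (a sketch against the sample tables above, meant to be run on demand rather than as an ON COMMIT materialized view) is to stack the three ID columns and look for values that occur more than once. Since each table's primary key already prevents duplicates within a table, any count above 1 must come from two different tables:

```sql
-- Returns one row per ID value that appears in more than one table;
-- no rows returned means every ID is unique across all three tables.
SELECT id, COUNT(*) AS occurrences
FROM (
  SELECT id FROM table_1
  UNION ALL
  SELECT id FROM table_2
  UNION ALL
  SELECT id FROM table_3
)
GROUP BY id
HAVING COUNT(*) > 1;
```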
Similar Messages
-
AWM does not provide Materialized Views as potential data sources.
Hi
I would like to map OLAP dimensions and cubes directly to materialized views. The Oracle AWM tool, however, only displays Tables, Views and Synonyms as mapping objects and I have to indirectly map to normal views created over the materialized views.
Why does Oracle enforce this limitation?
Thanks
Kind Regards
Greg

Greg,
Not sure why AWM enforces this limitation; certainly Warehouse Builder does not enforce this rule (at least not to my knowledge). You could raise a bug/enhancement request with Support. However, my recommendation is never to map directly to a source object, such as a fact table or materialized view, in AWM. This can create all sorts of problems as you move your AWM model across environments. I always suggest that my customers and consultants create a view over their source objects and use the view within the mapping editor. It will give you more control over the flow of data into your cube/dimension, since it allows you to define filters on the view.
Hope this helps
Keith Laker
Oracle EMEA Consulting
BI Blog: http://oraclebi.blogspot.com/
DM Blog: http://oracledmt.blogspot.com/
BI on Oracle: http://www.oracle.com/bi/
BI on OTN: http://www.oracle.com/technology/products/bi/
BI Samples: http://www.oracle.com/technology/products/bi/samples/ -
How Data Integrator supports multiple character sets in a single ETL transaction
When using Data Integrator (DI) to process a mix of multi-byte and single-byte data, it is recommended that you use UTF-8 for the job server codepage. You can, however, use different codepages for the individual datastores.

Imagine this situation: Great Big Company Inc. wants to create a global customer database. To do this, Great Big Company Inc. must read from a database of US customers and a database of Korean customers. Great Big Company Inc. then wants to load both sets of customers into a single database.

Can DI manage these requirements? Of course. The codepage is the thing.
I've never seen this used the way you are using it. In my experience the only way to do this would be to execute a single SQL statement that returns multiple result sets - you are trying to append two SQL statements.
You could define an in-line procedure wrapping your two select statements, or you could define a stored procedure to do the same thing. Then (either way) use CallableStatement to execute the call to the procedure. -
Oracle Materialized view with xmltype data type
This is the table in DB1. I need to create a materialized view of it in DB2. I have followed the steps below:
create table WORKSHEETMASTER (
METHODID NUMBER(10),
WORKSHEETCODE VARCHAR2(50 BYTE) not null,
WORKSHEET SYS.XMLTYPE);
create materialized view log on db1.WORKSHEETMASTER;
In DB2:
CREATE MATERIALIZED VIEW WORKSHEETMASTER
REFRESH FAST ON DEMAND
AS
SELECT METHODID,
WORKSHEETCODE,
worksheet FROM db1.WORKSHEETMASTER@DBLINK;
When I run the above script to create the materialized view in DB2, I get this error:
ORA-22992: cannot use LOB locators selected from remote tables
When I remove the WORKSHEET column, the materialized view is created successfully. How can I work around this problem?
My database version is 11g. I searched and tried some scenarios, but none of them worked.
I need help.
Thanks.

This is the table in DB1:
create table WORKSHEETMASTER (
METHODID NUMBER(10),
WORKSHEETCODE VARCHAR2(50 BYTE) not null,
WORKSHEET SYS.XMLTYPE,
WORKSHEETID NUMBER primary key,
CREATEDDATE DATE,
CREATEDBY VARCHAR2(50 BYTE),
WORKSHEETNAME VARCHAR2(50 BYTE),
UPDATEDDATE DATE,
UPDATEDBY VARCHAR2(50 BYTE),
NOOFROWS NUMBER(3),
NOOFCOLUMNS NUMBER(3),
WORKSHEETTYPE CHAR(1 BYTE),
SUBSTRATEUSED VARCHAR2(50 BYTE),
STATUS NUMBER(1),
APPROVEDBY VARCHAR2(50 BYTE),
APPROVED CHAR(1 BYTE) default 'N',
APPROVALREMARKS VARCHAR2(100 BYTE),
LNG_WORKSHEETNAME VARCHAR2(50)
);
I am trying to create the materialized view in DB2:
create materialized view WORKSHEETMASTER
refresh fast on demand
as
SELECT METHODID,
WORKSHEETCODE,
WORKSHEETID,
worksheet,
CREATEDDATE,
CREATEDBY,
WORKSHEETNAME,
UPDATEDDATE,
UPDATEDBY,
NOOFROWS,
NOOFCOLUMNS ,
WORKSHEETTYPE,
SUBSTRATEUSED,
STATUS,
APPROVEDBY,
APPROVED,
APPROVALREMARKS,
LNG_WORKSHEETNAME FROM db1.WORKSHEETMASTER@DBLINK; --remote database
When I run the above script in DB2 I get the error. That is my complete script. -
Materialized view with xmltype data type
Hello to all,
I have a challenge with my 10g r2 database. I need to make a materialized view from a table in this format:
Name Null? Type
RECID NOT NULL VARCHAR2(200)
XMLRECORD XMLTYPE
My problem is that (as I read in the docs) I cannot make the view refreshable on commit, and I also cannot refresh the MV in fast mode (a complete refresh of the MV takes too long, so that is not an option).
Do you have a hint for this?
Thank you in advance.
Daniel

Hi,
I cannot upgrade to 11g. Also, I cannot change the table structure.
Here is a sample of xmltype field content:
RECID XMLRECORD
D00009999 <row id='D100009999'><c2>10000</c2><c3>xxxxx</c3><c5>xxxx..
And I need to extract the data from c2, c3, and so on into the MV.
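If shredding the XML into relational columns is acceptable, one possible approach on 10gR2 (a sketch; the table name your_table and the column types/sizes are assumptions, since the real table was not named) is XMLTABLE:

```sql
-- Extract the c2 and c3 elements of each XMLRECORD into relational columns
SELECT t.recid,
       x.c2,
       x.c3
FROM   your_table t,
       XMLTABLE('/row'
                PASSING t.xmlrecord
                COLUMNS c2 NUMBER        PATH 'c2',
                        c3 VARCHAR2(100) PATH 'c3') x;
```

Whether such a query can be made fast-refreshable is a separate question, but as an on-demand or complete-refresh MV it gives you plain relational columns to work with.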
Still waiting for a hint.
Thank you. -
Refresh fails on materialized view with CLOB data type
Hi,
Hope somebody can help me with this issue.
Some materialized views get BROKEN status on refresh, but only sometimes. When I try to refresh them manually I get the following message: "ORA-01400: cannot insert NULL into...". But I know for sure that there are no NULL values in the master table; the MV and master tables are declared in the same way, and all columns in the master tables are NOT NULL columns. Another thing is that I get this error only on columns with the CLOB data type.
Please, help!
/Julia -
[help] materialized view lost + duplicated data on remote - Fast Refresh
I created a materialized view with FAST REFRESH:
MASTER:
create SNAPSHOT log on XXX with rowid;
REMOTE:
create snapshot XXX
REFRESH fast START WITH SYSDATE NEXT SYSDATE + 1/86400 with rowid
as select * from XXX@DB_LINK;
I had many insert/update/delete transactions on the MASTER.
Please give me any ideas to protect the data and replicate it quickly.
My problems:
1. Data is duplicated on the remote server.
2. Data is lost on the remote server.

I need to know whether anybody using materialized view logs has seen the same problem.
I have changed the script:
MASTER:
create SNAPSHOT log on XXX with primary key including new values;
REMOTE:
create snapshot XXX
REFRESH fast START WITH SYSDATE NEXT SYSDATE + 1/86400 with primary key
as select * from XXX@DB_LINK;
I am sure the new script resolves the duplicates, but I am not sure about the data loss.
If you have any ideas, please let me know. -
Need a Data Integration for multiple ERP systems
We are doing some research into a data integration layer to our BW 7.3/BOBJ 4.0 from multiple ERPs. Of course we are looking at Data Services and Information Steward in the BOBJ suite but just looking for anybody's recommendations on the topic.
What technology platform do you use for the extract, transform, and load processes from multiple backend sources into BW? Any you would advise us to avoid?

Hello Edward,
The answer depends on multiple factors.
Some pointers:
Consider the volume and growth of the databases in scope when planning (federation vs. replication).
If data federation is where you want to go, Data Services / BO tools will be ideal.
If your data comes from multiple ERP systems, you can use their delta queues to load data via Data Services (in the case of SAP).
Use the native DB Connect / UD Connect functionality in BW; as of BW 7.3 it is delta-capable.
Moving to BW 7.4, you will have SDA to solve the problem of integrating non-SAP data into your EDW landscape.
These are just pointers, but I would say talk to your enterprise architects and consider where you want to take your EDW platform.
Cheers!
Suyash -
Data Federator: Unioning Multiple Tables into single view possible?
Hi,
I have three different databases with tables containing a portion of the same kind of data and I want to union the three different tables together in Data Federator to present a single logical view that has the complete set of data. Is this possible in Data Federator? How would I go about doing that?
Note: I do not have keys to join the tables together on since it's not simply extending the data in one table with additional data in another table and doing an inner join on a unique key. Instead, for example, there's a customer table for Finance, a customer table for Operations and a customer table for Sales and they all contain the same columns (with maybe slightly different names) and same type of data. I want to effectively union Finance, Operations and Sales together in a federated/virtualized view so applications can just query from that view to get all the customers from the three different databases.
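In plain SQL terms (a sketch; the schema and column names here are made up for illustration, since the real ones were not given), the desired federated view is essentially a UNION ALL over the three customer tables rather than a join:

```sql
-- One logical customer view over three source systems; no join keys needed
CREATE VIEW all_customers AS
SELECT cust_id, cust_name, 'FINANCE'    AS source_system FROM finance.customer
UNION ALL
SELECT cust_id, cust_name, 'OPERATIONS' AS source_system FROM operations.customer
UNION ALL
SELECT cust_id, cust_name, 'SALES'      AS source_system FROM sales.customer;
```

The extra source_system column preserves where each row came from, which is often useful once the data sets are merged.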
I have been unable to do this so far since Data Federator requires each table in a mapping to have a relationship with the other tables in the mapping.
Thanks for the help.
Kerby

I figured out the original question using one mapping for each table, but have a new question now.
Is it possible for an application to use the combined view from Data Federator to write back into the databases?
e.g. have an application use the target table in Data Federator and view the results, and based on that update the data in the underlying database that provides the data for the target table?
Thanks for the help.
Kerby -
How to get Materialized View to ignore unused columns in source table
When updating a column in a source table, records are generated in the corresponding materialized view log table. This happens even if the column being updated is not used in any MV that references the source table. That could be OK, so long as those updates are ignored. However they are not ignored, so when the MV is fast refreshed, I find it can take over a minute, even though no changes are required or made. Is there some way of configuring the materialized view log such that the materialized view refresh ignores these updates ?
So for examle if I have table TEST:
CREATE table test (
d_id NUMBER(10) PRIMARY KEY,
d_name VARCHAR2(100),
d_desc VARCHAR2(256)
);
This has an MV log MLOG$_TEST:
CREATE MATERIALIZED VIEW LOG ON TEST with rowid, sequence, primary key;
CREATE MATERIALIZED VIEW test_mv
refresh fast on demand
as
select d_id, d_name
from test;
INSERT 200,000 records
exec dbms_mview.refresh('TEST_MV','f');
update test set d_desc = upper(d_desc) ;
exec dbms_mview.refresh('TEST_MV','f'); -- This takes 37 seconds, yet no changes are required.
Oracle 10g/11g

I would love to hear a positive answer to this question - I have the exact same issue :-)
In the "old" days (version 8, I think it was), populating the materialized view logs was done by Oracle auto-creating triggers on the base table. A "trick" could then make that trigger become "FOR UPDATE OF <used_column_list>". Nowadays it has been internalized, so such "triggers" are not visible and modifiable by us mere mortals.
I have not found a way to explicitly tell Oracle "only populate the MV log for updates of these columns." I think the underlying reason is that the MV log could potentially be used by several different materialized views at possibly several different target databases. So that the MV log can serve any MV created in the future, Oracle always populates it on any update (I think).
One way around the problem is to migrate to Streams replication rather than materialized views - but that seems to me like swatting a fly with a bowling ball...
One thing to be aware of: once the MV log has been "bloated" with a lot of unnecessary logging, you may see that all your fast refreshes afterwards become slow - even after the one that processed all 200,000 unnecessary updates. We have seen that Oracle can decide to full-table-scan the MV log when it does a fast refresh - which usually makes sense. But after a "bloat" has happened, the high-water mark of the MV log is unnaturally high, which can make the full table scan slow by scanning a lot of empty blocks.
We have a nightly job that checks each MV log to see if it is empty. If it is, the job locks the MV log and the base table, checks for emptiness again, and truncates the MV log if it is still empty, before finally unlocking the tables. That way, if an update during the day has bloated the MV log, all the empty space in it will be reclaimed at night.
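That nightly job might look roughly like the following (a sketch only, using the TEST table and its default log name MLOG$_TEST from the example above; the second emptiness check guards against a race with concurrent DML):

```sql
DECLARE
  v_cnt NUMBER;
BEGIN
  -- Cheap first check without taking any lock
  SELECT COUNT(*) INTO v_cnt FROM mlog$_test WHERE ROWNUM = 1;
  IF v_cnt = 0 THEN
    -- Lock the base table so no new log rows can appear
    LOCK TABLE test IN EXCLUSIVE MODE;
    -- Re-check emptiness under the lock
    SELECT COUNT(*) INTO v_cnt FROM mlog$_test WHERE ROWNUM = 1;
    IF v_cnt = 0 THEN
      -- TRUNCATE is DDL: it commits implicitly, which also releases the lock
      EXECUTE IMMEDIATE 'TRUNCATE TABLE mlog$_test';
    ELSE
      ROLLBACK;  -- release the lock without doing anything
    END IF;
  END IF;
END;
/
```

Truncating resets the log's high-water mark, so later fast refreshes no longer scan the empty blocks left behind by a bloat.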
But I hope someone can answer both you and me with a better solution ;-) -
Is selecting from a view more efficient than selecting from multiple tables
Hi, here's the problem.
Let's say I created a view from two tables (person and info); both have an ID column.
create view table_view (age, name, status, id) as
select a.age, a.name, b.status, b.id
from person a, info b
where a.id = b.id;

If I want to select a given range of values from these two tables, which of the following queries would be more efficient?

select a.age, a.name, b.status, b.id
from person a, info b
where a.id = b.id
and a.id < 1000;

select age, name, status, id
from table_view
where id < 1000;

Bear in mind that this concept of views storing the SQL text is specific to Oracle databases and not necessarily other RDBMS products. For example, Ingres databases create "views" as tables of data in the database, so there is a difference between selecting from the view and selecting from the base tables.
Oracle also has "materialized views" which differ from normal "views" because they are actually created, effectively, as tables of data and will not use the indexes of the base tables.
In Oracle, you cannot create indexes against "views" but you can create indexes against "materialized views". -
Materialized Views appear twice - under MV and under tables
Hi,
I've encountered a weird issue.
It seems that every materialized view also appears under the Tables tab.
For example sh.cal_month_sales_mv. Another example, run this as sh:
CREATE MATERIALIZED VIEW SH.OFIR_MV
REFRESH FORCE ON DEMAND
ENABLE QUERY REWRITE
AS SELECT sum(s.amount_sold) AS dollars
FROM sales s;
You'll see a table OFIR_MV under the Tables tab. If you try to drop it (right-click -> Table -> Drop), you get: ORA-12083: must use DROP MATERIALIZED VIEW to drop "SH"."OFIR_MV".
Also, the SQL tab of the "table" has the following remark:
-- DBMS_METADATA was unable to use TABLE_EXPORT to generate sql. Now trying to use TABLE
I guess this is the source of the issue:
select object_name,object_type from user_objects where object_name = 'OFIR_MV';
OBJECT_NAME OBJECT_TYPE
OFIR_MV TABLE
OFIR_MV MATERIALIZED VIEW
2 rows selected
So, creating an MV also adds a table entry to user_objects/tables. (Huh? That surprised me.)
Could SQL Dev filter these fake tables or at least have a different icon/color for them?
Thanks.
Ofir
BTW - Win XP, 10.2.0.3 EE, SQL Dev 1.1.1.25.14
e.g. Re: Materialized Views, indexes and Raptor

Eeerrr... no.
That user wanted to be able to access indexes, etc., which he thought could be easily accomplished by adding the MV to the Tables node.
The response was: the options on the MV will be improved (which they have been), NOT bringing it into the Tables node. As Ofir also pointed out, there's a reason for it not belonging there: the operations performed on the objects of the Tables node expect tables, not MVs.
So, between fixing the Tables node to accept MVs and their operations, and just removing the MVs from the node, I'd go for the second, faster, more correct solution.
K. -
SQL Loader: Multiple data files to Multiple Tables
How do you create one control file that references multiple data files, where each file loads data into a different table?
E.g.:
DataFile1 --> Table 1
DataFile2 --> Table 2
The contents and structure of the two data files are different. The data files are comma-separated.
Below is an example for one data file loading one table. I need to modify this or create a wrapper that would call multiple control files.
OPTIONS (SKIP=1)
LOAD DATA
INFILE 'DataFile1'
BADFILE 'DataFile1_bad.txt'
DISCARDFILE 'DataFile1_dsc.txt'
REPLACE
INTO TABLE Table1
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
Col1,
Col2,
Col3,
create_dttm sysdate,
MySeq "myseq.nextval"
)
Welcome any other suggestions.

I was thinking there might be a way to indicate which file goes with which table (structure) in one control file.
Example (this does not work, but I wonder if something similar is allowed):
OPTIONS (SKIP=1)
LOAD DATA
INFILE 'DataFile1'
BADFILE 'DataFile1_bad.txt'
DISCARDFILE 'DataFile1_dsc.txt'
REPLACE
INTO TABLE Table1
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
Col1,
Col2,
Col3,
create_dttm sysdate,
MySeq "myseq.nextval"
)
INFILE 'DataFile2'
BADFILE 'DataFile2_bad.txt'
DISCARDFILE 'DataFile2_dsc.txt'
REPLACE
INTO TABLE "T2"
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
T2Col1,
T2Col2,
T2Col3
) -
Hi gurus,
I have a question about building SQL trees over several tables.
This is the Output I hope for:
0
- 10, 'Company Blue1', 0
-- 101, 'Part Blue1', 10
--- 1001, 'Accounting Blue', 101
---- 10001, 'Hans Mueller', 1001
--- 1002, 'Special Problems Blue', 101
---- 10002, 'Stephen Meyer', 1002
---- 10003, 'Carlos Anceto', 1002
-- 102, 'Part Blue2', 10
--- 1003, 'Information Technology Blue', 102
---- 10004, 'Tobias Tries', 1003
- 20, 'Company Red1', 0
-- 201, 'Part Red1', 20
--- 2001, 'Accounting Red', 201
---- 20001, 'Carl Van Deser', 2001
---- 20002, 'Geromel Boats', 2002
- 30, 'Company Green1', 0
-- 301, 'Part Green1', 30
--- 3001, 'Accounting Green', 301
---- 30002, 'Peter Finnighan', 3001
--- 3003, 'Special Problems Green', 301
---- 30001, 'Loui Van Ecke', 3003

This is the situation: I have 4 tables which can be constrained by foreign keys and which all together build up a tree.
Table1:
tbl_company (c_id, company_name)
Values:
10, 'Company Blue1'
20, 'Company Red1'
30, 'Company Green1'
Table2:
tbl_company_parts (cp_id, part_name, company_id)
Values:
101, 'Part Blue1', 10
102, 'Part Blue2', 10
201, 'Part Red1',20
301, 'Part Green1',30
Table3:
tbl_departments (d_id, dept_name, part_id)
Values:
1, 'Accounting Blue', 101
2, 'Special Problems Blue', 101
3, 'Information Technology Blue', 102
4, 'Accounting Red',201
5, 'Accounting Green',301
6, 'Special Problems Green',301
Table4:
tbl_employees (e_id, emp_name, department_id)
Values:
1, 'Hans Mueller', 1
2, 'Stephen Meyer', 2
3, 'Carlos Anceto', 2
4, 'Carl Van Deser',4
5, 'Geromel Boats', 4
6, 'Loui Van Ecke',5
7, 'Peter Finnighan',6
8, 'Tobias Tries', 3

The problem is that I don't know how to concatenate all these values and use the CONNECT BY clause to create this tree view.

Hi Tobias,
It was not exactly clear how the id's had to be calculated, but this example will get you going:
SQL> create table tbl_company (c_id, company_name)
2 as
3 select 10, 'Company Blue1' from dual union all
4 select 20, 'Company Red1' from dual union all
5 select 30, 'Company Green1' from dual
6 /
Table created.
SQL> create table tbl_company_parts (cp_id, part_name, company_id)
2 as
3 select 101, 'Part Blue1', 10 from dual union all
4 select 102, 'Part Blue2', 10 from dual union all
5 select 201, 'Part Red1',20 from dual union all
6 select 301, 'Part Green1',30 from dual
7 /
Table created.
SQL> create table tbl_departments (d_id, dept_name, part_id)
2 as
3 select 1, 'Accounting Blue', 101 from dual union all
4 select 2, 'Special Problems Blue', 101 from dual union all
5 select 3, 'Information Technology Blue', 102 from dual union all
6 select 4, 'Accounting Red',201 from dual union all
7 select 5, 'Accounting Green',301 from dual union all
8 select 6, 'Special Problems Green',301 from dual
9 /
Table created.
SQL> create table tbl_employees (e_id, emp_name, department_id)
2 as
3 select 1, 'Hans Mueller', 1 from dual union all
4 select 2, 'Stephen Meyer', 2 from dual union all
5 select 3, 'Carlos Anceto', 2 from dual union all
6 select 4, 'Carl Van Deser',4 from dual union all
7 select 5, 'Geromel Boats', 4 from dual union all
8 select 6, 'Loui Van Ecke',5 from dual union all
9 select 7, 'Peter Finnighan',6 from dual union all
10 select 8, 'Tobias Tries',3 from dual
11 /
Table created.
SQL> select coalesce
2 ( case when e.department_id is null then null else
3 trunc(d.part_id,-2) * 100 + dense_rank() over (partition by d.part_id order by e.e_id)
4 end
5 , trunc(d.part_id,-2) * 10 + dense_rank() over (partition by d.part_id order by d.d_id)
6 , p.cp_id
7 , c.c_id
8 , 0
9 ) id
10 , coalesce
11 ( e.emp_name
12 , d.dept_name
13 , p.part_name
14 , c.company_name
15 ) name
16 , coalesce
17 ( trunc(d.part_id,-2) * 10 + dense_rank() over (partition by d.part_id order by d.d_id)
18 , d.part_id
19 , p.company_id
20 , case grouping(c.c_id) when 0 then 0 end
21 ) parent_id
22 from tbl_company c
23 , tbl_company_parts p
24 , tbl_departments d
25 , tbl_employees e
26 where c.c_id = p.company_id
27 and p.cp_id = d.part_id
28 and d.d_id = e.department_id
29 group by rollup
30 ( ( c.c_id,c.company_name)
31 , ( p.cp_id,p.part_name,p.company_id)
32 , ( d.d_id,d.dept_name,d.part_id)
33 , ( e.e_id,e.emp_name,e.department_id)
34 )
35 order by c.c_id nulls first
36 , p.cp_id nulls first
37 , d.d_id nulls first
38 , e.e_id nulls first
39 /
ID NAME PARENT_ID
0
10 Company Blue1 0
101 Part Blue1 10
1001 Accounting Blue 1001
10001 Hans Mueller 1001
1002 Special Problems Blue 1002
10002 Stephen Meyer 1002
10003 Carlos Anceto 1002
102 Part Blue2 10
1001 Information Technology Blue 1001
10001 Tobias Tries 1001
20 Company Red1 0
201 Part Red1 20
2001 Accounting Red 2001
20001 Carl Van Deser 2001
20002 Geromel Boats 2001
30 Company Green1 0
301 Part Green1 30
3001 Accounting Green 3001
30001 Loui Van Ecke 3001
3002 Special Problems Green 3002
30002 Peter Finnighan 3002
22 rows selected.

Regards,
Rob. -
Problem in creating a view over multiple tables
How do I create a view on top of the below tables? I have mentioned the key fields of each table. How do I join the tables on their common fields? I also need to create a generic extractor over the view later.
Z3PVR ( Custom Z table ):
Key Fields--> MANDT: Client; BUKRS: Company Code; WERKS: Plant; EBELN: Purchasing Document Number; ELNR: Accounting Document Number.
BSEG ( Cluster Table ):
Key Fields--> MANDT: Client; BUKRS: Company Code; BELNR: Accounting Document Number; GJAHR: Fiscal Year; BUZEI: Number of Line Item Within Accounting Document
CKIS ( Transparent Table ):
MANDT: Client; LEDNR: Ledger for Controlling objects; BZOBJ: Reference Object; KALNR: Cost Estimate Number for Cost Est. w/o Qty Structure; KALKA: Costing Type; KADKY: Costing Date (Key); TVERS: Costing Version; BWVAR: Valuation Variant in Costing; KKZMA: Costs Entered Manually in Additive or Automatic Cost Est.; POSNR: Unit Costing Line Item Number
BKPF ( Transparent Table ):
MANDT:Client; BUKRS: Company Code; BELNR: Accounting Document Number;
GJAHR: Fiscal Year
RV61A ( Structure ):
No Key
T001 ( Transparent Table ):
MANDT: Client; BUKRS: Company Code

Hi Chintai,
To create a view you can use Tcode SE11.
In the first screen of view creation you must enter the tables for the view and the join conditions; remember always to include the field MANDT.
You are including too many tables in your view, and if I am not wrong you cannot include structures in a view.
As a suggestion, start by including in the view only the tables that contain all the characteristics you need, plus the tables with a reduced number of keys (such as T001).
You can then obtain other fields from other tables via CMOD.
To build a generic extractor on this view you can use Tcode RSO2.
You can find a how-to doc on generic DataSource creation under www.service.sap.com/bi.
Ciao.
Riccardo.