Use of #tables in partition query definition
Hi everyone,
I have a cube with 10 partitions, and I am trying to use query logic like the following in each partition definition:
create table #tmp1 (col1 int)
insert into #tmp1 (col1) select col1 from bags where month = 01
create table #tmp2 (col2 int)
insert into #tmp2 (col2) select col1 from books where month = 01
create index ix_tmp1 on #tmp1 (col1)
create index ix_tmp2 on #tmp2 (col2)
select * from #tmp1 a inner join #tmp2 b on a.col1 = b.col2
The month in the WHERE conditions changes for the different partitions.
Error:
The SQL syntax is not valid. The relational database returned the following error message: The metadata could not be determined because statement 'SELECT col1 uses a temp table.
Please suggest any other alternatives, apart from the use of physical tables.
Thanks
Praxykj
Praxy
I am already using views in the partition query. Rephrasing the queries as shown below:
create table #tmp1 (col1 int)
insert into #tmp1 (col1) select col1 from Vwbags where month = 01
create table #tmp2 (col2 int)
insert into #tmp2 (col2) select col1 from Vwbooks where month = 01
create index ix_tmp1 on #tmp1 (col1)
create index ix_tmp2 on #tmp2 (col2)
select * from #tmp1 a inner join #tmp2 b on a.col1 = b.col2
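Since the SSAS partition designer cannot determine metadata from batches that use temp tables, one alternative (a sketch, assuming the view and column names from the snippet above) is to rewrite the batch as a single statement using CTEs, which need no temp tables or indexes:

```sql
-- CTE rewrite: a single SELECT, so the metadata can be resolved
WITH tmp1 AS (
    SELECT col1 FROM Vwbags WHERE month = 01
),
tmp2 AS (
    SELECT col1 AS col2 FROM Vwbooks WHERE month = 01
)
SELECT *
FROM tmp1 a
INNER JOIN tmp2 b ON a.col1 = b.col2;
```

Only the month literal then needs to change per partition.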
Praxy
Similar Messages
-
Cost of using subquery vs using same table twice in query
Hi all,
In a current project, I was asked by my supervisor what the cost difference is between the following two methods. The first method uses a subquery to get the name field from table2; a subquery is needed because it requires the field sa_id from table1. The second method uses table2 again under a different alias to obtain table2.name (the two instances of table2 are not self-joined). The outcome of these two queries is the same.
Using subquery:
select a.sa_id R1, b.other_field R2,
       (select c.name from table2 c
         where c.b_id = a.sa_id) R3
  from table1 a, table2 b
 where ...

Using same table twice (table2 under 2 different aliases):

select a.sa_id R1, b.other_field R2, c.name R3
  from table1 a, table2 b, table2 c
 where c.b_id = a.sa_id
   and ...

Can anyone tell me which version is better and why (or under what circumstances each version is better)? And what are the costs involved? Many thanks.

pl/sql novice wrote:
Hi all,
In a current project, I was asked by my supervisor what the cost difference is between the following two methods. The first method uses a subquery to get the name field from table2; a subquery is needed because it requires the field sa_id from table1. The second method uses table2 again under a different alias to obtain table2.name (the two instances of table2 are not self-joined). The outcome of these two queries is the same.
Using subquery:
Using same table twice (table2 under 2 different aliases)
Can anyone tell me which version is better and why (or under what circumstances each version is better)? And what are the costs involved? Many thanks.

In theory, if you use the scalar subquery approach, the correlated subquery needs to be executed for each row of your result set. Depending on how efficiently the subquery is performed, this could require significant resources, since that recursive SQL needs to be executed for each row.
The "join" approach needs to read the table only twice; maybe it can even use an indexed access path. So in theory the join approach should perform better in most cases.
Now the Oracle runtime engine (since version 8) has a feature called "filter optimization" that also applies to correlated scalar subqueries. Basically it works like an in-memory hash table that caches the (hashed) input values to the (deterministic) correlated subquery and the corresponding output values. The number of entries in the hash table is fixed up to 9i (256 entries), whereas in 10g it is controlled by an internal parameter that determines the size of the table (which can therefore hold a different number of entries depending on the size of each element).
If the input value of the next row matches that of the previous row, this optimization immediately returns the corresponding output value without any further action. Otherwise, if the input value can be found in the hash table, the corresponding output value is returned; if not, the subquery is executed and an attempt is made to store the new input/output combination in the hash table. If a hash collision occurs, the combination is discarded.
So the effectiveness of this clever optimization largely depends on three different factors: The order of the input values (because as long as the input value doesn't change the corresponding output value will be returned immediately without any further action required), the number of distinct input values and finally the rate of hash collisions that might occur when attempting to store a combination in the in-memory hash table.
In summary, unfortunately you can't really tell in advance how well this optimization will work at runtime, and it therefore can't be properly reflected in the execution plan.
You need to test both approaches individually, because in the optimal case the optimized scalar subquery will be superior to the join approach, but it could also well be the other way around, depending on the factors mentioned.
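A simple way to test both (a sketch; `big` is a hypothetical copy of ALL_OBJECTS, and the correlated lookup against ALL_USERS stands in for the real subquery) is to time each form under autotrace and compare the statistics:

```sql
-- Hypothetical test table
CREATE TABLE big AS SELECT owner, object_name FROM all_objects;

SET TIMING ON
SET AUTOTRACE TRACEONLY STATISTICS

-- Scalar subquery form: results may be cached per distinct OWNER value
SELECT b.object_name,
       (SELECT u.created FROM all_users u WHERE u.username = b.owner) created
  FROM big b;

-- Join form: each source is read once; note an inner join drops rows
-- with no match, where the scalar subquery would return NULL instead
SELECT b.object_name, u.created
  FROM big b, all_users u
 WHERE u.username = b.owner;
```

Sorting the driving table by the correlated column (ORDER BY owner) typically raises the cache hit rate of the scalar subquery form, which illustrates the ordering factor mentioned above.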
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Runtime error using range table in select query
I have to select tcodes from table tstc based on the entries in ust12.
The entries in ust12-von and ust12-bis can also contain wildcard characters, and I have to select all the tcodes from von to bis.
So I am preparing a range table from the entries in ust12 and querying table tstc; then I get a runtime error with the following description:
'If the problem occurred because an excessively large table was used
in an IN itab construct, you can use FOR ALL ENTRIES instead.'
But how can I use FOR ALL ENTRIES here? If von = A* and bis = AB*, then I have to read all the entries from AAAA to ABZZ (or something like that).
Is there any way to write this query without the runtime error?
There are 15,000 entries in total in ust12; I am preparing range tables of 3,000 entries each and querying tstc.
Thanks in advance
Best Regards
Amarender Reddy B

Hi,
First write a select on ust12 based on ust12-von and ust12-bis, e.g.:

SELECT von bis FROM ust12 INTO TABLE gt_ust12
  WHERE von LIKE 'A%'
    AND bis LIKE 'AB%'.

Now write another select on tstc FOR ALL ENTRIES IN gt_ust12.
Hope it helps
Regards,
Pavan -
Using specific table index in query
We have a project that has a need of specifying specific table index in its queries. Does TopLink support this when Expressions are used to specify query criteria? Thanks.
Haiwei

Use hints.
http://www.oracle.com/technology/products/ias/toplink/doc/1013/main/_html/qryadv008.htm -
Plz Help : Can use drop table in dataset query...?
Can I use this script in my dataset Query in SSRS 2008 R2?
Hi NafisehPanahi,
Reporting Services provides both a graphical query designer and a text-based query designer for creating queries to retrieve data from a relational database for a report dataset in Report Designer.
The text-based query designer does not preprocess the query and can accommodate any kind of query syntax; therefore, you can select "Edit as Text" and type the same command you would use in SQL Server Management Studio (SSMS).
The graphical query designer supports three types of query commands: Text which supports standard Transact-SQL query text for relational database data sources, StoredProcedure, or TableDirect.
For the details, please see the links below:
Graphical Query Designer User Interface
Text-based Query Designer User Interface
Regards,
Heidi Duan
TechNet Community Support -
Hi all,
Can we use an internal table in an ABAP/SAP Query (Infoset - SQ02)? If yes, please guide me on the same.
Thanks in advance
Regards
Madhumathi A

To my knowledge you can't use internal tables in an ABAP query... it is a mix of tables...
-
Dbms_redefinition used to convert a non partitioned table to partitioned
A table is created with below DDL
CREATE TABLE TEST1("EQUIPMENT_DIM_ID" NUMBER(9,0) NOT NULL ENABLE,
"CARD_DIM_ID" NUMBER(9,0),
"NH21_DIM_ID" NUMBER(5,0) NOT NULL ENABLE);
Interim table created with
CREATE TABLE INTERIM("EQUIPMENT_DIM_ID" NUMBER(9,0) NOT NULL ENABLE,
"CARD_DIM_ID" NUMBER(9,0),
"NH21_DIM_ID" NUMBER(5,0) NOT NULL ENABLE)
PARTITION BY RANGE ("EQUIPMENT_DIM_ID")
(PARTITION "P0" VALUES LESS THAN (1)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS,
PARTITION "P1" VALUES LESS THAN (2)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS);
I performed dbms_redefinition (start, sync, and finish) to convert the table test1 data from non-partitioned to partitioned. At the end of the conversion, dbms_metadata.get_ddl shows the following:
CREATE TABLE TEST("EQUIPMENT_DIM_ID" NUMBER(9,0) CONSTRAINT "SYS_C005605" NOT NULL ENABLE,
"CARD_DIM_ID" NUMBER(9,0),
"NH21_DIM_ID" NUMBER(5,0) CONSTRAINT "SYS_C005601" NOT NULL ENABLE)
PARTITION BY RANGE ("EQUIPMENT_DIM_ID")
(PARTITION "P0" VALUES LESS THAN (1)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS,
PARTITION "P1" VALUES LESS THAN (2)
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS);
Can you help me hide or remove the CONSTRAINT "SYS_C005605" etc. sections showing in the DDL for the table? The reason is that if I take this DDL definition and load it in another database, it can give an error if a constraint with the same name already exists.
Many thanks
Regards
Manoj Thakkan
Oracle DBA
Bangalore

Create your NOT NULL check constraints using ALTER TABLE and name them as you would a primary or foreign key: with a name that makes sense.
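A sketch of that approach, using the TEST1 columns from above (the constraint names here are illustrative):

```sql
CREATE TABLE TEST1 (
  "EQUIPMENT_DIM_ID" NUMBER(9,0),
  "CARD_DIM_ID"      NUMBER(9,0),
  "NH21_DIM_ID"      NUMBER(5,0)
);

-- Name the NOT NULL constraints explicitly, so dbms_metadata.get_ddl
-- emits portable, meaningful names instead of SYS_Cnnnnnn:
ALTER TABLE TEST1 MODIFY "EQUIPMENT_DIM_ID"
  CONSTRAINT test1_equip_dim_nn NOT NULL;
ALTER TABLE TEST1 MODIFY "NH21_DIM_ID"
  CONSTRAINT test1_nh21_dim_nn NOT NULL;
```

Loading the generated DDL into another database then only fails if these explicit names genuinely collide, which is easy to control.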
I personally have a strong dislike for system generated naming and would be thrilled if this lazy practice of defining columns as NOT NULL during table creation went away. -
Problem executing a partition query using occi in c++
I am trying to execute a simple select query which returns a row from the table. The query is:
m_stmt->setSQL("select * from table partition(:1) where mdn = :2");
m_stmt->setString(1,"P9329");
//m_stmt->setInt(2,9320496213);
ResultSet * rs = m_stmt->executeQuery();
while(rs->next())
cout<<"the value of preferences is aaaaaaaaaaaa"<< rs->getString(3);
The problems that I am facing are as follows:
1) If I execute the query using the actual values in the select statement, it seems to work fine, but when I try the above method to make it more dynamic (the values shown would be replaced by variables), I get the following errors:
a) If I put the partition value as a positional parameter and put the mdn as a direct value in the query, it says "SQL command not properly ended".
b) If I put the partition value directly and put the mdn as a positional parameter, the error is "invalid character".
Any help would be much appreciated. Thanks in advance.

Hi Leonard,
Thanks for letting me know that... that's pretty disappointing. Looks like I'll have to change my strategy in my implementation.
Do you know if I can also develop functions using Acrobats SDK library methods such as "PDDocCreate()", "PDDocSave", etc. in OLE [MFC] applications?
The reason why I ask is because I have previously created a plugin that creates a PDF file and embeds a 3D annotation... so this would be the same sort of idea that the 3D Tool Menu Item achieves.
Now, if I were to use the function code within my OLE application, I will have to also include the PIMain.c file in my project as well correct?
I hope this idea is a good one... please let me know if this approach is possible.
Thanks. -
ORA-12060: shape of prebuilt table does not match definition query
Oracle version: 11G Release 2
When I am trying to create a materialized view with the ON PREBUILT TABLE syntax, I am facing the below issue.
Create table sample_table as select col1,col2,col3 from sample_view;
table created.
Create Materialized view sample_table on prebuilt table refresh complete on demand as
select col1,col2,col3 from sample_view;
I am getting the below exception
Error report:
SQL Error: ORA-12060: shape of prebuilt table does not match definition query
12060. 00000 - "shape of prebuilt table does not match definition query"
*Cause: The number of columns or the type or the length semantics of a
column in the prebuilt table did not match the materialized
view definition query.
*Action: Reissue the SQL command using BUILD IMMEDIATE, BUILD DEFERRED, or
ensure that the prebuilt table matches the materialized view
definition query.
How can I resolve this issue?

SQL> create table sample_table as
2 select owner, table_name, tablespace_name
3 from dba_tables
4 where rownum < 11;
Table created.
SQL> Create Materialized view sample_table on prebuilt table refresh complete on demand as
2 select owner, table_name, tablespace_name
3 from dba_tables;
Materialized view created.

What issue?
Which leads me to ask what version of Oracle you have because we don't know.
SELECT *
FROM v$version; -
Partitioning - query on large table v. query accessing several partitions
Hi,
We are using partitioning on a large fact table; in deciding on a partitioning strategy, we are looking for advice regarding queries which have to access several partitions versus a query against a large table.
What is quicker: a query which accesses a large table, or a query which accesses several partitions to return results?
We need to partition due to size/admin etc., but we want to make sure queries which need to access more than one partition are not significantly slower than ones which access a large table by comparison.
Ones which access just one partition are fine, but some queries have to access several partitions.
Many Thanks

Here are your choices stated another way. Is it better to:
1. Get one weeks data by reading one month's data and throwing away 75% of it (assumes partitioning by month)
2. Get one weeks data by reading three weeks of it and throwing away part of two weeks? (assumes partitioning by week)
3. Get one weeks data by reading seven daily partitions and not having to throw away any of it? (assumes daily partitioning)
I have partitioned as frequently as every 5-15 minutes (banking and telecom) and have yet to find a situation where partitions larger than the minimum date-range for the majority of queries makes sense.
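That principle can be sketched with interval partitioning (11g and later; the fact table and dates here are hypothetical): partition at the finest date grain your common queries filter on, and let pruning do the rest.

```sql
-- Hypothetical daily-partitioned fact table (11g interval partitioning)
CREATE TABLE sales_fact (
  sale_date DATE NOT NULL,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p0 VALUES LESS THAN (DATE '2012-01-01'));

-- A one-week query prunes to exactly seven daily partitions,
-- reading no rows it would have to throw away:
SELECT SUM(amount)
  FROM sales_fact
 WHERE sale_date >= DATE '2012-03-01'
   AND sale_date <  DATE '2012-03-08';
```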
Anyone can insert data into a table ... an extra millisecond per insert is generally irrelevant. What you want to do is optimize reading the data where that extra millisecond per row, over millions of rows, adds up to measurable time.
But this is Oracle, so the best answer to your questions is to recommend you not take anyone's advice on this but rather run some tests with real data, in real-world volumes, with real-world DML and queries. -
What table does BI query use?
hi all,
I'm new to BI. How do I know which tables a BI query uses? And one more question: can I review the ABAP code of a BI query?
thanks in advance for your effort.
Peerasit

Hi,
the query is created on a specific provider. Each provider consists of a bunch of tables --> fact tables, dimension tables. Additionally the sid and master data tables of the characteristics are used for reporting, but that depends pretty much on your query definition.
For the code, go to transaction RSRT, enter the name of the query, and click the Technical Information button. In the list that appears you will find the name of the generated program.
regards
Siggi -
Query Needs to be changed to only use 1 Table
I would appreciate some help with changing this query to use only the table "ain.impl_oh_order_header oh".
SELECT oh1.partner "Dealer",
od.sales_code "Dealer Sales Code",
oh1.external_order_number "Original Dealer Order Number",
oh1.region,
cust.billing_account_number "BAN",
dsl.n_order "DSL OrderNo",
dsl.tn_or_ftn "DSL TN",
dsl.activation_date "DSL Activation",
telco.tn "ACCESS_TN",
to_char( oh1.created_ts, 'YYYY-MM-DD') "Original Order Date",
pack1.display_name "Original Order Description",
pack1.price "Original Plan Price",
uname.user_first_name "Agent First Name",
uname.user_last_name "Agent Last Name",
uname.user_name "Agent ID",
-- to_char(dh.timestamp, 'yyyy-mm-dd HH:MI') "USCS Button Pressed",
to_char(oh2.created_ts, 'YYYY-MM-DD') "USCS Order Date",
pack2.display_name "USCS - Order Description",
CASE
WHEN cd.state IN ('1','15') THEN 'RECEIVED'
WHEN cd.state IN ('2') THEN 'PROCESSING'
WHEN cd.state IN ('3','8','16','18') THEN 'COMPLETE'
WHEN cd.state IN ('4','9','17') THEN 'CANCELED'
WHEN cd.state IN ('6','10','13','14') THEN 'PENDING'
WHEN cd.state IN ('7') THEN 'INCOMPLETE'
WHEN cd.state IN ('12','19','20') THEN 'SUBMITTED'
ELSE 'OTHER'
END AS state_desc,
pack2.price "USCS - Plan Price",
pack2.price - pack1.price "Net Change",
ina.first_name,
ina.last_name,
ina.add_line_1,
ina.city,
ina.STATE,
ina.zip
FROM ain.impl_oh_order_header oh1
INNER JOIN ain.impl_oh_order_header oh2 ON oh1.external_order_number = oh2.external_order_number
INNER JOIN ain.impl_order_data od ON oh1.transaction_id = od.transaction_id
INNER JOIN ain.impl_package pack1 ON oh1.transaction_id = pack1.transaction_id
INNER JOIN ain.impl_package pack2 ON oh2.transaction_id = pack2.transaction_id
INNER JOIN ain.impl_name_address ina ON oh1.transaction_id = ina.transaction_id
INNER JOIN ain.impl_customer cust ON oh2.transaction_id = cust.transaction_id
LEFT OUTER JOIN ain.impl_dsl dsl ON oh2.transaction_id = dsl.transaction_id
LEFT OUTER JOIN ain.impl_access telco ON oh2.transaction_id = telco.transaction_id
INNER JOIN ain.sncr_order_curr_disp cd ON cd.transaction_id = oh2.transaction_id
INNER JOIN AIN.sncr_order_disp_head dh ON oh1.transaction_id = dh.transaction_id
INNER JOIN ain.sncr_order_disposition disp ON dh.disp_transaction_id = disp.disp_transaction_id
INNER JOIN ain.sncr_ssm_principal uname ON uname.user_id = dh.user_id
WHERE oh2.created_ts BETWEEN to_date('3/01/2012 00:00:00','mm/dd/yyyy hh24:mi:ss') AND to_date('3/31/2012 23:59:00', 'mm/dd/yyyy hh24:mi:ss')
AND oh1.uscs = 0 AND oh2.uscs = 1
AND oh1.external_order_number NOT LIKE '%PROD_TEST%'
AND oh2.external_order_number NOT LIKE '%PROD_TEST%'
AND oh1.order_type = 'ORDER'
AND oh2.order_type = 'ORDER'
AND pack1.product_type = 'ORDER'
AND pack2.product_type = 'ORDER'
AND disp.category = 110 AND disp.state = 5
AND disp.trx_seq = (SELECT MAX(trx_seq)
FROM ain.sncr_order_disposition sod, ain.sncr_order_disp_head sodh
WHERE sodh.disp_transaction_id = sod.disp_transaction_id
AND sodh.transaction_id = oh1.transaction_id
AND sod.category = 110 AND sod.state = 5)
AND pack1.offer_id IS NOT NULL
AND pack2.offer_id IS NOT NULL
AND ina.type = 'SERVICE'
AND cd.category = 100
ORDER BY to_char(oh2.created_ts, 'YYYY-MM-DD')

Hi,
Try:
select
  oh.*
from
  ain.impl_oh_order_header oh

If that is not what you want, you have to give more info about your tables, what you want, etc.
See FAQ: how to ask a question
Regards,
Peter -
Hi,
I am using Access 2013 and I have the following VBA code,
strSQL = "INSERT INTO Master SELECT * from Master WHERE ID = 1"
DoCmd.RunSQL (strSQL)
when the SQL statement is run, I got this error.
SELECT * cannot be used in an INSERT INTO query when the source or destination table contains a multivalued field
Any suggestion on how to get around this?
Please advise; your help would be greatly appreciated!

Rather than modelling the many-to-many relationship type by means of a multi-valued field, do so by the conventional means of a table which resolves it into two one-to-many relationship types. You give no indication
of what is being modelled here, so let's assume a generic model where there is a many-to-many relationship type between Masters and Slaves, for which you'd have the following tables:
Masters
....MasterID (PK)
....Master
Slaves
....SlaveID (PK)
....Slave
and to model the relationship type:
SlaveMastership
....SlaveID (FK)
....MasterID (FK)
The primary key of the last is a composite one of the two foreign keys SlaveID and MasterID.
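In Access DDL the generic model above might look like this (a sketch; names and field sizes are illustrative):

```sql
-- Referenced tables
CREATE TABLE Masters (
  MasterID AUTOINCREMENT CONSTRAINT pk_masters PRIMARY KEY,
  Master   TEXT(100)
);

CREATE TABLE Slaves (
  SlaveID AUTOINCREMENT CONSTRAINT pk_slaves PRIMARY KEY,
  Slave   TEXT(100)
);

-- Table modelling the many-to-many relationship type, with a
-- composite primary key of the two foreign keys
CREATE TABLE SlaveMastership (
  SlaveID  LONG REFERENCES Slaves (SlaveID),
  MasterID LONG REFERENCES Masters (MasterID),
  CONSTRAINT pk_slavemastership PRIMARY KEY (SlaveID, MasterID)
);
```

With single-valued columns throughout, ordinary INSERT INTO ... SELECT statements work, and the multi-valued-field restriction no longer applies.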
You appear to be trying to insert duplicates of a subset of rows from the same table. With the above structure, to do this you would firstly have to insert rows into the referenced table Masters for all columns bar the key, which, presuming this to be
an autonumber column, would be assigned new values automatically. To map these new rows to the same rows in Slaves as the original subset you would then need to insert rows into SlaveMastership with the same SlaveID values as those in Slaves referenced
by those rows in Slavemastership which referenced the keys of the original subset of rows from Masters, and the MasterID values of the rows inserted in the first insert operation. This would require joins to be made between the original and the new subsets
of rows in two instances of Masters on other columns which constitute a candidate key of Masters, so that the rows from SlaveMastership can be identified.
You'll find examples of these sort of insert operations in DecomposerDemo.zip in my public databases folder at:
https://onedrive.live.com/?cid=44CC60D7FEA42912&id=44CC60D7FEA42912!169
If you have difficulty opening the link copy its text (NB, not the link location) and paste it into your browser's address bar.
In this little demo file non-normalized data from Excel is decomposed into a set of normalized tables. Unlike your situation this does not involve duplication of rows into the same table, but the methodology for the insertion of rows into a table which
models a many-to-many relationship type is broadly the same.
The fact that you have this requirement to duplicate a subset of rows into the same table, however, does make me wonder about the validity of the underlying logical model. I think it would help us if you could describe in detail just what in real world
terms is being modelled by this table, and the purpose of the insert operation which you are attempting.
Ken Sheridan, Stafford, England -
Using plsql tables in select statement of report query
Hi
Does anyone have experience using a PL/SQL table in a select statement to create a report? In other words, how do I run a report using flat file (xx.txt) information, e.g. 10 records in the flat file, use those 10 records in the report, and produce PDF output?
thanks in advance
suresh

Hi,
You can use the UTL_FILE package to do that, using a ref cursor query in the data model. You can use code like this to read data from a flat file:
declare
  ur_file   utl_file.file_type;
  my_result varchar2(250);
begin
  ur_file := utl_file.fopen('&directory', '&filename', 'r');
  utl_file.get_line(ur_file, my_result);
  dbms_output.put_line(my_result);
  utl_file.fclose(ur_file);
end;
Make sure you have an entry in your init.ora saying:
utl_file_dir = '\your directory where your files reside'
cheers!
[email protected] -
Query performance improvement using pipelined table function
Hi,
I have two select queries. One is like:
select * from table
The other uses a pipelined table function:
select *
from table(pipelined_function(cursor(select * from table)))
Which query will return the result set faster?
Please suggest methods for retrieving the data set faster (using a pipelined table function) than a normal select query.
rgds
somy

Compare the performance of these solutions:
create table big as select * from all_objects;
First test the performance of a normal select statement:
begin
for r in (select * from big) loop
null;
end loop;
end;
/

Second, a pipelined function:
create type rc_vars as object
(OWNER VARCHAR2(30)
,OBJECT_NAME VARCHAR2(30));
create or replace type rc_vars_table as table of rc_vars ;
create or replace
function rc_get_vars
return rc_vars_table
pipelined
as
cursor c_aobj
is
select owner, object_name
from big;
l_aobj c_aobj%rowtype;
begin
for r_aobj in c_aobj loop
pipe row(rc_vars(r_aobj.owner,r_aobj.object_name));
end loop;
return;
end;
/

Test the performance of the pipelined function:
begin
for r in (select * from table(rc_get_vars)) loop
null;
end loop;
end;
/

On my system the simple select statement is 20 times faster.
Correction: It is 10 times faster, not 20.
Message was edited by:
wateenmooiedag