Want to avoid temp tables
Hi.
I desperately need some suggestions. I am new to ODI and have to teach myself.
I don't want temp tables to be created. Is that possible?
Thanks
Hi, my last question: when I am creating the target DataServer and then the Physical Schema,
I can see two schemas, SCHEMA and WORK SCHEMA.
Should both schemas be the same? If not, then when I am using an IKM, will the I$ table be created in the WORK SCHEMA or not?
If I set the WORK SCHEMA different from the SCHEMA, then my SCHEMA user should have the privilege to create temp tables in the WORK SCHEMA.
Example: SCHEMA (SIMSDT1)
WORK SCHEMA (ODIDT1)
Now during I$ table creation, I guess the command should be like this (staging different from target),
executed in SIMSDT1:
create table ODIDT1.I$ (aa number)
But when I looked at the IKM "create flow table" step in the current execution, it looked like it would create the temp table in SIMSDT1.
Please tell me whether I am right or wrong. I am really confused.
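For what it's worth, when the connecting schema and the work schema differ, the connecting user needs create/drop rights in the work schema. A rough, hedged sketch using the schema names from the example above (the exact privileges to grant depend on your security policy; a simpler alternative is to point the Work Schema at SIMSDT1 itself so no cross-schema privilege is needed):

```sql
-- Run as a DBA: lets the SIMSDT1 user create, load, and drop ODI's
-- $-prefixed work tables (I$, C$, E$ ...) in other schemas such as ODIDT1.
GRANT CREATE ANY TABLE, DROP ANY TABLE,
      INSERT ANY TABLE, SELECT ANY TABLE TO SIMSDT1;
```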
Thanks
Similar Messages
-
Hi All, we are in the process of getting rid of the temporary tables. As usual, the DBAs are giving us a hard time.
Please see this After Parameter Form trigger. It creates a table TEMP_SERVICES. Can I avoid the table?
Can I do the calculations in PL/SQL?
Please help
BEGIN
  DECLARE
    WS_SERVICE VARCHAR2(7) := NULL;
    WS_SSN     VARCHAR2(9) := NULL;
    C_SERVICE  VARCHAR2(7) := NULL;
    C_SSN      VARCHAR2(9) := NULL;
    CURSOR INPUT_SERVICES IS
      SELECT SERVICE, SSN
        FROM SERVICES
       WHERE TO_CHAR(PROVIDER) LIKE :up_provider
       ORDER BY 1, 2;
    INPUT_REC INPUT_SERVICES%ROWTYPE;
  BEGIN
    SRW.DO_SQL('create table temp_services
                ( service varchar2(7),
                  ssn     varchar2(9))');
    OPEN INPUT_SERVICES;
    LOOP
      FETCH INPUT_SERVICES INTO INPUT_REC;
      EXIT WHEN INPUT_SERVICES%NOTFOUND;
      C_SERVICE := INPUT_REC.SERVICE;
      C_SSN     := INPUT_REC.SSN;
      IF INPUT_SERVICES%ROWCOUNT = 1 THEN
        WS_SERVICE := C_SERVICE;
        WS_SSN     := C_SSN;
      ELSIF C_SERVICE != WS_SERVICE THEN
        SRW.DO_SQL('INSERT INTO TEMP_SERVICES
                    VALUES(''' || WS_SERVICE || ''',''' || WS_SSN || ''')');
        SRW.DO_SQL('COMMIT');
        WS_SERVICE := C_SERVICE;
        WS_SSN     := C_SSN;
      ELSIF C_SSN != WS_SSN THEN
        SRW.DO_SQL('INSERT INTO TEMP_SERVICES
                    VALUES(''' || WS_SERVICE || ''',''' || WS_SSN || ''')');
        SRW.DO_SQL('COMMIT');
        WS_SERVICE := C_SERVICE;
        WS_SSN     := C_SSN;
      END IF;
    END LOOP;
    CLOSE INPUT_SERVICES;
    SRW.DO_SQL('INSERT INTO TEMP_SERVICES
                VALUES(''' || WS_SERVICE || ''',''' || WS_SSN || ''')');
    SRW.DO_SQL('COMMIT');
  END;
  RETURN (TRUE);
END;
Please help

See:
"Using a Collection Instead of a Temporary Table in Complex Reports"
http://www.quest-pipelines.com/pipelines/plsql/tips03.htm#NOVEMBER
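The idea in that tip, sketched very roughly with a PL/SQL collection in place of TEMP_SERVICES (the control-break logic from the trigger above is omitted; only the pattern is shown, and the names here are illustrative):

```sql
DECLARE
  -- an in-memory collection stands in for the TEMP_SERVICES scratch table
  TYPE service_rec_t IS RECORD (service VARCHAR2(7), ssn VARCHAR2(9));
  TYPE service_tab_t IS TABLE OF service_rec_t INDEX BY PLS_INTEGER;
  g_services service_tab_t;
  l_idx      PLS_INTEGER := 0;
BEGIN
  -- accumulate rows in memory instead of INSERTing into a temp table
  FOR r IN (SELECT service, ssn FROM services ORDER BY 1, 2) LOOP
    l_idx := l_idx + 1;
    g_services(l_idx).service := r.service;
    g_services(l_idx).ssn     := r.ssn;
  END LOOP;
  -- read the collection wherever the report previously read TEMP_SERVICES
  FOR i IN 1 .. g_services.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(g_services(i).service || ' ' || g_services(i).ssn);
  END LOOP;
END;
```

No DDL, no cross-statement commits, and nothing to clean up afterwards.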
Regards,
Zlatko Sirotic -
How to avoid full Table scan when using Rule based optimizer (Oracle817)
1. We have an Oracle 8.1.7 DB, and the optimizer_mode is set to "RULE"
2. There are three indexes on table cm_contract_supply, which is a large table with 28,732,830 rows and an average row length of 149 bytes
COLUMN_NAME INDEX_NAME
PROGRESS_RECID XAK11CM_CONTRACT_SUPPLY
COMPANY_CODE XIE1CM_CONTRACT_SUPPLY
CONTRACT_NUMBER XIE1CM_CONTRACT_SUPPLY
COUNTRY_CODE XIE1CM_CONTRACT_SUPPLY
SUPPLY_TYPE_CODE XIE1CM_CONTRACT_SUPPLY
VERSION_NUMBER XIE1CM_CONTRACT_SUPPLY
CAMPAIGN_CODE XIF1290CM_CONTRACT_SUPPLY
COMPANY_CODE XIF1290CM_CONTRACT_SUPPLY
COUNTRY_CODE XIF1290CM_CONTRACT_SUPPLY
SUPPLIER_BP_ID XIF801CONTRACT_SUPPLY
COMMISSION_LETTER_CODE XIF803CONTRACT_SUPPLY
COMPANY_CODE XIF803CONTRACT_SUPPLY
COUNTRY_CODE XIF803CONTRACT_SUPPLY
COMPANY_CODE XPKCM_CONTRACT_SUPPLY
CONTRACT_NUMBER XPKCM_CONTRACT_SUPPLY
COUNTRY_CODE XPKCM_CONTRACT_SUPPLY
SUPPLY_SEQUENCE_NUMBER XPKCM_CONTRACT_SUPPLY
VERSION_NUMBER XPKCM_CONTRACT_SUPPLY
3. We are querying the table for a particular contract_number and version_number. We want to avoid full table scan.
SELECT /*+ INDEX(XAK11CM_CONTRACT_SUPPLY) */
rowid, pms.cm_contract_supply.*
FROM pms.cm_contract_supply
WHERE
contract_number = '0000000000131710'
AND version_number = 3;
However, despite the hint, the query results are fetched via a full table scan.
Execution Plan
0 SELECT STATEMENT Optimizer=RULE (Cost=1182 Card=1 Bytes=742)
1 0 TABLE ACCESS (FULL) OF 'CM_CONTRACT_SUPPLY' (Cost=1182 Card=1 Bytes=742)
4. I have tried giving
SELECT /*+ FIRST_ROWS + INDEX(XAK11CM_CONTRACT_SUPPLY) */
rowid, pms.cm_contract_supply.*
FROM pms.cm_contract_supply
WHERE
contract_number = '0000000000131710'
AND version_number = 3;
and
SELECT /*+ CHOOSE + INDEX(XAK11CM_CONTRACT_SUPPLY) */
rowid, pms.cm_contract_supply.*
FROM pms.cm_contract_supply
WHERE
contract_number = '0000000000131710'
AND version_number = 3;
But it does not work.
Is there some way, without changing the optimizer mode and without creating an additional index, that we can use the index instead of a full table scan?

David,
Here is my test on an Oracle 10g database.
SQL> create table mytable as select * from all_tables;
Table created.
SQL> set autot traceonly
SQL> alter session set optimizer_mode = choose;
Session altered.
SQL> select count(*) from mytable;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'MYTABLE' (TABLE)
Statistics
1 recursive calls
0 db block gets
29 consistent gets
0 physical reads
0 redo size
223 bytes sent via SQL*Net to client
276 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> analyze table mytable compute statistics;
Table analyzed.
SQL> select count(*) from mytable
2 ;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'MYTABLE' (TABLE) (Cost=11 Card=1788)
Statistics
1 recursive calls
0 db block gets
29 consistent gets
0 physical reads
0 redo size
222 bytes sent via SQL*Net to client
276 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options -
Hello !
I am really stuck on a concept. I want to use temp tables inside a transaction, and when I create the temp table inside my transaction I use
CREATE GLOBAL TEMPORARY TABLE Y ON COMMIT DELETE ROWS AS SELECT * FROM JOBS;
However, when I have not yet sent a commit, I perform a select from this table and there is no data. It seems to me that Oracle is performing an implicit commit here.
Have I misunderstood this?
Is there any way to place the actual CREATE TABLE statement under transactional control? Alternatively, is there any way I can create a table to the specification of another table without actually having the overhead of returning rows?
Many Thanks

The statement is DDL, so Oracle performs a commit before and after it. That removes any rows that were inserted by your statement.
You need to separate it into two statements:
SQL> CREATE GLOBAL TEMPORARY TABLE Y ON COMMIT DELETE ROWS AS SELECT * FROM DBA_JOBS WHERE 1 = 2 ;
Table created.
SQL> SELECT COUNT(*) FROM Y ;
COUNT(*)
0
1 row selected.
SQL> INSERT INTO Y SELECT * FROM DBA_JOBS ;
2 rows created.
SQL> SELECT COUNT(*) FROM Y ;
COUNT(*)
2
1 row selected.
SQL> COMMIT ;
Commit complete.
SQL> SELECT COUNT(*) FROM Y ;
COUNT(*)
0
1 row selected.
SQL> -
How to create Temp Table?
Hi..
I have got two tables in my database, and I want to output all fields of both tables in one grid. So, can I create a temp table in ColdFusion MX7? Here is the structure of my two tables.
table 1) Company_name, exp_Value
table 2) Company_name, imp_Value
I want to create a temp table like:
temp table ) Company_name (table1 + table2), exp_value (table 1), imp_value (table 2)
So, how can I solve this problem?
All solutions will be appreciated.
best,
Z(Z) wrote:
>
quote:
Originally posted by:
cf_dev2
> Try using a FULL OUTER JOIN
>
> select coalesce(t1.Company_name, t2.Company_name) AS
Company_name,
> t1.exp_Value, t2.imp_Value
> from table1 as t1 full outer join table2 as t2
> on t1.Company_name = t2.Company_name
>
> Thanks, buddy. But it's not ok for my project, because millions of records
> are in each table. If I do so, the page load is really huge and cannot be
> displayed within 10 seconds. Another solution?
>
> All solutions will be appreciated.
another solution?
well, what exactly are you trying to do if none of the solutions given so far work for you?
i believe the suggested solutions covered all possibilities, save for a UNION query, which, from what i can understand from your explanations, is NOT what you need either...
if cf_dev2's OUTER JOIN suggestion is "not ok" for your project - i don't know which one will be "ok" then...
how many records are you expecting in your "temp table" if not all the records you have in the db? how can you expect fewer than all records if using INNER JOIN is not acceptable to you because "some company_names would be lost if table (1) company_name isn't included in table (2)"???
you really should look at normalizing your db, i think...
Azadi Saryev
Sabai-dee.com
http://www.sabai-dee.com -
Want to use the same #TEMP table for multiple datasets in SSRS 2005
I am using Visual Studio 2005 to create SSRS 2005 reports. The report will consist of two matrices from two different datasets that gather data from the same data source. The first dataset is a procedure that inserts data from a query into a #temp table and outputs it in the first matrix. I want the second dataset to be a different procedure that references the same #temp table. When the report has been rendered, the #temp table can then be dropped.

Try the dataset properties: in the Query tab, under and to the right of "Data source:", click the box with the ellipses (...). Then, in the General tab, toward the bottom, there is a check box beside "Use single transaction".
Rakesh M J | MCTS,MCITP ( SQL SERVER 2008 )
Dont forget to mark it as Answered if found useful |
myspeakonbi -
I have one table, "LOGTABLE", with 3000 records; 3-4 records are inserted into it daily. The table has 11 columns and no primary-key column.
I created another temp table, "LOGTABLEMONITOR" (copied from "LOGTABLE", with the same 3000 records).
I want the 3-4 records inserted daily into "LOGTABLE" to also be inserted into the temp table "LOGTABLEMONITOR" (if those records do not already exist in "LOGTABLEMONITOR").

SELECT <columns>
FROM LOGTABLE
EXCEPT
SELECT <columns>
FROM LOGTABLEMONITOR;
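To actually append the missing rows rather than just list them, the same EXCEPT query can feed an INSERT. A sketch with placeholder column names (col1, col2, col3 stand in for the real 11-column list, which isn't shown in the post):

```sql
-- Adds only those LOGTABLE rows not already present in LOGTABLEMONITOR.
INSERT INTO LOGTABLEMONITOR (col1, col2, col3)
SELECT col1, col2, col3 FROM LOGTABLE
EXCEPT
SELECT col1, col2, col3 FROM LOGTABLEMONITOR;
```

Run daily (or from a job), this keeps the monitor table in sync without needing a primary key on either table.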
Best Regards, Uri Dimant, SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
Table or Object type - like #temp table in SQL Server
Hi
I need to create a temp table to hold certain data and then validate it. What is the best way to do this in Oracle? Something similar to #temp tables in SQL Server.
Thanks

In Oracle, you create the temporary table once, before you start your program. Then anyone can use that definition, but the system keeps the data isolated to each user/session.
The difference in Oracle: all DDL, including creating temp tables, performs commits and acquires locks that you want to avoid. It creates unnecessary serialization, causes transactional consistency issues, and puts Oracle's read-consistent model at risk (of ORA-01555 errors).
So, you (or the DBA) would "CREATE GLOBAL TEMPORARY TABLE ..." with the appropriate definition you want, and indicate whether you want the data deleted on commit, or on logoff.
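That one-time definition might look like this (table and column names are illustrative):

```sql
-- Created once, e.g. by the DBA; every session then gets its own private rows.
CREATE GLOBAL TEMPORARY TABLE validation_stage (
    id          NUMBER,
    status_code VARCHAR2(10)
) ON COMMIT DELETE ROWS;  -- or ON COMMIT PRESERVE ROWS to keep data until logoff
```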
Then you write your procedure, similar to the way you would do it in SQL Server, but you would not bracket it with creating/dropping the temp table - no need. -
Best options to use in Temp Table
Hello,
I was just trying to figure out the best option to choose when we come across a scenario where we need to use a temp table, a table variable, or a temp table created on the fly.
However, I could not see any big difference between those options. As per my understanding, using a table variable is more convenient if the query logic is small and the result set is comparatively small. Creating a temp table is also an easy option, but it takes more time and we cannot create indexes on it. I am working on a query optimization task in which plenty of temp tables are used, and the query takes more than five minutes to execute. We have created a few indexes here and there and reduced the query execution time to about 2 minutes. Can anyone give me more suggestions? I have gone through various articles and came to know that there is no one solution for this, and I am aware of the basic criteria: use SET NOCOUNT ON, order the tables in which the indexes are created, do not use SELECT * (use only the columns which are really required), create indexes, and so on. Other than these, I am stuck with the usage of temp tables. There are some limitations where I can convert all the temp table logic to CTEs (I am not saying it's not possible; I really don't have time to spend on the conversion). Any suggestions are welcome.
Actual Query

select Code, dbo.GetTranslatedText(Name, 'en-US') as Name
from ProductionResponse.ProductionResponse

00.00.02
5225 rows
With Table Variable

DECLARE @General TABLE (
    Code NVarchar(Max),
    Name NVarchar(Max)
)
INSERT INTO @General
select Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
from ProductionResponse.ProductionResponse

select * from @General

00.00.03
5225 rows
With an Identity Column

DECLARE @General TABLE (
    Id INT IDENTITY(1,1),
    Code NVarchar(Max),
    Name NVarchar(Max)
)
INSERT INTO @General
select Code, dbo.GetTranslatedText(Name, 'en-US') AS Name
from ProductionResponse.ProductionResponse

select * from @General

00.00.04
5225 rows
With Temp Table:

CREATE TABLE #General (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Code NVarchar(Max),
    Name NVarchar(Max)
)
INSERT INTO #General
select Code, dbo.GetTranslatedText(Name, 'en-US') as Name
from ProductionResponse.ProductionResponse

select * from #General
DROP TABLE #General

00.00.04
5225 rows
With Temp Table on the Fly

SELECT G.Code, G.Name
INTO #General
FROM (
    select Code, dbo.GetTranslatedText(Name, 'en-US') as Name
    from ProductionResponse.ProductionResponse
) G

select * from #General

00.00.04
5225 rows

>> I was just trying to figure out the best options we can choose with when we come across a scenario where we need to use a Temp Table/Table Variable/Create a Temp Table on the fly. <<
Actually, we want to avoid all of those things in a declarative/functional language. The goal is to write the solution in a single statement. What you are doing is mimicking a scratch tape in a 1950's tape file system.
Another non-declarative technique is to use UDFs, to mimic 1950's procedural code or OO-style methods. Your sample code is full of COBOL-isms! In RDBMS we follow ISO-11179 rules, so we have “<something in particular>_code” rather than just “code”, like a field within a COBOL record. The hierarchical record structure provides context, but in RDBMS, data elements are global. Or better, they are universal names.
>> I am aware of the basic criteria like use SET NOCOUNT ON, order the table in which the indexes are created, do not use SELECT *, instead use only columns which are really required, CREATE INDEXes and all. <<
All good, but you missed others. Never use the same name for a data element (a scalar) and a table (a set). Think about what something like “ProductionResponse.production_response” means. A set with one element is a bit weird, but that is what you said. Also, what is this response? A code? A count? It lacks what we call an attribute property.
This was one of the flaws we inherited when ANSI standardized SQL, and we should have fixed it. Oh well, too late now.
Never use NVARCHAR(MAX). Why do you need to put all of the Soto Zen sutras in Chinese Unicode? When you use over-sized data elements, you eventually get garbage data.
>> Other than these I am stuck with the usage of temp tables. There are some limitations where I can convert all the Temp table logic to CTE (I am not saying its not possible, I really do not have time to spend for the conversion). Any suggestions are welcome. <<
Yes! This is how we do declarative/functional programming! Make the effort, so the optimizer can work, so you can use parallelism, and so you can port your code out of the T-SQL dialect.
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
in Sets / Trees and Hierarchies in SQL -
Temp tables using an interfaces in ci
What is the purpose of temp tables in a CI while performing interfaces? Can anyone explain the processing to me?

Processing a CI messes up SQL cursors, I recently discovered. A temp table, or a rowset to retrieve the values you're looping over (if applicable, of course), can help resolve this issue.
If you mean App Engine processing in general (as in the post above), it is mostly to avoid looping per row and doing the same action over and over again, or to avoid having to put a lot of large tables into one query (you could update some of the fields in different steps). For instance, suppose I want to update an amount for each person. Then I could have:
In most cases the less performant version:
DoSelect:
%Select(EMPLID)
SELECT EMPLID FROM your query
Followed by:
INSERT INTO My_Result
SELECT * FROM My_Table
Where emplid = %Bind(Emplid)
In most case better:
SQL step:
Insert into %Table(My_ExmplTMP) (EMPLID) SELECT EMPLID FROM your query
Followed by:
INSERT INTO My_Result
SELECT * FROM My_Table Tbl, %Table(My_ExmplTMP) Tmp
Where Tbl.emplid = TMP.Emplid -
Insert into some sort of temp table?
Hi there,
Not sure how to do this. I have two CTEs:
with a as (select field1, field2, field3 from table1),
b as (select field1, field2, field3 from table2)
Now I would like to insert each result from the above CTEs into some kind of table or something. I would not want to use a temp table because I read that's not a good approach. I'm kind of new to the Oracle world; I'm coming from a SQL Server background, and the way I would do it in SQL Server would be something like creating a table variable and inserting all the result sets into that table variable, hence avoiding the creation of a temp table, which in Oracle is more like a permanent table, I read.
Let me know what would be the best approach to accomplish this task here.
Thanks very much for your help.

Hi,
Sorry, I don't understand what you want.
Whenever you have a problem, please post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) from all tables involved.
Also post the results you want from that data, and an explanation of how you get those results from that data, with specific examples.
Simplify the problem as much as possible. Remove all tables and columns that play no role in this problem.
If you're asking about a DML statement, such as UPDATE, the CREATE TABLE and INSERT statements should re-create the tables as they are before the DML, and the results will be the contents of the changed table(s) when everything is finished.
Always say which version of Oracle you're using.
Midway wrote:
Hi there,
Not sure how to do this, I have three Two CTEs
with a as (select field1, field2, field3 from table1),
b as (select field1, field2, field3 from table2)

Why do you need CTEs? Why can't you just use table1 and table2?
Now I would like to insert each result from the above CTE into some kind of table or something. I would not want to use a temp table cause I read that's not a good approach. I'm kind of new to the Oracle world. I'm coming from a SQL server background and the way I would do it in SQL server would something like creating a Table variable and inserting all the result sets into that table variable, hence avoiding the creation of a temp table with in Oracle its more like a permanent table I read.
Let me know what would be the best approach to accomplish this task here.

What is the task that you want to accomplish?
Is INSERTing really the goal, or is that a means you might use to accomplish your real task?
If all you want to do is generate a specific output, then I'm sure you don't need any other tables, temporary or otherwise. Exactly how to do it depends on what data is in your actual tables, and what results you want from that data. As long as I don't know where you're starting from, or where you want to go, I can't give you very good directions. -
Hi,
Instead of normal temporary tables, I am using global temporary tables for the report calculation.
Generally, we insert a timestamp into normal temp tables to avoid inconsistency of records in a multi-user environment.
Similarly, I want to clarify whether we need to insert a timestamp into global temporary tables or not.
Thanks in advance.

The metadata of a global temporary table is stored in the data dictionary and can be used from every session. The data are stored in the temporary tablespace per session: every session using the global temporary table has its own set of data and can't see or modify the rows of another session. The data in a global temporary table are available until the next COMMIT or ROLLBACK by default, or until the session ends if created with "create global temporary table ... on commit preserve rows". So no, a timestamp is not needed to keep sessions apart.
hope this helps
corrections welcome -
Multiple users accessing the same data in a global temp table
I have a global temp table (GTT) defined with 'on commit preserve rows'. This table is accessed via a web page using ASP.NET. The application was designed so that every one that accessed the web page could only see their data in the GTT.
We have just realized that the GTT doesn't appear to be empty as new web users use the application. I believe it has something to do with how ASP is connecting to the database. I only see one entry in the V$SESSION view even when multiple users are using the web page. I believe this single V$SESSION entry is causing only one GTT to be available at a time. Each user is inserting into / selecting out of the same GTT and their results are wrong.
I'm the back end Oracle developer at this place and I'm having difficulty translating this issue to the front end ASP team. When this web page is accessed, I need it to start a new session, not reuse an existing session. I want to keep the same connection, but just start a new session... Now I'm losing it.. Like I said, I'm the back end guy and all this web/connection/pooling front end stuff is magic to me.
The GTT isn't going to work unless we get new sessions. How do we do this?
Thanks!

DGS wrote:
I have a global temp table (GTT) defined with 'on commit preserve rows'. This table is accessed via a web page using ASP.NET. The application was designed so that every one that accessed the web page could only see their data in the GTT.
We have just realized that the GTT doesn't appear to be empty as new web users use the application. I believe it has something to do with how ASP is connecting to the database. I only see one entry in the V$SESSION view even when multiple users are using the web page. I believe this single V$SESSION entry is causing only one GTT to be available at a time. Each user is inserting into / selecting out of the same GTT and their results are wrong.
I'm the back end Oracle developer at this place and I'm having difficulty translating this issue to the front end ASP team. When this web page is accessed, I need it to start a new session, not reuse an existing session. I want to keep the same connection, but just start a new session... Now I'm losing it.. Like I said, I'm the back end guy and all this web/connection/pooling front end stuff is magic to me.
The GTT isn't going to work unless we get new sessions. How do we do this?
Thanks!

You may want to try changing your GTT to 'ON COMMIT DELETE ROWS' and have the .NET app use a transaction object.
We had a similar problem and I found help in the following thread:
Re: Global temp table problem w/ODP?
All the best. -
How to read the data file and write into the same file without a temp table
Hi,
I have a requirement as below:
We are running the lockbox process for several businesses, but for a few businesses we receive a flat file in a format other than the one the transmission format defines.
This is a 10.7 to 11.10 migration. In 10.7 the users use a custom table into which they first load the raw data, run a PL/SQL validation on it, write it out to a new flat file, and then run the lockbox process.
But in 11.10 we want to restrict the use of the temp table. How can we achieve this?
Can we read the file first, do the validations accordingly, then write to the same file and process the lockbox?
Any inputs are highly appreciated.
Thanks & Regards,
Lakshmi Kalyan Vara Prasad.

Hello Gurus,
Let me tell you about my requirement clearly with an example.
Problem:
i am receiving a dat file from bank in below format
105A371273020563007 07030415509174REF3178503 001367423860020015E129045
In this detail record, the 15 characters starting at character 38 are the merchant reference number:
REF3178503 --- a REF prefix denotes a Sales Order
an ACC prefix denotes a Customer No
an INV prefix denotes a Transaction Number
My validation is based on these 15 characters.
If I see REF, I need to pick that complete record, fill it with the SO details as per my system, and then submit the file for lockbox processing.
In 10.7 they created a temporary table into which they load the data using a control file. Once the data is loaded into the temporary table, they do the validation, update each record exactly as required, create another file, and then submit that file for lockbox processing.
In 11.10 they want to bypass these temporary tables and write to a different file.
Can this be handled by writing a pl/sql procedure ??
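It can, at least in outline. A hedged sketch using UTL_FILE (the directory object, file names, and the fix-up logic are placeholders; the real record layout and SO lookup would replace them):

```sql
CREATE OR REPLACE PROCEDURE fix_lockbox_file AS
  in_f  UTL_FILE.FILE_TYPE;
  out_f UTL_FILE.FILE_TYPE;
  line  VARCHAR2(4000);
  ref   VARCHAR2(15);
BEGIN
  -- LOCKBOX_DIR is an assumed Oracle DIRECTORY object
  in_f  := UTL_FILE.FOPEN('LOCKBOX_DIR', 'bank.dat',       'R');
  out_f := UTL_FILE.FOPEN('LOCKBOX_DIR', 'bank_fixed.dat', 'W');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(in_f, line);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;   -- end of file
    END;
    ref := SUBSTR(line, 38, 15);      -- merchant reference, chars 38..52
    IF ref LIKE 'REF%' THEN
      -- placeholder: look up the Sales Order and rebuild the record here
      line := SUBSTR(line, 1, 37) || RPAD(ref, 15) || SUBSTR(line, 53);
    END IF;
    UTL_FILE.PUT_LINE(out_f, line);
  END LOOP;
  UTL_FILE.FCLOSE(in_f);
  UTL_FILE.FCLOSE(out_f);
END;
```

Note that UTL_FILE cannot update a file in place, so the sketch writes a second file; UTL_FILE.FRENAME could then swap it over the original before lockbox picks it up.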
My findings:
Maybe I am wrong, but I think that if we first get the data into the ar_payments_interface_all table, then do the validations, and then complete the lockbox process, it may help.
Any suggestions from Oracle GURUS is highly appreciated.
Thanks & Regards,
Lakshmi Kalyan Vara Prasad. -
Global Temp Table or Permanent Temp Tables
I have been doing research for a few weeks, trying to confirm theories with bench tests concerning which is more performant: GTTs or permanent staging tables. I was curious what others felt on this topic.
I used FOR loops to test insert performance, and at times, with high numbers of rows, the permanent table seemed to be much faster than the GTT, contrary to the many white papers and case studies I have read saying that GTTs are much faster.
All I did was FOR loops iterating INSERT/VALUES up to 10 million records. And for 10 million records, the permanent table was over 500,000 milliseconds faster...
Does anyone have useful tips or info that can help me determine which will be best in certain cases? The tables will be used as staging for ETL batch processing into a data warehouse. Rows within my fact and detail tables can reach into the millions before being moved to archives. Thanks so much in advance.
-Tim

> Do you have any specific experiences you would like to share?
I use both - GTTs and plain normal tables. The problem dictates the tools. :-)
I do have an exception though that does not use GTTs and still support "restartability".
I need to to continuously roll up (aggregate) data. Raw data collected for an hour gets aggregated into an hourly partition. Hourly partitions gets rolled up into a daily partition. Several billion rows are processed like this monthly.
The eventual method I've implemented is a cross between materialised views and GTTs. Instead of dropping or truncating the source partition and running an insert to repopulate it with the latest aggregated data, I wrote an API that allows you to give it the name of the destination table, the name of the partition to "refresh", and a SQL (that does the aggregation - kind of like the select part of a MV).
It creates a brand new staging table using a CTAS, inspects the partitioned table, slaps the same indexes on the staging table, and then performs a partition exchange to replace the stale contents of the partition with those of the freshly built staging table.
No expensive delete. No truncate that results in an empty and query-useless partition for several minutes while the data is refreshed.
And any number of these partition refreshes can run in parallel.
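The refresh step described above boils down to something like this (table, index, and partition names are illustrative, and the real API also copies all the indexes and handles errors):

```sql
-- Build a fresh staging table with the latest aggregation (CTAS)
CREATE TABLE stage_hourly AS
SELECT hour_key, metric_id, SUM(raw_value) AS total_value
FROM   raw_data
WHERE  hour_key = TO_DATE('2024-01-01 10', 'YYYY-MM-DD HH24')
GROUP  BY hour_key, metric_id;

-- Slap on the same index the partitioned table's partitions carry
CREATE INDEX stage_hourly_ix ON stage_hourly (metric_id);

-- Swap the stale partition's contents for the staging table's, near-instantly
ALTER TABLE agg_hourly
  EXCHANGE PARTITION p_2024010110 WITH TABLE stage_hourly
  INCLUDING INDEXES WITHOUT VALIDATION;

DROP TABLE stage_hourly;
```

The exchange is a data-dictionary operation, which is why there is no expensive delete and no window where the partition sits empty.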
Why not use a GTT? Because they cannot be used in a partition exchange. And the cost of writing data into a GTT has to be weighed against the cost of using that data by writing it (or some of it) into permanent tables. Ideally one wants to plough through a data set once.
Oracle has a fairly rich feature set - and these can be employed in all kinds of ways to get the job done.