Problem with SQL to create a temp table to evaluate sort merge join

Can anyone help me with the SQL below? The aim is to evaluate the performance of a sort merge join (via the USE_MERGE hint). The SQL is placed in a file and run from the SQL*Plus prompt.
timing start
CREATE table testsql as
SELECT /*+ USE_MERGE(t1000 t8000)*/ t1000, t8000
FROM t1000, t8000
WHERE t1000.ONEPERCENT=1 AND t1000.unique1 = t8000.fk;
timing stop
drop table testsql;
Current error message:
SQL> start h:\exe1.sql
SELECT /*+ USE_MERGE(t1000 t8000)*/ t1000, t8000
ERROR at line 2:
ORA-00904: "T8000": invalid identifier
Elapsed: 00:00:00.01
drop table testsql
ERROR at line 1:
ORA-00942: table or view does not exist
Thanks for your help,
Regards, Chris.

SELECT /*+ USE_MERGE(t1000 t8000)*/ t1000, t8000
Are you sure you have columns named t1000 and t8000 in one of the tables used in the FROM clause? You are selecting the table names as if they were columns, which is what raises the ORA-00904.
What is the idea of this exercise? What is it that you are trying to evaluate?
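If the aim is just to time the join, one way to avoid the ORA-00904 is to select real columns from the two tables instead of the table names themselves. A minimal sketch, reusing only the columns already named in the WHERE clause (swap in whichever columns you actually want to keep):
timing start
CREATE TABLE testsql AS
SELECT /*+ USE_MERGE(t1000 t8000) */ t1000.unique1, t8000.fk
FROM t1000, t8000
WHERE t1000.onepercent = 1
AND t1000.unique1 = t8000.fk;
timing stop
drop table testsql;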

Similar Messages

  • PL/SQL to create a temp table that will be dropped after session ends

    Is it possible in PL/SQL to create a temp table that will be dropped after the session ends? Please provide example if possible. I can create a global temp table in PL/SQL but I am not sure how (if possible) to have it 'drop' once the session ends.
    DB: 10g
    OS: Wiindoze 2003 Server
    :-)

    As others have mentioned (but probably not clearly explained), Oracle treats temporary tables differently to SQL Server.
In SQL Server you create a temporary table and it gets dropped (automatically, I assume; I don't do SQL Server) after the session finishes. This obviously allows each session to "request" a temporary table to use, use it, and not have to worry about cleaning up the database after the session has finished.
    Oracle takes a different approach...
On the assumption that each session is likely to be creating a temporary table for the same purpose, with the same structure, Oracle lets you create a Global Temporary Table, a.k.a. GTT (which you've already come across). You only have to create this table once and you leave it in the database. Any code written against that table therefore doesn't have to be dynamic and can be verified and checked at compile time, just like code written for any other table. The difference between a GTT and a regular table is that any data you put into it can only be seen by your own session and will not interfere with the data of other sessions. When you either commit or end the session (depending on whether the GTT was created with "on commit delete rows" or "on commit preserve rows"), your session's data is automatically removed; the table is cleaned up that way, whilst the table itself remains.
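As a concrete illustration (the table and column names below are just placeholders), the GTT is created once, up front:
CREATE GLOBAL TEMPORARY TABLE my_gtt
( x NUMBER
, y VARCHAR2(100)
)
ON COMMIT DELETE ROWS;  -- or ON COMMIT PRESERVE ROWS to keep rows until the session ends
Every session then inserts into and selects from my_gtt as though it had its own private copy of the table.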
    Some people from SQL Server backgrounds try and create and drop tables dynamically in their PL/SQL code, but this leads to problems...
    SQL> ed
    Wrote file afiedt.buf
      1  begin
      2    execute immediate 'create table my_temp (x number)';
      3    insert into my_temp values (1);
      4    execute immediate 'drop table my_temp';
      5* end;
    SQL> /
      insert into my_temp values (1);
    ERROR at line 3:
    ORA-06550: line 3, column 15:
    PL/SQL: ORA-00942: table or view does not exist
    ORA-06550: line 3, column 3:
PL/SQL: SQL Statement ignored
i.e. the code will not compile for direct DML statements trying to use that table.
    They then try and get around this issue by making their DML statements dynamic too...
    SQL> ed
    Wrote file afiedt.buf
      1  create or replace procedure my_proc is
      2  begin
      3    execute immediate 'create table my_temp (x number)';
      4    execute immediate 'insert into my_temp values (''A'')';
      5    execute immediate 'drop table my_temp';
      6* end;
    SQL> /
Procedure created.
... which looks great and it compiles ok... but... when they try and run it...
    SQL> exec my_proc;
    BEGIN my_proc; END;
    ERROR at line 1:
    ORA-01722: invalid number
    ORA-06512: at "SCOTT.MY_PROC", line 4
ORA-06512: at line 1
... oops the code has a bug in it. Our DML statement was invalid.
    This is really something that would have been caught at compile time, if the statement had been a direct DML statement rather than dynamic. And thus we see the problem with people trying to write all their code as dynamic SQL... it's more likely to contain bugs that won't be detected at compile time and only come to light at run time... sometimes only under certain conditions and sometimes once it's got into a production environment. Bad Idea!!!! ;)
    Far better to never create tables (or most other database objects) at run time. Just create them once as part of the database design/implementation and use them as required, allowing you to catch the most common coding errors up front before they get anywhere near a test environment or worse still, a production environment.

  • How to create a temp table in the memory, not in disk?

In SQL Server, you can create a temp table in memory instead of on disk,
then you can do inserts, deletes, updates and selects on it.
After finishing, you just release it.
In Oracle,
I am wondering how to create a temp table in memory, not on disk?
thanks,

    Thanks for rectifying me Howard.
I just read your full article on this too and it's very well explained here:
http://www.dizwell.com/prod/node/357
A few lines from your article:
    It is true, of course, that since Version 8.0 Oracle has provided the ability to create a Keep Pool in the Buffer Cache, which certainly sounds like it can do the job... especially since that word 'keep' is used again. But a keep pool is merely a segregated part of the buffer cache, into which you direct blocks from particular tables (by creating them, or altering them, with the BUFFER POOL KEEP clause). So you can tuck the blocks from such tables out of the way, into their own part of the buffer cache... but that is not the same thing as guaranteeing they'll stay there. If you over-populate the Keep Pool, then its LRU mechanism will kick in and age its contents out just as efficiently as an unsegregated buffer cache would.
    Functionally, therefore, there can be no guarantees. The best you can do is create a sufficiently large Keep Pool, and then choose the tables that will use it with care such that they don’t swamp themselves, and start causing each other to age out back to disk.
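For completeness, nominating a table for the Keep Pool is just a storage clause; the table below is hypothetical, and this assumes a keep pool has actually been sized (DB_KEEP_CACHE_SIZE on recent releases, BUFFER_POOL_KEEP on older ones):
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);
-- or at creation time:
CREATE TABLE lookup_codes
( code  VARCHAR2(10)
, descr VARCHAR2(100)
)
STORAGE (BUFFER_POOL KEEP);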
    Thanks and Regards

  • HOW TO create a temp table or a record group at run time

I have a tabular form and I don't want to allow the user to enter duplicate
records while in insert mode, when the inserted records are new and do not yet exist in the database.
So I want to know how to create a temp table or a record group at run time to hold the inserted values and check whether they already exist in previous rows or not.
Please help!

    As was stated above, there are better ways to do it. But if you still wish to create a temporary block to hold the inserted records, then you can do this:
    Create a non-database block with items that have the same data types as the database table. When the user creates a new record, insert the record in the non-database block. Then, before the commit, compare the records in the non-database block with those in the database block, one at a time, item by item. If the record is not a duplicate, copy the record to the database block. Commit the records, and delete the records in the non-database block.

  • When analytical queries create implicit temp tables?

    Hi,
We have a DSS project on Oracle 9i Release 2 running on HP-UX. The project includes lots of SQL analytic queries which handle huge data sets and do lots of hash/merge joins and sorts. When I look at the execution plans of the queries, I see that some of them create implicit temp tables and execute against those, but some do not (creating a window buffer instead). I wonder how the optimizer decides to create an implicit temp table when executing analytic queries. Also, is there a SQL hint to force it to create an implicit temp table?
    Regards

  • I have the problem with the eFax Create Account button being grayed out for my newly purchased 6525.

    I have the problem with the eFax Create Account button being grayed out for my newly purchased 6525.
    Can you help?

    Hi Marv13, disregard my last post. The 6520 does have eFax capabilities and the support document I provided a link to needs updating. If you contact HP Cloud Services phone support they will be able to resolve this issue for you. The number and available times are posted below:
    HP Cloud Services
    1-855-785-2777
    Hours of Operation
    Monday-Friday 8am-11pm ET
    Saturday 10am-6:30pm ET
    Best.

  • Problem with SQL connection and a Collection

    hi all,
    I have two problems with sql...
1. How can I assign the values of a ResultSet to a collection?
2. How can I close the SQL connection? When I close the statement and the connection, an error shows up in the ResultSet.
    thanks!

    Hello Pablo,
Retrieving results into a collection:
1)   Use the column getters and add one entry per row to the collection, for example:
          Collection<Object[]> c_obj = new ArrayList<Object[]>();
          while (rs.next()) {
              // one Object[] per row: { Project_ID, Project_Name }
              c_obj.add(new Object[] { rs.getInt("Project_ID"), rs.getString("Project_Name") });
          }
Closing the ResultSet
2)               The close() method of ResultSet closes the ResultSet object, like below:
                ResultSet rs = stmt.executeQuery("SELECT a, b FROM TABLE2");
                rs.close(); //Closes the result set

  • Problems with Click a row in the Table

    Dear All,
I am new to Java GUI. I have some problems with clicking a row in a table.
My table allows the user to delete some rows from the table. When the user clicks a row or some rows in my table, the delete button should be enabled. It works most of the time. But sometimes, even with the row highlighted, the button is still not enabled.
Do you have any idea what can cause this problem?
What I did is:
1. Add a mouse listener to the table:
_userTable.addMouseListener(new java.awt.event.MouseAdapter() {
    public void mouseClicked(MouseEvent e) {
        userList_mouseClicked(e);
    }
});
2. Then, if the selected row is > 0, I will enable my delete button.
    Can anybody help?
    Thanks!

    Dear All,
I am getting problems with single selection in a table. Although I have set the selection model to single selection, if I select a row in the table and then press Ctrl/Shift, multiple rows are selected.
    The code for set single selection is :
    ListSelectionModel seleModel = myTable.getSelectionModel();
    seleModel.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
    Any idea about this? Is this a bug in JAVA?
    Thanks!

Trial version problem with the text "Create with the trial version"

Help me.
Trial version problem: the trial version writes "Create with the trial version".
I want to delete the "Create with the trial version" text.

Publication on businCata, I am sorry I do not understand your inquiry.  Can you please restate your question?  What Adobe software or service is your inquiry in relation to?

  • Create local temp table

I need to create a local temp table and am looking for ColdFusion information. The cffile write action can only write text files; pictures are more a matter of creating files.
I would like to know whether ColdFusion supports creating local temp tables on session start.
If yes, should I be able to get the client temp directory and access the data using that temp directory, without using a data source from the ColdFusion server?
Your help and information is greatly appreciated,
    Regards,
    Iccsi,

    Thanks for the information and help,
I use a jQuery combo box to let the user type a selection from a drop-down box, but the table has more than 10,000 records, which causes a performance issue. I would like to load the data to the client machine so the user can access it locally, to resolve the performance issue with the jQuery combo box.
    Thanks again for helping,
    Regards,
    Iccsi,

  • How to create the temp table in Sybase with values from Excel ?

    Dear Sir/Madam,
I know this is not related to Oracle DB; however, I need help from you people who have worked on Sybase. Please help me.
I have an Excel sheet which contains EmpId and EmpName values. I just want to read these two values one by one from the Excel sheet and load them into one temp table in Sybase. Is it possible through sqlldr?
    here is the sample code for your reference:
    begin tran
    create table tempdb..empIdName (emp_id int identity,emp_name varchar(50))
    go
Awaiting your valuable reply
    thanks
    pannar

There is a hint provided by a Sybase user:
    If it's Sybase IQ, you can export the excel file to a CSV file, then use a LOAD TABLE statement to get the data into your temporary table.
    For ASE, BCP would provide similar functionality.
I am using the ISQLW tool to work with the Sybase DB. What query should I use to load the Excel data into a temp table in Sybase? Does anyone know? Please help me.

  • Problem with SQL,udfs & procedures

I have a couple of problems with my database. Please suggest solutions.
We are basically a web product with a quite large database.
1. I am using functions, both user-defined and built-in, in SQL statements. I want to optimize the queries; how do I do that? Why does the use of a function in a SQL statement suppress the use of indexes internally? How can I force the index to be used even though a function is applied?
2. Whenever the client makes a request to the database server with a SQL query, what steps can we take at the client side to enhance the performance of the query (i.e. the data request)? How do we optimize the usage of CPU at the client site?
3. What is the increase in performance from having separate tablespaces for user data, system data and indexes?
4. Why do the procedures become invalid after some time? Once a procedure has been invalidated, it no longer executes at the front end. However, even though the status of the procedure is invalid, the same procedure still executes at the back end.
Can anybody help me?
Request for a reply ASAP.
Regards
Koshal

1. In Oracle 8i, one can create function-based indexes, where instead of indexing a column, one can index UPPER() of that column (see the sketch after this list).
2. Optimizing client performance is trickier. One can tune the queries being submitted by the client, but if returning the first row quickly is what matters -- which is how response time is generally perceived -- set the OPTIMIZER_MODE parameter in init.ora to FIRST_ROWS.
    3. There is minimal benefit to having data, index, rollback, and temp tablespaces all separated unless all the datafiles for each tablespace reside on different disks (data on disk 1, index on disk 2, etc). It's recommended regardless, but unless the files are on separate volumes, there won't be a great performance benefit.
    4. A procedure is invalidated whenever DDL is issued against any object that that procedure depends upon. For example, if you add a column to a table, any procedures which reference that table will be invalidated. Any procedures which reference views which reference that table will be invalidated, because the view will be invalidated. It's best to run a compile script which looks for and attempts to recompile any invalid objects on a daily basis.
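A rough sketch of points 1 and 2 (the emp table and ename column are hypothetical, just to show the shape):
-- 1. Function-based index, so a predicate that applies UPPER() can still use an index:
CREATE INDEX emp_upper_name_idx ON emp (UPPER(ename));
SELECT * FROM emp WHERE UPPER(ename) = 'SMITH';
-- 2. Favour returning the first rows quickly; can also be set per session instead of in init.ora:
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS;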
    Adam

  • Problem with SQL Insert

    Hi
    I am using Flex to insert data through remoteobject and using
    SAVE method generated from CFC Wizard.
    My VO contains the following variables where id has a default
    value of 0 :
    public var id:Number = 0 ;
    public var date:String = "";
    public var accountno:String = "";
    public var debit:Number = 0;
    public var credit:Number = 0;
    id is set as the primary key in my database.
I have previously used MySQL, which automatically generates a
new id if I either omit the id value in my insert statement or
use 0 as the default value.
    However, now I am using MS SQL Server and the insert
    generated this error:
    Unable to invoke CFC - Error Executing Database Query.
    Detail:
    [Macromedia][SQLServer JDBC Driver][SQLServer]Cannot insert
    explicit value for identity column in table 'Transactions' when
    IDENTITY_INSERT is set to OFF.
Strange thing is that I don't see this problem with another
    sample application. I compared with that application (it has the
    same VO with default value of 0 for id) and everything looks
    similar so I can't understand why I have this error in my
    application.
    This is SQL used to create the table:
    USE [Transactions]
    SET ANSI_NULLS ON
    SET QUOTED_IDENTIFIER ON
    SET ANSI_PADDING ON
CREATE TABLE [dbo].[Transactions](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [date] [datetime] NULL,
    [accountno] [varchar](100) NULL,
    [debit] [int] NULL,
    [credit] [int] NULL,
    CONSTRAINT [PK_Transactions] PRIMARY KEY CLUSTERED
    (
        [id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
    Has anyone encountered this problem before with insert
    statements dealing with a unique id (with MS SQL Server)?
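For what it's worth, that error usually comes down to the generated INSERT sending an explicit value for the IDENTITY column. A rough T-SQL sketch of the two usual ways out (the column values here are made up):
-- Option 1: leave id out of the INSERT and let the IDENTITY column generate it
INSERT INTO [dbo].[Transactions] ([date], [accountno], [debit], [credit])
VALUES ('2009-01-15', 'ACC-001', 100, 0);
-- Option 2: only if an explicit id really must be supplied
SET IDENTITY_INSERT [dbo].[Transactions] ON;
INSERT INTO [dbo].[Transactions] ([id], [date], [accountno], [debit], [credit])
VALUES (42, '2009-01-15', 'ACC-001', 100, 0);
SET IDENTITY_INSERT [dbo].[Transactions] OFF;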

    Change "addRecords =
    connection.prepareStatement("
    to "addRecords =
    connection.preparedStatement("
    -BIGD

  • Error while creating Global temp table

    Hi,
I am very new to PL/SQL, so please excuse my question. I have the query below. I have to compare counts between the source table and various target tables. I am creating a global temp table to store the counts. I am getting the error below for the following query:
    Thanks for the help,
    Petronas
    ----Query----
    set serveroutput on
    Declare
    nm1 varchar2(200);
    nm2 varchar2(200);
    cnt1 number;
    cnt2 number;
    diff number;
    totdiff number;
    Begin
    nm1 := null;
    nm2:= null;
    cnt1:= 0;
    cnt2 := 0;
    diff := 0;
    totdiff := 0;
    create GLOBAL TEMPORARY TABLE diff ( name1 varchar(200), name2 varchar2(200), diff number);
    select count(*) into cnt1
    from users_staging;
    select count(*) into cnt2
    from PROD.users;
    nm1 := 'users_staging';
    nm2 := 'PROD.users';
    diff := cnt1 - cnt2;
    insert into diff values (nm1,nm2,diff);
    select count(*) into totdiff
    from diff
    where diff> 0 ;
    dbms_output.enable(10000);
    dbms_output.put_line('# of tables where difference is > 0 ' ||totdiff);
    end;
    Encountered the symbol "CREATE" when expecting one of the following:
    begin case declare end exception exit for goto if loop mod
    null pragma raise return select update while with
    <an identifier> <a double-quoted delimited-identifier>
    <a bind variable> << close current delete fetch lock insert
    open rollback savepoint set sql execute commit forall merge
    pipe

    Hi,
    "CREATE GLOBAL TEPORARY TABLE ..." is not a PL/SQL command; it is a SQL command only.
    Create the table, using that statement, before running the PL/SQL block.
    You can issue SQL statements from within PL/SQL using the EXECUTE IMMEDIATE command, but this is rarely a good idea.
    I assume the PL/SQL code is meant to create the table and then populate it.
You should split those into two separate pieces of code. You'll only want to create the table once, no matter how many times you use it; I assume you'll want to populate it the same way many times. Remember, the "TEMPORARY" in "GLOBAL TEMPORARY TABLE" refers to the data, not the table. When you end a transaction (or a session, depending on whether you want "ON COMMIT DELETE ROWS" or "ON COMMIT PRESERVE ROWS"), the data disappears, but the now-empty table stays, ready to be populated for the next transaction (or session).
    Edited by: Frank Kulash on Aug 4, 2010 2:25 PM
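To make that concrete, here is a sketch of how the original code might be split, keeping the poster's table and column names and assuming ON COMMIT PRESERVE ROWS is what's wanted:
-- Run once, as plain SQL (e.g. from SQL*Plus), not inside PL/SQL:
CREATE GLOBAL TEMPORARY TABLE diff
( name1 VARCHAR2(200)
, name2 VARCHAR2(200)
, diff  NUMBER
)
ON COMMIT PRESERVE ROWS;
-- Then the block only populates and reads it:
DECLARE
    cnt1    NUMBER;
    cnt2    NUMBER;
    totdiff NUMBER;
BEGIN
    SELECT COUNT(*) INTO cnt1 FROM users_staging;
    SELECT COUNT(*) INTO cnt2 FROM PROD.users;
    INSERT INTO diff VALUES ('users_staging', 'PROD.users', cnt1 - cnt2);
    SELECT COUNT(*) INTO totdiff FROM diff WHERE diff > 0;
    DBMS_OUTPUT.PUT_LINE('# of tables where difference is > 0: ' || totdiff);
END;
/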

  • Problem with SQL*Loader loading long description with carriage return

    I'm trying to load new items into mtl_system_items_interface via a concurrent
    program running the SQL*Loader. The load is erroring due to not finding a
delimiter - I'm guessing it's having problems with the long_description.
    Here's my ctl file:
    LOAD
    INFILE 'create_prober_items.csv'
    INTO TABLE MTL_SYSTEM_ITEMS_INTERFACE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (PROCESS_FLAG "TRIM(:PROCESS_FLAG)",
    SET_PROCESS_ID "TRIM(:SET_PROCESS_ID)",
    TRANSACTION_TYPE "TRIM(:TRANSACTION_TYPE)",
    ORGANIZATION_ID "TRIM(:ORGANIZATION_ID)",
    TEMPLATE_ID "TRIM(:TEMPLATE_ID)",
    SEGMENT1 "TRIM(:SEGMENT1)",
    SEGMENT2 "TRIM(:SEGMENT2)",
    DESCRIPTION "TRIM(:DESCRIPTION)",
    LONG_DESCRIPTION "TRIM(:LONG_DESCRIPTION)")
    Here's a sample record from the csv file:
    1,1,CREATE,0,546,03,B00-100289,PROBEHEAD PH100 COMPLETE/ VACUUM/COAX ,"- Linear
    X axis, Y,Z pivots
    - Movement range: X: 8mm, Y: 6mm, Z: 25mm
    - Probe tip pressure adjustable contact
    - Vacuum adapter
    - With shielded arm
    - Incl. separate miniature female HF plug
    The long_description has to appear as:
    - something
    - something
    It can't appear as:
    -something-something
    Here's the errors:
    Record 1: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column
    LONG_DESCRIPTION.
    Logical record ended - second enclosure character not present
    Record 2: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column
    ORGANIZATION_ID.
    Column not found before end of logical record (use TRAILING NULLCOLS)
    I've asked for help on the Metalink forum and was advised to add trailing nullcols to the ctl so the ctl line now looks like:
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
    I don't think this was right because now I'm getting:
    Record 1: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column LONG_DESCRIPTION.
    Logical record ended - second enclosure character not present
    Thanks for any help that may be offered.
    -Tracy

    LOAD
    INFILE 'create_prober_items.csv'
    CONTINUEIF LAST <> '"'
    INTO TABLE MTL_SYSTEM_ITEMS_INTERFACE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
    (PROCESS_FLAG "TRIM(:PROCESS_FLAG)",
    SET_PROCESS_ID "TRIM(:SET_PROCESS_ID)",
    TRANSACTION_TYPE "TRIM(:TRANSACTION_TYPE)",
    ORGANIZATION_ID "TRIM(:ORGANIZATION_ID)",
    TEMPLATE_ID "TRIM(:TEMPLATE_ID)",
    SEGMENT1 "TRIM(:SEGMENT1)",
    SEGMENT2 "TRIM(:SEGMENT2)",
    DESCRIPTION "TRIM(:DESCRIPTION)",
    LONG_DESCRIPTION "REPLACE (TRIM(:LONG_DESCRIPTION), '-', CHR(10) || '-')")
