Tuning temp usage - 21GB Table

Hello,
11gR2 OEL, 2 CPU machine (test box).
select id, avg(col1), avg(col2), avg(col3), avg(col4), avg(col5)
from t1
group by id;

id - VARCHAR2(15) NOT NULL - Not Indexed
Avg row length - 100 bytes
Table t1 is 21 GB in size. Degree 4.
The query fails with ORA-01652 even after consuming 35 GB of temp.
The problem at hand is to reduce temp usage, as the DBAs are not OK with such huge temp consumption while other jobs are running.
We have come up with 2 options:
1. Create an index on the id column - this looks like it works but takes longer. Testing is ongoing.
2. Create hash partitions (10 partitions) hashed by the id column, then loop through each partition and compute the averages. A rough sketch of this is below.
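For reference, a rough sketch of Option 2 (the copy's name and the CTAS approach are illustrative assumptions; hash partition names are system-generated like SYS_Pnnn unless declared explicitly):

CREATE TABLE t1_hash
PARTITION BY HASH (id) PARTITIONS 10
AS SELECT * FROM t1;

-- then aggregate one partition at a time, repeating for each of the 10 partitions
-- (look up the real partition names in USER_TAB_PARTITIONS):
SELECT id, AVG(col1), AVG(col2), AVG(col3), AVG(col4), AVG(col5)
FROM t1_hash PARTITION (sys_p1)
GROUP BY id;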
Option 1 is being tested currently.
The question I have is: is Option 2 reliable? Is it guaranteed that a given value of id will always go to partition x?
Meaning, is it possible that the same id value is stored in more than one hash partition?
Please let me know.
Rgds,
Gokul

The table has about 270 million rows with 3 - 4 million unique ids.
Will hash partitioning still help in this case?

With 4 million unique IDs, you'd have to create a large number of HASH partitions to "reasonably" distribute them! You could also use RANGE partitioning if you can predict the ID values.
But don't go and partition/re-partition a table just to suit one type of query. Partitioning can well impact all sorts of queries (and DML) against the table. So you need to review the pattern of activity, the performance implications for each type of access, and the maintenance overheads before deciding on a partitioning scheme.
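As to the reliability question: the hash function used for hash partitioning is deterministic, so a given id value always maps to exactly one partition (for a fixed partition count). On a test copy you can verify this - each partition is a distinct data object, so the following should return no rows (a sketch, assuming the hypothetical t1_hash above):

SELECT id
FROM   (SELECT DISTINCT id, DBMS_ROWID.ROWID_OBJECT(ROWID) AS data_obj
        FROM   t1_hash)
GROUP BY id
HAVING COUNT(*) > 1;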
Hemant K Chitale

Similar Messages

  • How can we find the most usage and lowest usage of table in Sql Server by T-SQL

    How can we find the most-used and least-used tables in SQL Server via T-SQL?
    The table has timestamp columns:
    StartedOn datetime
    EndedOn datetime

    The query below has been used, but the textdata column does not include the name of the table ServiceLog.
    SELECT databasename,
           duration
    FROM fn_trace_gettable('F:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\log_148.trc',
                           default)
    WHERE databasename = 'ZTCFUTURE'
      AND textdata IS NOT NULL
      --AND textdata LIKE 'SERVICE%'
    ORDER BY cpu DESC;
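    A more direct way to rank table usage is the index-usage DMV (a sketch; the counters reset at instance restart, and OBJECT_NAME resolves in the current database):

    USE ZTCFUTURE;
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           SUM(s.user_seeks + s.user_scans + s.user_lookups) AS total_reads,
           SUM(s.user_updates) AS total_writes
    FROM sys.dm_db_index_usage_stats AS s
    WHERE s.database_id = DB_ID('ZTCFUTURE')
    GROUP BY s.object_id
    ORDER BY total_reads DESC;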

  • OBIEE Usage Tracking Table Is Not Populated

    Hello everybody,
    The OBIEE usage tracking table (S_NQ_ACCT) has not been populated correctly for the last few weeks. I have no experience configuring / managing usage tracking, so I do not know where to start to solve this problem. First of all, which log file should I check to find the problem? And which parameters should I look at? Can you help me please?
    Regards,
    Dilek

    Check that the connection pool information in your RPD exactly matches your entries in the MBeans, which then get written to NQSConfig.ini.
    Check that your usage tracking physical model in the RPD physical layer isn't an old (11.1.1.5) setup running on an 11.1.1.7 environment, where the column definitions don't match.
    Several log files are relevant; NQServer.log and NQQuery.log to begin with.

  • OBIEE 11G - Usage Tracking - Table S_NQ_DB_ACCT

    Hi
    I successfully set up usage tracking in Obiee 11 and the table S_NQ_ACCT gets a lot of new records. Basically it's working.
    In the nqserver.log I get some errors concerning a new table S_NQ_DB_ACCT:
    [nQSError: 17001] Oracle Error code: 942, message: ORA-00942: table or view does not exist
    at OCI call OCIStmtExecute: INSERT INTO S_NQ_DB_ACCT (ID,LOGICAL_QUERY_ID,QUERY_TEXT,QUERY_BLOB,TIME_SEC,ROW_COUNT,START_TS,START_DT,START_HOUR_MIN,END_TS,END_DT,END_HOUR_MIN) VALUES (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12).
    This is an additional Usage Tracking Table which didn't exist in OBIEE 10G.
    I configured the table in the same place in the repository/connection pool as S_NQ_ACCT. Writing to this table should be possible.
    Does anybody have an idea what could be wrong? Is this table filled by a different user than the one that fills S_NQ_ACCT?
    Thank you

    Hi
    Thank you.
    I use the table within the default RCU-created schema. What I don't understand is why the table S_NQ_ACCT gets a lot of records (because I set the log level) but S_NQ_DB_ACCT stays empty, with the error in nqserver.log. I mean, if one table works, the other should be working too? I also checked the permissions, which are the same for both tables.
    [2012-06-04T15:14:39.000+00:00] [OracleBIServerComponent] [ERROR:1] [] [] [ecid: 7d4f9e9d968c0bfe:-39c197c9:13773b9d89f:-8000-000000000005c768] [tid: 18e0] [nQSError: 17011] SQL statement execution failed. [[
    [nQSError: 17001] Oracle Error code: 942, message: ORA-00942: table or view does not exist
    at OCI call OCIStmtExecute: INSERT INTO S_NQ_DB_ACCT (ID,LOGICAL_QUERY_ID,QUERY_TEXT,QUERY_BLOB,TIME_SEC,ROW_COUNT,START_TS,START_DT,START_HOUR_MIN,END_TS,END_DT,END_HOUR_MIN) VALUES (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12).
    ]]
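    Since the failing INSERT names the columns, a rough sketch of the missing table would be the following (the datatypes are assumptions inferred from the S_NQ_ACCT conventions - verify against the 11g RCU scripts before creating anything):

    CREATE TABLE s_nq_db_acct (
      id                VARCHAR2(50 BYTE),
      logical_query_id  VARCHAR2(50 BYTE),
      query_text        VARCHAR2(1024 BYTE),
      query_blob        CLOB,
      time_sec          NUMBER(10,0),
      row_count         NUMBER(10,0),
      start_ts          TIMESTAMP,
      start_dt          TIMESTAMP,
      start_hour_min    CHAR(5 BYTE),
      end_ts            TIMESTAMP,
      end_dt            TIMESTAMP,
      end_hour_min      CHAR(5 BYTE)
    );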

  • [59053] Usage Tracking stopped because the specified Usage Tracking table

    Immediately after server startup, I am getting this entry in NQServer.log:
    [59053] Usage Tracking stopped because the specified Usage Tracking table contained the wrong number of columns or a column with an inappropriate data type.
    Any idea how I can find out which table needs updating?
    Thanks
    -SJ
    Here are the details of the tables I have in the usage tracking schema:
    CREATE TABLE "RMI"."S_NQ_ACCT"
    (     "USER_NAME" VARCHAR2(128 BYTE),
         "REPOSITORY_NAME" VARCHAR2(128 BYTE),
         "SUBJECT_AREA_NAME" VARCHAR2(128 BYTE),
         "NODE_ID" VARCHAR2(15 BYTE),
         "START_TS" TIMESTAMP (6),
         "START_DT" TIMESTAMP (6),
         "START_HOUR_MIN" CHAR(5 BYTE),
         "END_TS" TIMESTAMP (6),
         "END_DT" TIMESTAMP (6),
         "END_HOUR_MIN" CHAR(5 BYTE),
         "QUERY_TEXT" VARCHAR2(1024 BYTE),
         "SUCCESS_FLG" NUMBER(10,0),
         "ROW_COUNT" NUMBER(10,0),
         "TOTAL_TIME_SEC" NUMBER(10,0),
         "COMPILE_TIME_SEC" NUMBER(10,0),
         "NUM_DB_QUERY" NUMBER(10,0),
         "CUM_DB_TIME_SEC" NUMBER(10,0),
         "CUM_NUM_DB_ROW" NUMBER(10,0),
         "CACHE_IND_FLG" CHAR(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
         "QUERY_SRC_CD" VARCHAR2(30 BYTE) DEFAULT '',
         "SAW_SRC_PATH" VARCHAR2(250 BYTE) DEFAULT '',
         "SAW_DASHBOARD" VARCHAR2(150 BYTE) DEFAULT '',
         "SAW_DASHBOARD_PG" VARCHAR2(150 BYTE) DEFAULT '',
         "PRESENTATION_NAME" VARCHAR2(128 BYTE) DEFAULT '',
         "ERROR_TEXT" VARCHAR2(250 BYTE) DEFAULT '',
         "RUNAS_USER_NAME" VARCHAR2(128 BYTE) DEFAULT '',
         "NUM_CACHE_INSERTED" NUMBER(10,0) DEFAULT NULL,
         "NUM_CACHE_HITS" NUMBER(10,0) DEFAULT NULL
    ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "RMI_DATA" ;
    CREATE INDEX "RMI"."S_NQ_ACCT_M1" ON "RMI"."S_NQ_ACCT" ("START_DT", "START_HOUR_MIN", "USER_NAME")
    PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "RMI_DATA" ;
    CREATE INDEX "RMI"."S_NQ_ACCT_M2" ON "RMI"."S_NQ_ACCT" ("START_HOUR_MIN", "USER_NAME")
    PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "RMI_DATA" ;
    CREATE INDEX "RMI"."S_NQ_ACCT_M3" ON "RMI"."S_NQ_ACCT" ("USER_NAME")
    PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "RMI_DATA" ;
    CREATE OR REPLACE FORCE VIEW "RMI"."NQ_LOGIN_GROUP" ("LOGIN", "RESP") AS
    Select DISTINCT USER_NAME as "LOGIN", RUNAS_USER_NAME as RESP From S_NQ_ACCT;

    Compare the S_NQ_ACCT table you created with the table structure defined in the rpd under the physical layer database object "Oracle Analytics Usage".

  • UNDO and TEMP usage by a schema

    Hi,
    How can I find out the UNDO and TEMP space usage by a schema? Do we have any tables for this?
    If I want to get the UNDO, TEMP space, or any other resource used by a schema over a 24-hour period, how can I get this info?
    Can you please suggest a procedure for identifying the applications with high resource consumption?
    I am using Oracle 9.2.0.4 in a SUN cluster environment.
    Thanks very much in advance. I appreciate your help.
    Thanks

    Hi,
    About UNDO, you can check the status of the undo segments currently used by active transactions:

    select s.username, t.xidusn, t.ubafil, t.ubablk, t.used_ublk
    from   v$session s, v$transaction t
    where  s.saddr = t.ses_addr;

    About other user sessions' information, you can try this:

    select osuser,
           machine,
           username,
           segment_name,
           sa.sql_text
    from   v$session s,
           v$transaction t,
           dba_rollback_segs r,
           v$sqlarea sa
    where  s.taddr = t.addr
    and    t.xidusn = r.segment_id(+)
    and    s.sql_address = sa.address(+)
    order by osuser;

    About TEMP - to monitor temp space consumed by session:

    SELECT a.sid, b.blocks
    FROM   v$session a, v$sort_usage b
    WHERE  a.saddr = b.session_addr;

    To monitor temp space consumed by SQL:

    SELECT a.sql_text, b.blocks
    FROM   v$sql a, v$sort_usage b
    WHERE  a.sql_id = b.sql_id;

    Cheers

  • Query to find the temp usage

    oracle 10.2.0.4 on win2008
    I got two queries from Google to find the free space in the temp tablespace, but they show different results.
    Please let me know which one shows the correct space usage in the temp tablespace.
    query 1 :
    SELECT A.tablespace_name tablespace, D.mb_total,
           SUM(A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
           D.mb_total - SUM(A.used_blocks * D.block_size) / 1024 / 1024 mb_free
    FROM   v$sort_segment A,
           ( SELECT B.name, C.block_size, SUM(C.bytes) / 1024 / 1024 mb_total
             FROM   v$tablespace B, v$tempfile C
             WHERE  B.ts# = C.ts#
             GROUP BY B.name, C.block_size
           ) D
    WHERE  A.tablespace_name = D.name
    GROUP BY A.tablespace_name, D.mb_total;
    query 2 :
    select tablespace_name,sum(bytes_used/1024/1024),sum(bytes_free/1024/1024) from v$temp_space_header group by tablespace_name;

    %bala% wrote:
    ERROR:
    ORA-04043: object DBA_TEMP_FREE_SPACE does not exist

    This view is only available from 11g. Yes, 11g is the only supported version now.
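    For reference, on 11g that view answers the question directly (a minimal example):

    select tablespace_name,
           tablespace_size / 1024 / 1024 as size_mb,
           allocated_space / 1024 / 1024 as allocated_mb,
           free_space / 1024 / 1024 as free_mb
    from   dba_temp_free_space;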

  • Estimate temp usage

    Hello all,
    we have a quite complex query (based on several views) that works fine in a test environment but fails in the production environment because the temp segment is too small (ORA-01652).
    Obviously this problem is easily fixed by increasing the size of the tablespace. We'd like to find an "optimal" size for this - big enough, but not wasting space. So I would like to estimate the (current) temp space usage of the query in some way.
    Is there any way of finding out the correct size without incrementally increasing the tablespace (add 100 MB, try again, if it fails add another 100 MB, and so on)?
    Would the "BYTES" column of an explain plan indicate the "correct" size for the temp segment?
    Kind regards
    Thomas

    Thomas, first you should run an explain plan and see why the temp segment space is used: order by, group by, hash area, etc....
    It is very possible for one statement to require multiple temp areas concurrently.
    You may want to compare the test and production plans to be sure the query does not need tuning, that an index added to test exists in production, etc....
    Then based on the number of rows you expect to be passed to the temp area you can estimate how much space is required to support the query.
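    You can also measure rather than estimate: while the statement runs in test, watch its live temp consumption and note the high-water mark (a sketch using v$sort_usage; multiply blocks by the temp tablespace block size to get bytes):

    SELECT s.sid, u.tablespace, u.segtype, u.blocks
    FROM   v$session s, v$sort_usage u
    WHERE  s.saddr = u.session_addr;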
    HTH -- Mark D Powell --

  • Performance tuning -index on big table on date column

    Hi,
    I am working on Oracle 10g with Oracle Apps 11i on Sun.
    we have a large non-partition table "GL_JE_HEADERS" with 7 million rows.
    Now we want to run the query for selecting rows using between clause on date column.
    I have created Btree index on the this table.
    Now how can I tune the query? Which hint should I use for the query?
    Thanks,
    rane

    Hi Rane,
    Now how can I tune the query?

    Indexes on DATE datatypes are tricky, as the SQL queries must match the index!
    For example, an index on ship_date would NOT match these queries:
    WHERE trunc(ship_date) > trunc(sysdate-7);
    WHERE to_char(ship_date,'YYYY-MM-DD') = '2004-01-04';
    You may need to create a function-based index, so that the DATE reference in your SQL matches the index:
    http://www.dba-oracle.com/oracle_tips_index_scan_fbi_sql.htm
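    For example, a function-based index matching the TRUNC predicate above might look like this (the orders table is an illustrative assumption):

    CREATE INDEX orders_ship_date_fbi ON orders (TRUNC(ship_date));

    -- this predicate can now use the index:
    SELECT * FROM orders WHERE TRUNC(ship_date) > TRUNC(SYSDATE - 7);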
    To start testing, go into SQL*Plus and "set autotrace on" and run the queries.
    Then confirm that your index is being used.
    Which hint should I use for the query?

    Hints are a last resort!
    Your query is fully tuned when it fetches the rows you need with a minimum of block touches (logical reads, consistent gets).
    See here for details:
    http://www.dba-oracle.com/art_sql_tune.htm
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

  • Performance Tuning Query on Large Tables

    Hi All,
    I am new to the forums and have a very specific use case which requires performance tuning, but there are some limitations on what changes I am actually able to make to the underlying data. Essentially I have two tables which contain what should be identical data, but for reasons of a less than optimal operational nature, the datasets differ in a number of ways.
    Essentially I am querying call detail record data. Table 1 (referred to in my test code as TIME_TEST) is what I want to consider the master data, or the "ultimate truth" if you will. Table 1 contains the CALLED_NUMBER, which is always in a consistent format. It also contains the CALLED_DATE_TIME and DURATION (in seconds).
    Table 2 (TIME_TEST_COMPARE) is a reconciliation table taken from a different source, but there are no consistent unique identifiers or PK-FK relations. This table contains a wide array of differing CALLED_NUMBER formats, hugely different to those in the master table. There is also scope for the timestamp to be out by up to 30 seconds - crazy, I know, but that's just the way it is and I have no control over the source of this data. Finally, the duration (in seconds) can be out by up to 5 seconds +/-.
    I want to create a join returning all of the master data, matching the master table to the reconciliation table on CALLED_NUMBER / CALLED_DATE_TIME / DURATION. I have written a query which works from a logic perspective, but it performs very badly (master table = 200,000 records, rec table = 6,000,000+ records). I am able to add partitions (currently the tables are partitioned by month of CALLED_DATE_TIME) and can also apply indexes. I cannot make any changes at this time to the ETL process loading the data into these tables.
    I paste below the create table and insert scripts to recreate my scenario & the query that I am using. Any practical suggestions for query / table optimisation would be greatly appreciated.
    Kind regards
    Mike
    -------------- NOTE: ALL DATA HAS BEEN DE-SENSITISED
    /* --- CODE TO CREATE AND POPULATE TEST TABLES ---- */
    --CREATE MAIN "TIME_TEST" TABLE: THIS TABLE HOLDS CALLED NUMBERS IN A SPECIFIED/PRE-DEFINED FORMAT
    CREATE TABLE TIME_TEST ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                            CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    -- CREATE THE COMPARISON TABLE "TIME_TEST_COMPARE": THIS TABLE HOLDS WHAT SHOULD BE (BUT ISN'T) IDENTICAL CALL DATA.
    -- THE DATA CONTAINS DIFFERING NUMBER FORMATS, SLIGHTLY DIFFERENT CALL TIMES (ALLOW +/-60 SECONDS - THIS IS FOR A GOOD, ALBEIT UNHELPFUL, REASON)
    -- AND DURATIONS (ALLOW +/- 5 SECS)                                        
    CREATE TABLE TIME_TEST_COMPARE ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                       CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    --CREATE INSERT DATA FOR THE MAIN TEST TIME TABLE
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 202);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 08:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 19);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 07:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 35);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 09:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 30);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:18:47 AM', 'MM/DD/YYYY HH:MI:SS AM'), 6);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:20:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 20);
    COMMIT;
    -- CREATE INSERT DATA FOR THE TABLE WHICH NEEDS TO BE COMPARED:
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:51 AM', 'MM/DD/YYYY HH:MI:SS AM'), 200);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '00447721345675', TO_DATE( '11/09/2011 08:10:59 AM', 'MM/DD/YYYY HH:MI:SS AM'), 21);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '07721345675', TO_DATE( '11/09/2011 07:11:20 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675', TO_DATE( '11/09/2011 09:10:01 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675#181345', TO_DATE( '11/09/2011 06:18:35 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 6);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '004477213456759777799', TO_DATE( '11/09/2011 06:19:58 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 17);
    COMMIT;
    /* --- QUERY TO UNDERTAKE MATCHING WHICH REQUIRES OPTIMISATION --------- */
    SELECT MAIN.CALLED_NUMBER AS MAIN_CALLED_NUMBER, MAIN.CALLED_DATE_TIME AS MAIN_CALL_DATE_TIME, MAIN.DURATION AS MAIN_DURATION,
         COMPARE.CALLED_NUMBER AS COMPARE_CALLED_NUMBER, COMPARE.CALLED_DATE_TIME AS COMPARE_CALLED_DATE_TIME,
         COMPARE.DURATION AS COMPARE_DURATION
    FROM
    ( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
      FROM TIME_TEST
    ) MAIN
    LEFT JOIN
    ( SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
      FROM TIME_TEST_COMPARE
    ) COMPARE
    ON INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0
    AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME-(60/86400) AND COMPARE.CALLED_DATE_TIME+(60/86400)
    -- DURATION is a NUMBER of seconds, not a DATE, so the tolerance is +/- 5
    -- and the comparison must be against COMPARE.DURATION:
    AND MAIN.DURATION BETWEEN COMPARE.DURATION - 5 AND COMPARE.DURATION + 5;

    What does your execution plan look like?
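    A standard way to capture and share it (nothing specific to this thread):

    EXPLAIN PLAN FOR
    SELECT ...;  -- paste the matching query here

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);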

  • Disk space usage by table

    Hi all,
    how do I determine disk space usage by table1, table2 ?

    marco wrote:
    Hi all,
    how do I determine disk space usage by table1, table2?

    Use the _SEGMENTS views for this. Make sure to include dependent objects such as indexes if you want to get an idea of the "total" size of the table. Perhaps you could give us more information on your requirements and what you're seeking to accomplish.
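    For example (a sketch against DBA_SEGMENTS; filter on OWNER as well, and add the index/LOB segment names if you want the total footprint):

    SELECT segment_name, segment_type, SUM(bytes) / 1024 / 1024 AS mb
    FROM   dba_segments
    WHERE  segment_name IN ('TABLE1', 'TABLE2')
    GROUP BY segment_name, segment_type;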

  • TEMP usage in Parallel Query  |  Why can't it use the spare RAM?

    Hi All
    I have a query running with parallel degree 8 on Red Hat 64 bit, 10.2.0.4.0, with 16 CPUs and 32GB of RAM.
    My PGA target is 28 GB, SGA target at 2 GB.
    I am using this Oracle instance purely for some datamart ETL, in order to deliver data to a separate reporting database. No other clients hit the instance.
    I watch the instance during the query. Depending on volumes, I will sometimes get TEMP spill. This is not huge - maybe 2 to 4 GB - but even so, I don't want it to spill, as the duration of the batch job is longer than it needs to be.
    I've tried increasing _pga_max_size to use more RAM, and this helps to a point. However, there are some Oracle limits, after which TEMP space comes back into play. I've also tried tinkering with _smm_max_size and _smm_px_max_size, but cannot seem to get any improvement. Changing the degree of parallelism makes little difference.
    Looking at the Linux config, I have reduced swappiness to prevent the Oracle processes from being swapped out. And I can observe that only 4 to 6 GB of actual RAM is being used. I still have 20 - 24 GB completely free.
    So - how do I get Oracle to actually consume this RAM rather than using TEMP space?
    One option is to configure a RAM DISK of 8GB, and build the TEMP tablespace in that location. But is this really the answer? It seems like a 'fudge' rather than a solution.
    The query plan is tuned well, and seems optimal.
    Any help would be appreciated - thanks, Ankle.

    MaskedAnkle wrote:
    The query plan is tuned well, and seems optimal.
    How about showing us the execution plan? The output from v$pq_tqstat after running the query would also be helpful.
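    For example (a minimal query against that view - it is populated in your session right after the parallel statement completes):

    SELECT dfo_number, tq_id, server_type, process, num_rows, bytes
    FROM   v$pq_tqstat
    ORDER BY dfo_number, tq_id, server_type;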
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a "Preview" tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the "Plain text" tab if you want to edit the text to tidy it up.)

  • Usage Tracking table column definition

    Hi Gurus,
    I am not able to visualize the difference / significance of these two columns in S_NQ_ACCT table for usage tracking:
    NUM_CACHE_HITS - {Indicates the number of times existing cache was returned.}
    NUM_CACHE_INSERTED - {Indicates the number of times query generated cache was returned.}
    I understand NUM_CACHE_HITS, which gives the number of times a cache match has occurred, but what is the role / meaning of NUM_CACHE_INSERTED? The documentation says "Indicates the number of times query generated cache was returned" - what is query-generated cache?
    Thanks,
    Sri

    hi jups,
    By default that column will be NULL, but when a query's results get inserted into the cache, the value is updated - this way NUM_CACHE_INSERTED = 1.
    This may also help you: Definitions of time-related fields of usage tracking data
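    A quick way to see the two counters side by side per source (a sketch against S_NQ_ACCT, using the columns from the DDL earlier in this thread):

    SELECT query_src_cd,
           COUNT(*)                AS queries,
           SUM(num_cache_hits)     AS cache_hits,
           SUM(num_cache_inserted) AS cache_inserts
    FROM   s_nq_acct
    GROUP BY query_src_cd;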
    Hope this helps.
    Cheers,
    KK

  • Question about PGA and TEMP usage

    hi,
    we had a situation whereby OEM was reporting that, because the PGA was too small during a time frame, extra I/O on TEMP was being created. Does this mean that TEMP was acting as a pseudo PGA?
    regards

    Have a look at these threads
    Re: PGA memory problem - Oracle 10.2.0.4 on windows 2003
    Re: PGA Memory Usage Details
    Oracle Database FAQs

  • Z- T code usage log table.

    hi All,
    I am looking for a standard table that logs T-code or program usage, to fulfill my requirement.
    E.g. so I can get values like username, run date, run time etc.
    -Thanks
      Amit

    Hello Amit,
    although it was not specifically meant for this purpose, transaction STAD is an alternative way to check the usage of a Z-transaction.
    Usage of transaction STAD:
    enter the starting date / time
    enter the length, which indicates for how long the server should be analysed from the starting date / time
    in the "Transaction" field, enter the transaction name you need to check
    simply press <enter>
    Transaction STAD contains a huge amount of information about program statistics, so the information won't be kept there for long. However, the amount of storable program statistics should be raised only with care, as it can put a huge additional load on the server.
    Best regards,
    Laszlo
