Automatic table partitioning in Oracle 11g

Hi all,
I need to implement automatic table partitioning in Oracle 11g, but the partitioning interval should be daily (one partition per day).
I was able to do this for monthly and yearly intervals, but not daily.
create table part
(a date) PARTITION BY RANGE (a)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
(partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY')));
Table created.
create table part
(a date) PARTITION BY RANGE (a)
INTERVAL (NUMTOYMINTERVAL(1,'YEAR'))
(partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY')));
Table created.

But if I use DD or DAY instead of YEAR or MONTH, it fails. Please suggest how to perform this on a daily basis.
SQL>
  1  create table part
  2  (a date)PARTITION BY RANGE (a)
  3  INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
  4  (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
  5* )
SQL> /
INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
ERROR at line 3:
ORA-14752: Interval expression is not a constant of the correct type
SQL> create table part
(a date)PARTITION BY RANGE (a)
INTERVAL (NUMTOYMINTERVAL(1,'DD'))
(partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
);
INTERVAL (NUMTOYMINTERVAL(1,'DD'))
ERROR at line 3:
ORA-14752: Interval expression is not a constant of the correct type

Please suggest how to resolve this ORA-14752 error when using DAY, DD, or HH24.
-Yasser

Yes, you can get different partitions for different months:
interval (numtoyminterval(1,'MONTH'))
store in (TS1,TS2,TS3)
This will store the partitions in tablespaces TS1, TS2, and TS3 in a round-robin manner.
For day-wise partitioning, use NUMTODSINTERVAL instead: NUMTOYMINTERVAL accepts only 'YEAR' and 'MONTH', which is why ORA-14752 is raised for 'DAY'. ('DD' and 'HH24' are date-format masks, not interval units, and are not valid for either function.) NUMTODSINTERVAL accepts 'DAY', 'HOUR', 'MINUTE', and 'SECOND', so you can write
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
or, more generally, INTERVAL (NUMTODSINTERVAL(n,'DAY')) for an n-day interval.
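For reference, here is a minimal end-to-end sketch of daily interval partitioning (the table name, boundary date, and sample rows are illustrative):

create table part_daily
(a date)
PARTITION BY RANGE (a)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
(partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY')));

-- each row beyond the boundary lands in its own automatically created daily partition
insert into part_daily values (DATE '2007-11-05');
insert into part_daily values (DATE '2007-11-06');

-- verify: one partition per distinct day inserted
select partition_name, high_value
from user_tab_partitions
where table_name = 'PART_DAILY';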

Similar Messages

  • Oracle SQL Developer 1.2 – Automatic -  is compatible with Oracle 11g?

    Is Oracle SQL Developer 1.2 compatible with Oracle 11g?
    Thanks in advance!
    -Babu

    So I'm taking it the question is:
    Is SQL Developer 1.2 compatible with an 11g DB?
    The short answer is yes. We are constantly adding new functionality and better support for the newer databases. For example, to use the Real-Time SQL Monitoring that became available in 11g, you will have to be on at least SQL Developer 2.1. Is there some reason you don't want to upgrade to the latest SQL Developer, if not the current EA version then at least the current production version?
    Thanks,
    Syme

  • Table Partitioning in Oracle 8i

    Can tables be partitioned in Oracle 8i Standard Edition, or do I have to have the Enterprise Edition? I'm trying to deal with backup of a large database, and want to consider performing partial backups to save time/resources. Thanks.

    Hi,
    You need Enterprise Edition for that.
    There is no other way.
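
    If you are not sure what your installation includes, one quick check is to query v$option (a sketch; requires SELECT access to the view):

    select parameter, value
    from v$option
    where parameter = 'Partitioning';

    VALUE is TRUE only on Enterprise Edition with the Partitioning option installed; note that the option is also licensed separately from the EE base license.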

  • Data automatically become null in Oracle 11g DB

    We set up an Oracle 11g database for our application. Yesterday I noted that in the table Contract_Owner the effective date for the owner FQYX1 was 30-Oct-12, but all of a sudden it has become NULL today. I'm not sure how it changed from 30-Oct-12 to NULL.
    Any ideas/thoughts will be appreciated greatly ....

    Oracle wouldn't change a value to a NULL unless it is told to do so.
    It could have been a user , a developer, a power user / super user.
    It could have been application code.
    It could have been a trigger.
    It could have been a mistakenly-written update.
    It could have been a scheduled job or a one-off job.
    Hemant K Chitale
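
    If you want to establish when and by what the value changed, two hedged starting points (both assume sufficient privileges, the flashback query assumes the undo data is still available, and auditing assumes audit_trail is enabled; the column names are illustrative):

    -- what did the row look like before the change?
    select effective_date
    from contract_owner
    as of timestamp (systimestamp - interval '1' day)
    where owner_code = 'FQYX1';

    -- going forward, make the next change attributable
    audit update on contract_owner by access;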

  • Table Partitioning in Oracle 9i

    Hi all,
    I have a question on partitioning in Oracle 9i.
    I have a parent table with primary key A1 and attribute A2. A2 is not a primary key, but I would like to partition the table based on this attribute. I have a child table with attribute B1 being a foreign key to A1.
    I wish to perform a data purge on the parent and child tables. I'll purge the parent table based on A2, but for the child table it will be inefficient to delete all records in the child table where parent.A1 = child.B1. Should I add a new attribute A2 to the child table and partition the child table based on this attribute, or is there a better way to do it?
    Thanks in advance for all replies.
    Cheers,
    Bernard

    Bernard
    Right, 100K in the parent...but how many in the child?
    I guess it comes back to what I said earlier...you can either take the hit on the cascaded delete to get the records out of the child table, or you can denormalise the column down onto the child table in order to partition by it.
    I'm building a data warehouse currently and we're using the denormalise approach on a couple of tables in order to allow them to be equipartitioned and enable easier partition management and DML operations, as you've indicated....but our tables have 100s of millions of rows in them, so we really need to do that for manageability.
    100K records in the parent - provided the ratio to the child is not such that on average each deleted parent has 100s of children - is probably not too onerous, especially for a monthly batch process. The question there would be how much time you have to do this at the end of the month. I'd suggest you set up a quick test and benchmark it with, say, 10K records as a representative sample (you can do all 100K if you have time/space), then assess that load/time against your month-end window....if it's reasonably quick then there is no need to compromise your design.
    You should also consider whether the 100K is going to remain consistent over time or is going to grow rapidly, in which case that would sway you towards adding the denormalisation-for-partitioning approach at the outset. A sketch of that design follows.
    HTH
    Jeff
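
    For what it's worth, a sketch of the denormalised, equipartitioned design described above (all names are illustrative; note that an enabled foreign key blocks dropping a referenced parent partition, so the constraint must be disabled or dropped around the parent-side maintenance):

    create table parent_t (
      a1 number primary key,
      a2 date
    )
    partition by range (a2)
    (partition p2007 values less than (TO_DATE('01-JAN-2008','DD-MON-YYYY')));

    create table child_t (
      b1 number references parent_t(a1),
      a2 date   -- denormalised copy of the parent's A2
    )
    partition by range (a2)
    (partition p2007 values less than (TO_DATE('01-JAN-2008','DD-MON-YYYY')));

    -- the purge becomes partition maintenance instead of a cascaded delete:
    alter table child_t drop partition p2007;
    alter table parent_t drop partition p2007;  -- requires the FK to be disabled first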

  • Existing table compression in oracle 11g

    Hi,
    We have a schema of 45 GB in Oracle 11g and need to compress the tables, as they are rarely used.
    Please can you tell me what options are available to compress the tables?
    Thanks

    Thanks for the update.
    I was able to compress the other tables, which are much smaller in size, and I found one LOBSEGMENT which is 37 GB.
    So how can I use compression on the LOBSEGMENT? I could not find any document for this.
    I found one document on Metalink that says:
    "To achieve LOB compression, you need to specify LOB column storage as SECUREFILE.  With BASICFILE option, you can not use COMPRESSION for LOB column"
    SQL> select owner, segment_name,segment_type , bytes/1024/1024 MB from dba_segments where owner='WEBSPR' order by 4 desc ;
    OWNER                     SEGMENT_NAME              SEGMENT_TYPE                      MB
    WEBSPR                    SYS_LOB0000012869C00002$$ LOBSEGMENT                     37760
    and my existing lobsegment metadata is like below
    LOB ("DATA") STORE AS (
      TABLESPACE "WEBSPR_DATA" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
      NOCACHE LOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))
    and my LOB column is specified as neither SECUREFILE nor BASICFILE, so is there any other method to compress the existing LOBSEGMENT?
    Any inputs appreciated. Thanks
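
    One hedged option on 11g (it needs the Advanced Compression option for SecureFile COMPRESS, an ASSM tablespace, and an outage while the segment is rebuilt) is to move the LOB to SECUREFILE storage with compression. The column and tablespace names below come from the posted metadata; the owner and table names are placeholders:

    ALTER TABLE <owner>.<table_name> MOVE
      LOB ("DATA") STORE AS SECUREFILE (
        TABLESPACE "WEBSPR_DATA"
        COMPRESS MEDIUM
      );

    Any indexes on the table go UNUSABLE after the MOVE and must be rebuilt; DBMS_REDEFINITION is the online alternative if an outage is not acceptable.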

  • Locating user tables in an Oracle 11g database

    Excuse my ignorance on this subject
    But our company has an Oracle 11g database that drives one of our business applications. I am not an Oracle admin and there is very little documentation on the application itself; however, the application seems to have its own set of explicit login credentials (username and password), so I am guessing they are hashed somewhere in the database tables.
    My question would be: are there any default Oracle tables where user credentials would typically be, or tips on tracking down where the password hashes may be? Or can this differ from application to application? Any tips welcome. Apologies for the naivety of the question. My goal is to identify which database accounts can query the table the hashes are in, as we have some users who can access the database for data analysis purposes, but I don't want them to have access to that table.

    user599292 wrote:
    EdStevens wrote:
    user599292 wrote: [the original question, quoted above]
    The information relative to the user accounts is revealed in the view DBA_USERS, which normal users should not have a need to see. However, the passwords are stored as a true hash. The hash cannot be used directly and cannot be reversed, so being able to see the hashed password does not in itself constitute a security risk.
    When a user is being authenticated, Oracle does NOT 'decrypt' the stored password to see if it matches the password presented by the user. Rather, the password presented by the user is hashed, and that hash value is compared against the stored value.
    My concern was: if they could extract those password hash values, there are many free password crackers, and if they run dictionary values against those hashes and any match, they then have some passwords to gain perhaps elevated access in the application.
    Such a method would have to assume a password, know how Oracle 'salts' the password, hash the result, then compare to the hashed values from the table. If you employ even a modicum of password complexity enforcement, I doubt that your developers are going to have access to the kind of computing capacity that would be required to get a positive result within your lifetime.
    You need to do three things.
    First and foremost, adhere to the principle of 'least privilege'. Do not grant a user account any privileges that are not required for that account to complete its business task. That includes access to any tables or views. Be wary of any "ANY" privileges.
    Second, use the password complexity function to enforce a reasonable level of password complexity.
    Third, set the user's profile to expire the password after 'x' number of days and prevent the reuse of a password until after 'y' iterations. A sketch of the second and third points follows.
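
    A sketch of the second and third points (the profile name, user name, and limits are illustrative; in 11g the supplied complexity function verify_function_11G is created by running ?/rdbms/admin/utlpwdmg.sql):

    CREATE PROFILE app_user_profile LIMIT
      PASSWORD_LIFE_TIME       90    -- expire the password after 90 days
      PASSWORD_REUSE_MAX       10    -- no reuse until 10 password changes...
      PASSWORD_REUSE_TIME      365   -- ...and at least a year has passed
      FAILED_LOGIN_ATTEMPTS    5
      PASSWORD_VERIFY_FUNCTION verify_function_11G;

    ALTER USER analyst_user PROFILE app_user_profile;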

  • Table Management in oracle 11g

    I am using an 11g database, and I have to release some space at the tablespace level.
    Here is the situation.
    One of the big tables (CAMPAIGN_REPORT_RAW) is, I guess, heavily fragmented.
    It is created in the INCIH_DATA tablespace (which has 10 datafiles). The INCIH_DATA tablespace occupies 33 GB: 25 GB used space and 8 GB free space. We are using filesystem storage.
    In that filesystem we have to find 10 GB of space for creating a new tablespace. Unfortunately we don't have that much space in the filesystem, so my plan is to shrink the CAMPAIGN_REPORT_RAW table and resize the datafile:
    ALTER TABLE CAMPAIGN_REPORT_RAW SHRINK SPACE;
    ALTER TABLE CAMPAIGN_REPORT_RAW SHRINK SPACE CASCADE;
    alter database datafile '<full_file_name>' resize <size>M;
    for that
    I need your help to get the command for
    1) how to find the size of the table
    2) how to find the used size of the table
    3) how to find the High Water Mark (HWM) of the table
    4) how to find this table occupying which datafile
    Thanks

    Take a guideline from http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/schema003.htm#ADMIN10161
    The size of a table can be seen with:
    select * from dba_segments where segment_name='<your table name>';
    The above query shows the allocated size of the table. The minimum unit of space allocation in Oracle is an 'extent', so even if 90% of an extent is empty you cannot reclaim those blocks; there is little point in counting empty versus used blocks of data.
    You can also find out which tablespace the table belongs to. As for which 'datafile' the table is in, you normally do not need to know that. Why do you want to know it?
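
    A hedged sketch of the sizing query and the shrink itself (SHRINK SPACE requires row movement to be enabled and the table to live in an ASSM tablespace; the table name comes from the post, and the datafile placeholders mirror it):

    -- allocated size of the segment in MB
    select segment_name, segment_type, bytes/1024/1024 mb
    from dba_segments
    where segment_name = 'CAMPAIGN_REPORT_RAW';

    alter table campaign_report_raw enable row movement;
    alter table campaign_report_raw shrink space cascade;  -- lowers the HWM

    -- only after the HWM drops can the datafile be resized downward
    alter database datafile '<full_file_name>' resize <size>M;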

  • Use or not to use table compression in Oracle 11g (11.2)?

    Hi All,
    I was trying to explore the difference between COMPRESS FOR ALL OPERATIONS, COMPRESS FOR DIRECT_LOAD OPERATIONS, and NOCOMPRESS for a table in Oracle 11.2.
    I know we can go through the documentation and make a decision.
    Still, I have run some very simple tests here.
    Case 1. Create a table with COMPRESS FOR DIRECT_LOAD OPERATIONS and then update a few records
    Case 2. Create a table with COMPRESS FOR ALL OPERATIONS and then update a few records
    Case 3. Create a table with NOCOMPRESS and update a few rows
    I know Case 1 is a real dummy, but I still did it to see the difference between Case 1 and Case 2.
    --  ---------- CASE 1 --------
    SQL> create table aaa
      2  nologging
      3  compress for direct_load operations
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.00
    SQL> select count(*) from aaa ;
      COUNT(*)
         50317
    Elapsed: 00:00:00.11
    SQL> update aaa set created=sysdate where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.43
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.04
    SQL>
    --  ---------- CASE 2 --------
    SQL>
    SQL> create table bbb
      2  nologging
      3  compress for all operations
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.01
    SQL> select count(*) from bbb ;
      COUNT(*)
         50318
    Elapsed: 00:00:00.20
    SQL> update bbb set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.31
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.04
    SQL>
    SQL>
    --  ---------- CASE 3 --------
    SQL> create table ccc
      2  nologging
      3  nocompress
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:01.84
    SQL> select count(*) from ccc ;
      COUNT(*)
         50319
    Elapsed: 00:00:00.15
    SQL> update ccc set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:00.06

    Case 1 and Case 2 took 5.43 and 5.31 seconds respectively. Case 3 took 0.06 seconds.
    Difference is drastic.
    Am I doing the wrong kind of test (let's be honest)?
    Should we avoid compression for OLTP systems (or any system with a reasonable volume of updates)?
    Apart from allowing you to drop a column, what is the difference between COMPRESS FOR ALL OPERATIONS and COMPRESS FOR DIRECT_LOAD OPERATIONS? Where/how can I see that difference?
    Thoughts please.
    Thanks in advance.

    Hi,
    I have realised that I am using syntax which is deprecated in 11.2.
    So I am doing the same test with
    COMPRESS BASIC
    COMPRESS FOR OLTP
    instead of
    COMPRESS FOR DIRECT_LOAD OPERATIONS (deprecated)
    COMPRESS FOR ALL OPERATIONS (deprecated)
    But the results are the same. Even with COMPRESS FOR OLTP, my update takes 5.4 seconds, which is not very different from COMPRESS BASIC.
    -- --------- CASE 1 ---------------
    SQL> create table aaa
      2  nologging
      3  compress basic
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.46
    SQL>
    SQL> select count(*) from aaa ;
      COUNT(*)
         50318
    Elapsed: 00:00:00.11
    SQL>
    SQL> update aaa set created=sysdate where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.48
    -- ---------- CASE 2 ---------------
    SQL> create table bbb
      2  nologging
      3  compress for oltp
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.01
    SQL>
    SQL> select count(*) from bbb ;
      COUNT(*)
         50319
    Elapsed: 00:00:00.12
    SQL>
    SQL> update bbb set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.25
    -- ---------- CASE 3 ---------------
    SQL> create table ccc
      2  nologging
      3  nocompress
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:01.81
    SQL>
    SQL> select count(*) from ccc ;
      COUNT(*)
         50320
    Elapsed: 00:00:00.10
    SQL>
    SQL> update ccc set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:00.04

    Any thoughts??
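
    As to the slow updates themselves: with BASIC compression, rows touched by conventional DML are effectively rewritten uncompressed, and with OLTP compression blocks are recompressed in batches as they fill, so some update overhead is expected in both cases. To confirm what compression state each table actually ended up with, a sketch (the COMPRESS_FOR column exists in 11.2):

    select table_name, compression, compress_for
    from user_tables
    where table_name in ('AAA','BBB','CCC');

    Expected output: COMPRESSION is ENABLED for AAA and BBB (COMPRESS_FOR = BASIC and OLTP respectively) and DISABLED for CCC.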

  • Create table, PARTITION, compress ORACLE SUPPORT PLS !

    Can someone PLEASE explain to me the following (read carefully):
    SQL> create table abc
    2 (a number)
    3 PARTITION BY LIST(a)
    4 (PARTITION A_A values (2),
    5 PARTITION A_B values (DEFAULT) COMPRESS);
    Table created.
    SQL> alter table abc add b number;
    alter table abc add b number
    ERROR at line 1:
    ORA-22856: cannot add columns to object tables
    SQL> alter table abc modify partition A_B nocompress;
    Table altered.
    SQL> alter table abc add b number;
    alter table abc add b number
    ERROR at line 1:
    ORA-22856: cannot add columns to object tables
    SQL> drop table abc;
    Table dropped.
    SQL> create table abc
    2 (a number)
    3 PARTITION BY LIST(a)
    4 (PARTITION A_A values (2),
    5 PARTITION A_B values (DEFAULT));
    Table created.
    SQL> alter table abc modify partition A_B compress;
    Table altered.
    SQL> alter table abc add b number;
    Table altered.
    I definitely think this is a BUG!

    14464, 00000, "Compression Type not specified"
    // *Cause: Compression Type was not specified in the Compression Clause.
    // *Action: specify Compression Type in the Compression Clause.

  • Create Table Trigger to replicate data from MSSQL2K5 to Oracle 11G on Linux

    I am trying to create a trigger on my MSSQL 2k5 server so that when a record is inserted, a replicated record is created in a table in an Oracle 11g database on a Linux server (Oracle Linux 6).
    Creating the trigger is easy, but when I test it I get an error stating the following:
    .NetSqlClient Data Provider The operation could not be performed because OLE DB Provider 'OraOLEDB.Oracle' for linked server "<myserver>" was unable to begin the distributed transaction.
    OLEDB Provider "OraOLEDB.Oracle" for linked server "<myserver>" returned: "New transaction cannot enlist in the specified transaction coordinator"
    Here is the T-SQL (the procedure called from the trigger):
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE PROCEDURE insert_aban8_state
        @an8 int,
        @st nvarchar(3)
    AS
    BEGIN
        SET NOCOUNT ON;
        declare @c numeric
        select @c = count(*) from [e9db]..[CRPDTA].[ABAN8_STATE$] where alan8=@an8 and aladds=@st
        if(@c =0)
         begin
            insert into [e9db]..[CRPDTA].[ABAN8_STATE$]
            values(@an8, @st)
         end
        END
    GO
    After reviewing the MS Transaction Coordinator, I am now totally confused. I checked the services and have MS DTC enabled and running, but I am not sure what to do on the Linux side.
    Does Oracle Services for Microsoft Transaction Server (OraMTS) work on Linux? I could only find references to this for Oracle 11g on Windows.
    What do I need to do to enable this replication via an MSSQL table trigger to Oracle 11g on Linux?

    nsidev wrote:
    While I would agree in part, it appears from the message that the trigger is requiring the Transaction Service to be enabled on both the host and target. The point of this post is to determine what, if anything, I need to do on my Oracle DB to allow the trigger to complete successfully.
    There are many posts found with Google concerning the OraMTS service on the Oracle system, but they all appear to be for Windows based systems. My question is, is this service part of the Linux based Oracle DB and if so, how do I initialize it?
    If I am mistaken and this is truly an issue with the MSSQL server, I will replicate the post in those forums. I am just looking for direction and help.
    1) I have NEVER heard that Oracle has, knows about, or supports any "Transaction Service".
    2) Consider what I previously posted regarding the flavor of client source.
    If your assertion about this mythical service were correct, then the Oracle DB would have to be able to "know" that this client connection was originated by SQL Server.
    I don't understand how or why Oracle should behave differently depending upon whether INSERT is done inside or outside a MS SQL Server trigger.
    Please explain & elaborate why Oracle should behave different depending upon the source of any INSERT statement.
    3) From Oracle DB standpoint an INSERT is an INSERT; regardless of the client.

  • The danger of memory target in Oracle 11g - request for discussion.

    Hello, everyone.
    This is not a question, but kind of request for discussion.
    I believe that many of you have heard something about Automatic Memory Management in Oracle 11g.
    The concept is that Oracle manages the target sizes of the SGA and PGA. Yes, believe it or not, all we have to do is tell Oracle how much memory it can use.
    But I have a big concern about this. The optimizer takes the PGA size into consideration when calculating the cost of sort-related operations.
    So what would happen when Oracle dynamically changes the target size of the PGA? Following is a simple demonstration of my concern.
    UKJA@ukja116> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    -- Configuration
    *.memory_target=350m
    *.memory_max_target=350m
    create table t1(c1 int, c2 char(100));
    create table t2(c1 int, c2 char(100));
    insert into t1 select level, level from dual connect by level <= 10000;
    insert into t2 select level, level from dual connect by level <= 10000;
    -- First 10053 trace
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
    where t1.c1 = t2.c1 and t1.c2 = t2.c2;
    alter session set events '10053 trace name context off';
    -- Do aggressive hard parse to make Oracle dynamically change the size of memory segments.
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
      vc       sys_refcursor;
      vs        varchar2(1000);
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 10000000 loop
        execute immediate 'select count(*) from t1 where rownum = ' || (idx+1)
              into va;
        if mod(idx, 1000) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it!');
            exit;
          end if;
        end if;
      end loop;
    end;
    /
    -- In the alert log file:
    25000th execution
    26000th execution
    27000th execution
    28000th execution
    29000th execution
    30000th execution
    yep, I got it! <-- the pga target changed with the 30000th hard parse
    -- Second 10053 trace for same query
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
    where t1.c1 = t2.c1 and t1.c2 = t2.c2;
    alter session set events '10053 trace name context off';

    With the above test case, I found that:
    1. Oracle invalidates the query when internal pga aggregate size changes, which is quite natural.
    2. With the changed pga aggregate size, Oracle recalculates the cost. These are excerpts from both of the 10053 trace files.
    -- First 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 11468 KB
    _smm_px_max_size                    = 28672 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
    -- Second 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 13107 KB
    _smm_px_max_size                    = 32768 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
    Bug Fix Control Environment

    The 10053 trace file clearly says that Oracle recalculates the cost of the query when the internal pga aggregate target size changes. So there is a great danger of an unexpected plan change while Oracle dynamically controls the memory segments.
    I believe that this is a designed behavior, but the negative side effect is not negligible.
    I would just like to hear your opinions on this behavior.
    Do you think this is acceptable? Or is this another great feature that nobody wants to use, like the automatic tuning advisor?
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================

    I made a slight modification to my test case to get a mixed workload of hard parses and logical reads.
    *.memory_target=200m
    *.memory_max_target=200m
    create table t3(c1 int, c2 char(1000));
    insert into t3 select level, level from dual connect by level <= 50000;
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 1000000 loop
        -- try many patterns here!
        execute immediate 'select count(*) from t3 where 10 = mod('||idx||',10)+1' into va;
        if mod(idx, 100) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          for p in (select ksppinm, ksppstvl
              from sys.xm$ksppi i, sys.xm$ksppcv v
              where i.indx = v.indx
              and i.ksppinm in ('__shared_pool_size', '__db_cache_size', '__pga_aggregate_target')) loop
              sys.dbms_system.ksdwrt(2, p.ksppinm || ' = ' || p.ksppstvl);
          end loop;
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it! pat1=' || pat1 ||', pat2='||pat2);
            exit;
          end if;
        end if;
      end loop;
    end;
    /

    This test case showed an expected and reasonable result, like the following:
    100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    300th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    400th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    500th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 37748736
    __pga_aggregate_target = 58720256
    yep, I got it! pat1=83886080, pat2=58720256

    Oracle kept bouncing memory between the shared pool and the buffer cache, and around the 1200th execution it suddenly stole some memory from the PGA target area to increase the db cache size.
    (I'm still in the dark on this automatic memory target management in 11g. More research needed!)
    I think this is very clear and natural behavior. I just want to point out that it could result in an unwanted catastrophe in special cases, especially where there are logic holes and bugs.
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================
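
    One hedged mitigation for the plan-stability concern raised above: under AMM you can keep memory_target and additionally set sga_target and pga_aggregate_target, which then act as floors that the automatic tuning will not shrink below, e.g.:

    *.memory_target=350m
    *.sga_target=200m             # minimum SGA under AMM
    *.pga_aggregate_target=100m   # minimum PGA, steadying the _smm_max_size the optimizer costs with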

  • Table partitioning in Oracle 9i

    Does table partitioning in Oracle 9i have anything to do with Oracle licensing?

    Yes. When you log in using SQL*Plus, you get this message:
    SQL*Plus: Release 9.2.0.3.0 - Production on Tue Jun 30 13:36:25 2009
    Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.4.0 - Production
    SQL>

    Cheers
    Sarma.

  • Oracle 11g R1 silent install, skip final input

    Hello all,
    I am attempting to automate the install of the Oracle 11g R1 software. I have the silent install working correctly. Upon completion it says:
    "The installation of Oracle Database 11g was successful.
    Please check '/u01/app/oracle/oraInventory/logs/silentInstall<date>.log' for more details."
    It then sits and waits for the user to hit Enter or any key to exit. Given that I am trying to do all of this and more via a shell script, I cannot give it an input manually. Is there a way to force the installation to exit at this point (via a parameter or some sort of shell command, or however) and continue with the next step in the shell script I have written?
    Your help is appreciated, thank you.

    I found that there is a "-nowait" parameter that does exactly what I want, but there is no Linux counterpart. I have also tried -noconsole, thinking that if no console is allocated it wouldn't prompt, but it still does. Any help would be appreciated.

  • How to connect from Oracle 11g to SQL Server 2008 R2

    Hi,
    Is it possible to connect from Oracle 11g on AIX to SQL Server 2008 R2? If so, what is the preferred method?
    SQL Server has the original table. From Oracle 11g, we want to access the data in SQL Server in real time.
    Thank You
    Sarayu

    Hi,
    Have a look at these Oracle notes for the full information on the gateways -
    Master Note for Oracle Gateway Products (Doc ID 1083703.1)
    Functional Differences Between DG4ODBC and Specific Database Gateways (Doc ID 252364.1)
    Gateway and Generic Connectivity Licensing Considerations (Doc ID 232482.1)
    How to Setup DG4MSQL (Oracle Database Gateway for MS SQL Server) 64bit Unix OS (Linux, Solaris, AIX,HP-UX) (Doc ID 562509.1)
    How to Configure DG4ODBC on 64bit Unix OS (Linux, Solaris, AIX, HP-UX Itanium) to Connect to Non-Oracle Databases Post Install (Doc ID 561033.1)
    The Database Gateway for SQL*Server (DG4MSQL) needs a separate license but the Database Gateway for ODBC (DG4ODBC) is included in your RDBMS license. You only need to provide the third party ODBC driver needed by DG4ODBC.
    Regards,
    Mike
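
    For the DG4ODBC route, the moving parts look roughly like this (a sketch; the DSN, paths, and alias names are all illustrative):

    # $ORACLE_HOME/hs/admin/initDG4ODBC.ora
    HS_FDS_CONNECT_INFO = mssql_dsn               # ODBC DSN defined in odbc.ini
    HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so # the ODBC driver manager library

    Add a SID_DESC with SID_NAME=DG4ODBC and PROGRAM=dg4odbc to the listener.ora, add a tnsnames.ora alias pointing at that SID containing (HS=OK), then in the Oracle database:

    create database link mssql_link
      connect to "sqluser" identified by "sqlpass"
      using 'dg4odbc_alias';

    select * from some_table@mssql_link;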
