Issue while Instantiating a table in logical standby

I am trying to instantiate a table in my logical standby and I am getting this error:
ORA-39006: internal error
ORA-06512: at "SYS.DBMS_LOGSTDBY", line 636
ORA-06512: at line 1
I used the command below to instantiate the table:
EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('FLXUSER','JOB_ACTION', 'NGMES_PROD')
I have also granted all the required privileges, i.e. LOGSTDBY_ADMINISTRATOR and DBA, but I am still getting the error.
In the alert log I found warnings from the import, and the job appears to be stuck.
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=31, OS id=4648
         to execute - SYS.KUPM$MCP.MAIN('SYS_IMPORT_TABLE_01', 'LH137', 'KUPC$C_1_20131218044529', 'KUPC$S_1_20131218044529', 0);
Wed Dec 18 04:47:54 2013
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=51, OS id=3076
         to execute - SYS.KUPM$MCP.MAIN('SYS_IMPORT_TABLE_01', 'LH137', 'KUPC$C_1_20131218044754', 'KUPC$S_1_20131218044754', 0);
Wed Dec 18 04:49:41 2013
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=52, OS id=4128
         to execute - SYS.KUPM$MCP.MAIN('SYS_IMPORT_TABLE_01', 'FLXUSER', 'KUPC$C_1_20131218044941', 'KUPC$S_1_20131218044941', 0);
which I think are just warnings.
Kindly help me out.

Yes, that could very well be the reason. You can try recompiling. If that does not fix it, you can reload the whole Data Pump utility by using the following Metalink note:
How To Reload Datapump Utility EXPDP/IMPDP (Doc ID 430221.1)
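Before reloading, a first check worth trying is recompiling invalid dictionary objects with the Oracle-supplied utlrp.sql script. This is only a hedged sketch of that step; follow the note above for the version-specific procedure:
CONN / AS SYSDBA
-- List invalid SYS-owned objects (the Data Pump packages are owned by SYS)
SELECT object_name, object_type FROM dba_objects
 WHERE owner = 'SYS' AND status = 'INVALID';
-- Recompile all invalid objects in the database
@?/rdbms/admin/utlrp.sql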

Similar Messages

  • Creating new tables in Logical Standby database

    Hi
    I have a requirement to create new tables in the logical standby database. These tables will not be present on the primary database. Is it possible to do this?
    I already have a new schema created which has the privilege to create tables.
    I have stopped the logical standby apply.
    When I now try to create a new table, it fails with the error: insufficient privileges.
    When I try to run the statement below from the new schema, it also fails with insufficient privileges.
    alter session disable dataguard;
    Please help.

    user8819121 wrote:
    Thanks Mahir,
    I was able to create the table after logging in as sysdba.
    But I need my user to execute DML statements on that table. My user has privileges to insert, delete and update any table.
    I tried the following statement to disable the guard, but it is still not working:
    ALTER DATABASE GUARD STANDBY.
    Do I need to skip the tables created, using the dbms_logstdby package, so that they are not part of SQL Apply? I guess not, since the table is not in the primary database.
    Amit
    You can only skip objects that exist on the primary; the schema you created on the standby is not on the primary.
    In that case you must change the database guard status to NONE. NONE means there is no guard protection at all on your data.
    With guard status NONE, data in any schema can be changed.
    Please check this link: http://docs.oracle.com/cd/E11882_01/server.112/e10700/manage_ls.htm#CHDGFGHG
    The following tests are on a user created before the guard status was changed from ALL to STANDBY.
    C:\Users\Administrator>sqlplus / as sysdba
    SQL> conn test/test
    Connected.
    SQL> select table_name from user_tables;
    TABLE_NAME
    T
    SQL> insert into t values(22);
    insert into t values(22)
    ERROR at line 1:
    ORA-16224: Database Guard is enabled
    SQL> conn / as sysdba
    Connected.
    SQL> select guard_Status from  v$database;
    GUARD_S
    ALL
    SQL> alter  database guard standby;
    Database altered.
    SQL> conn test/test
    Connected.
    SQL> insert into t values(1);
    insert into t values(1)
    ERROR at line 1:
    ORA-16224: Database Guard is enabled
    SQL> conn / as sysdba
    Connected.
    SQL> select guard_Status from  v$database;
    GUARD_S
    STANDBY
    SQL> alter  database guard none;
    Database altered.
    SQL> select guard_Status from  v$database;
    GUARD_S
    NONE
    SQL> conn test/test
    Connected.
    SQL> insert into t values(1);
    1 row created.
    SQL> commit;
    Commit complete.
    And now I want to share new tests with you :)
    Now user creating when after guard status change
    SQL> drop  user test cascade;
    User dropped.
    SQL> select guard_status from v$database;
    GUARD_S
    STANDBY
    SQL> create user test identified by test;
    User created.
    SQL> grant create session,  resource, create table to test;
    Grant succeeded.
    SQL> conn test/test
    Connected.
    SQL> create table t (n number);
    Table created.
    SQL> insert into t values(1);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>
    It means that when the guard status is ALL, every user-created object is guarded.
    When you change the status to STANDBY, the logical standby guards only the primary schemas and any schema created before the change.
    NONE does not guard any schema; it means you can change or delete the standby schema's data too.
    Regards
    Mahir M. Quluzade
    Edited by: Mahir M. Quluzade on Apr 19, 2013 4:07 PM

  • Audit DBA Activity, skip table from logical standby!

    Dear All,
    My database is 10gR2 on windows 2003 server.
    I want to know if I can put some auditing on the commands used to skip tables from the logical standby (execute dbms_logstdby.skip) and likewise on unskipping objects (execute dbms_logstdby.unskip).
    Thanks, Imran

    Hi,
    Given that your database is 10g: fine-grained auditing was extended in 10g to cover DML, where 9i covered only SELECT, but it does not audit procedure calls. You could double-check the fine-grained auditing options in 10g, but I don't think they extend to the DBMS_ packages.
    I would consider writing a trigger or a small job that monitors the DBA_LOGSTDBY_SKIP view for additional entries.  This is the only workaround that I can suggest that might fit your needs.
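    As a rough illustration of that workaround (a hedged sketch only; the snapshot table name is made up), a small job could periodically compare the current contents of DBA_LOGSTDBY_SKIP against a snapshot and flag rules that were added or removed:
    -- One-time snapshot of the current skip rules (hypothetical table name)
    CREATE TABLE logstdby_skip_snapshot AS
      SELECT error, statement_opt, owner, name FROM dba_logstdby_skip;
    -- Rules added since the snapshot
    SELECT error, statement_opt, owner, name FROM dba_logstdby_skip
    MINUS
    SELECT error, statement_opt, owner, name FROM logstdby_skip_snapshot;
    -- Rules removed (unskipped) since the snapshot
    SELECT error, statement_opt, owner, name FROM logstdby_skip_snapshot
    MINUS
    SELECT error, statement_opt, owner, name FROM dba_logstdby_skip;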

  • Updating tables in logical standby database

    Dear DBAs,
    Is it possible to update non-replicated tables in the logical standby database when they belong to the same schema name?
    "Alter session disable guard" works only for the current session; in fact I want it for all connected users, without stopping the standby apply.

    Hi,
    Let's say in the primary database I have 10 tables, and using RMAN I have created a physical standby database and then converted it to a logical standby.
    So now I have the same schema and the same tables on both servers.
    The point of using a logical standby is to be able to run DML transactions.
    The issue is that of the 10 tables I need 9 to be replicated and the last one not to be replicated; the application connected to the logical standby will use only that table to update the user record (login date, logout date, and so on).
    The problem is that when the user connects to the database, the application inserts a record like "insert into SCHEMANAME.tablename....."
    That is, the schema name is hard-coded into the insert statement.
    Thank you in advance
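    A hedged sketch of one way to handle the non-replicated table (SCHEMANAME and TABLENAME below are the poster's placeholders, and SQL Apply must be stopped while the rules are added): register skip rules so SQL Apply ignores that one table, then verify on your release whether the table is writable under guard status STANDBY or whether sessions still need ALTER SESSION DISABLE GUARD:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    -- Skip both DML and DDL for the one table that should not be replicated
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', -
         schema_name => 'SCHEMANAME', object_name => 'TABLENAME');
    SQL> EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'SCHEMA_DDL', -
         schema_name => 'SCHEMANAME', object_name => 'TABLENAME');
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;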

  • Create table on Logical Standby Database

    Dear DBAs,
    here is my situation;
    1. My primary database (where the tables are owned by "ICC", a user with DBA) is used for insert/update transactions.
    2. The logical standby DB (Data Guard configuration with MAX AVAILABILITY) is used to generate reports, after updating some tables owned by the same user "ICC". Note that these tables are excluded from the replication.
    3. The developers might need to create more tables of this kind to generate other kinds of reports.
    The issue is that whether the standby apply is enabled or disabled, the user ICC is not able to create a new table; it gives ORA-01031: insufficient privileges. Note that it is not practical to ask the DBA to disable the apply for every table creation.
    Do you have an idea about how to resolve this?
    My database is 10gR2, patch 10.2.0.4.0, on Windows 2003.
    Thank you in advance

    If you stop applying logs on the logical standby, you can easily create a table over there. See the Oracle docs. The following list shows how to re-create a table and restart SQL Apply on that table:
    Stop SQL Apply:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    Ensure no operations are being skipped for the table in question by querying the DBA_LOGSTDBY_SKIP view:
    SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;
    ERROR  STATEMENT_OPT  OWNER  NAME        PROC
    N      SCHEMA_DDL     HR     EMPLOYEES
    N      DML            HR     EMPLOYEES
    N      SCHEMA_DDL     OE     TEST_ORDER
    N      DML            OE     TEST_ORDER
    Because you already have skip rules associated with the table that you want to re-create on the logical standby database, you must first delete those rules. You can accomplish that by calling the DBMS_LOGSTDBY.UNSKIP procedure. For example:
    SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', -
         schema_name => 'HR', object_name => 'EMPLOYEES');
    SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'SCHEMA_DDL', -
         schema_name => 'HR', object_name => 'EMPLOYEES');
    Re-create the table HR.EMPLOYEES with all its data in the logical standby database by using the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure. For example:
    SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name => 'HR', -
         object_name => 'EMPLOYEES', -
         dblink => 'BOSTON');
    Start SQL Apply:
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    Regards.

  • Skipping dependent Tables in Logical Standby

    Hello DBAs
    I need your expertise here. Let me explain the scenario. Suppose a table is skipped in a logical standby. This table is referred to by other tables and there are dependencies on it. Now my question is: what happens when a transaction that depends on this table is committed at the primary?
    Does the transaction go through even though that table is not replicated? What happens to data integrity?
    I appreciate your help. Thanks.

    Have a go at
    [The Documentation...|http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ls.htm#SBYDB00800]

  • Issue while Updating a table having Unique Secondary Index

    Hi,
    I am trying to update a 'Z' table in which 5 fields comprise the primary key. Two of those key fields are also defined as part of a secondary index with the 'Unique' option selected.
    As per the requirement, I am trying to update the table using a modify statement, so whenever this statement occurs it will check the primary keys and accordingly try to m

    958572 wrote:
    Hi,
    We have observed the exception 'JDBC activity timed out while updating the table - Table1' in our logs in the prod environment.
    1) We verified the AWR report for that particular time and observed that one update statement was trying to update the table Table1.
    It's a simple update statement, as below:
    UPDATE TABLE1 SET COL1=VAL1, COL2=VAL2 WHERE ID1=VAL1 AND ID2=VAL2;
    There is a PK index on the ID2 column.
    2) We also came to know that there were no locks on TABLE1 during this time.
    Can someone please let me know what could be the possible reason for this kind of exception?
    Thanks
    OS/networking misconfiguration.
    Oracle does not know or care about the type or flavor of remote client (JDBC, OCI, ODBC, etc).
    Oracle's default configuration contains no timeout.
    I suspect a FireWall setting.

  • Issues while joining two tables as the joining column has duplicate values - Please help!

    Hi,
    I have a table A, which has a few columns including an Amount column, and I am joining this table A to table B. The joining column in B has duplicates, so the number of records increases after the join. As per the requirement, when I create a table
    after joining the tables and sum the salary column, there is a difference in the total. How can I solve this? Can you please help me?
    Here is the DDL and sample values
    create table #student (sid int, name varchar(10),salary int)
    create table [#address] (sid int, city varchar(10),grade char(1),linenumber int)
    insert into #student values (1,'sachin',8000)
    insert into #student values (2,'Dhoni',2000)
    insert into #student values (3,'Ganguly',7000)
    insert into #student values (4,'Kohli',1000)
    insert into [#address] values(1,'mumbai','A',1)
    insert into [#address] values(1,'mumbai','B',2)
    insert into [#address] values(1,'mumbai','C',3)
    insert into [#address] values(1,'mumbai','D',4)
    insert into [#address] values(2,'JARKHAND','D',3)
    insert into [#address] values(2,'JARKHAND','D',4)
    SELECT S.SID,NAME,salary,CITY ,grade,linenumber
    into #FINAL
    FROM #STUDENT S
    LEFT JOIN #ADDRESS A
    ON S.SID=A.SID
    SELECT SUM(salary) FROM #FINAL
    --44000
    The final result should be 18000, but it is coming out as 44000. Can you please help me get the correct result? What do I do in the join?
    In my real project I am joining 5 tables; each table has more than 30 columns, and a few of the joining tables' join columns have duplicates. I have simplified the issue so that I can ask the question clearly, so while answering, please keep that in mind.
    Thanks in advance for your help!

    SELECT S.SID, NAME, salary, CITY
    INTO #FINAL
    FROM #STUDENT S
    LEFT JOIN (SELECT DISTINCT sid, city
               FROM #address) A
    ON S.SID = A.SID
    This will do a join of the student table against the address rows deduplicated on (sid, city), so the address selection will be:
    sid  city
    1    mumbai
    2    JARKHAND

  • Issue while creating ADF Table programatically

    Hi,
    I am trying to create a table programmatically, and I implemented it like below:
                RichTable phoneTable = new RichTable();
                phoneTable.setEmptyText("No Phone Details yet");
                getContactPhone(contactObj.getPhone());
                phoneTable.setValue(contactObj.getPhone());
    // contactObj.getPhone() is a ArrayList<TelephoneBOD> in pojo Object...
    // Which is taken from a Object returned from DataControl Method (Captured in pageFlowScope)
                phoneTable.setVar("row");
                // Add Columns
                RichColumn column = new RichColumn();
                column.setHeaderText("Type");
                column.setId("phoneType");
                column.setAlign("right");
                column.setWidth("100");
                // Set output.
                RichOutputText output = new RichOutputText();
                output.setValue("#{row.phoneType}");
                // Add output into column.
                column.getChildren().add(output);
                // Add column into table.
                phoneTable.getChildren().add(column);
    When I try to implement it like this, I am getting the following error:
    popup:
    ZIP_STATE_FAILED
    ADF_FACES-60097: For more information, please see the server's error log for an entry beginning with: ADF_FACES-60096: Server Exception during PPR, #2
    log:
    Caused By: java.io.NotSerializableException: org.ieee.internal.ws.proxy.conf.types.TelephoneBOD
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1156)
         at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
         at java.util.ArrayList.writeObject(ArrayList.java:570)
         at sun.reflect.GeneratedMethodAccessor252.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
         at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
         at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1338)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1146)
         at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1338)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1146)
         at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1338)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1146)
         at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
         at org.apache.myfaces.trinidad.component.TreeState.writeExternal(TreeState.java:239)
         at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1421)
         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1390)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
         at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1338)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1146)
         at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
         at org.apache.myfaces.trinidad.component.TreeState.writeExternal(TreeState.java:241)
         at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1421)
         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1390)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
         at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1338)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1146)
         at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
    I think this is happening because I am not serializing the object passed to the table. If that is the reason, how do I serialize the object?
    Or am I missing something in the code?
    Thanks
    Thoom
    Edited by: User007 on Aug 31, 2011 1:47 PM
    Edited by: User007 on Aug 31, 2011 1:47 PM

    Thanks Timo for the simple solution.
    Actually, TelephoneBOD is auto-generated when the web service proxy is created from the web service.
    I don't think updating those Java files directly is a best practice, because they will change whenever the web service schema changes.
    Do you think there is any other way to do it?
    Thanks
    Thoom

  • Trigger Issue while updating same table

    Hi all,
    I am creating an after-insert trigger on the table tests, which checks whether the application id is null; if so, the application id is fetched from another table and the tests table is updated. But this trigger is not updating the application id of the tests table.
    CREATE OR REPLACE TRIGGER TB_REC_APPL_TESTS1
    AFTER INSERT ON tests FOR EACH ROW
    DECLARE
      v_application_id    NUMBER;
      v_rec_appl_tests_id NUMBER;
      v_a_recruit_id      NUMBER;
      PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
      v_rec_appl_tests_id := :NEW.rec_appl_tests_id;
      v_a_recruit_id      := :NEW.a_recruit_id;
      IF :NEW.a_applic_id IS NULL THEN
        SELECT a_applic_id INTO v_application_id FROM recruit WHERE a_recrut_id = v_a_recruit_id;
        dbms_output.PUT_LINE(v_application_id||'-'||v_rec_appl_tests_id);
        UPDATE tests SET a_applic_id = v_application_id
        WHERE rec_appl_tests_id = v_rec_appl_tests_id; --:NEW.rec_appl_tests_id;
      END IF;
      COMMIT;
    END;
    Thanks in advance,
    Pal

    user546710 wrote:
    Hi all,
    I am creating one after insert trigger on table tests, which will check, if application id is null, application will be fetched from another table and update the table
    tests table. But this trigger is not updating the application id of the table tests
    CREATE OR REPLACE
    TRIGGER TB_REC_APPL_TESTS1
    AFTER INSERT ON tests FOR EACH ROW
    DECLARE
    v_application_id NUMBER;
    v_rec_appl_tests_id NUMBER;
    v_a_recruit_id NUMBER;
    PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
    v_rec_appl_tests_id := :NEW.rec_appl_tests_id;
    v_a_recruit_id := :NEW.a_recruit_id;
    IF :NEW.a_applic_id IS NULL THEN
    SELECT a_applic_id INTO v_application_id FROM recruit WHERE a_recrut_id = v_a_recruit_id;
    dbms_output.PUT_LINE(v_application_id||'-'||v_rec_appl_tests_id);
    UPDATE tests SET a_applic_id = v_application_id
    WHERE rec_appl_tests_id = v_rec_appl_tests_id;--:NEW.rec_appl_tests_id;
    END IF;
    commit;
    END;
    Thanks in advance,
    Pal
    You are creating a trigger on a table and updating that same table from within it. It will not allow the update, since you are firing the trigger on the same table.
    The best practice is to do the update against another table, or to handle it in another way (a rough alternative is sketched below).
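    As a hedged alternative (a sketch only, reusing the poster's table and column names and assuming they exist as shown), the usual pattern is to populate the column in a BEFORE INSERT row trigger, which sets the value on the row itself and avoids both the separate UPDATE and the autonomous transaction:
    CREATE OR REPLACE TRIGGER tb_rec_appl_tests1
    BEFORE INSERT ON tests
    FOR EACH ROW
    WHEN (NEW.a_applic_id IS NULL)
    BEGIN
      -- Look up the application id for the inserted recruit and set it on the new row directly
      SELECT a_applic_id
        INTO :NEW.a_applic_id
        FROM recruit
       WHERE a_recrut_id = :NEW.a_recruit_id;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        :NEW.a_applic_id := NULL;  -- leave it null if no matching recruit row exists
    END;
    /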

  • Logical Standby Problem

    My environment: the primary database is 11.1.0.7 64-bit on Windows 2003 Enterprise 64-bit. The logical standby is on the same platform and Oracle version but a different server. I created a physical standby first and it applied the logs quickly without any issues, and I received no errors when I converted it to a logical standby database.
    The problem is that as soon as I issue the command "alter database start logical standby apply;" the CPU usage goes to 100% and SQL Apply takes a long time to apply a log. When I was doing this on 10g I never ran into this; as soon as a log was received, it was applied within a couple of minutes. I don't think it is a memory issue, since there is plenty on the logical standby server.
    I just can't figure out why SQL Apply is so slow and the CPU usage skyrockets. I went through all of the steps in the guide "Managing a Logical Standby Database" from Oracle and I don't see anything wrong. The only difference between the two databases is that on the primary I have large page support enabled, and on the logical I don't. Any help would be greatly appreciated; I need to use this logical standby for reporting.

    Thanks for the responses. I have found what is causing the problem. I kept noticing that the statements it was slowing down on were the ones where data was being written to the SYS.AUD$ table in the System tablespace on the Logical Standby database. A quick count of the records showed that I had almost 6 million records in that table. After I decided to truncate SYS.AUD$ on the Logical, the archive logs started to apply normally. I wonder why the Logical has a problem with this table and the Primary doesn't. I didn't even know auditing was turned on on the Primary database, it must be enabled by default. Now I know why my System table space has grown from 1gb to 2gb since November.
    Now that I have fixed it for the moment, I am unsure what to do to keep this from happening again. Can I turn off auditing on the logical and keep it on for the primary? Would this stop data from being written to the SYS.AUD$ table on the logical? It doesn't appear that Oracle offers any kind of cleanup on this table; I guess I can just clean it out occasionally, but that is just another thing to add to the list of maintenance tasks. I notice that you can also write this audit data to a file on the OS. Has anyone here done that?
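    For the follow-up questions, a hedged sketch of the checks involved (the commands below are standard, but whether it is safe to set the parameter differently on the standby should be verified for your configuration): the audit_trail parameter controls whether audit records go to SYS.AUD$ ('DB') or to OS files ('OS'), and it is a static parameter, so changing it requires a restart.
    -- Where is auditing being written? (DB = SYS.AUD$, OS = files under audit_file_dest)
    SQL> SHOW PARAMETER audit_trail
    SQL> SHOW PARAMETER audit_file_dest
    -- How big is the audit table on this instance?
    SQL> SELECT COUNT(*) FROM sys.aud$;
    -- Redirect audit records to OS files instead of SYS.AUD$ (static parameter, restart needed)
    SQL> ALTER SYSTEM SET audit_trail = 'OS' SCOPE = SPFILE;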

  • Resynchronization in  logical standby db

    hi,
    I am new to Oracle 10g logical standby databases.
    Last night I added some new tables and views on the primary database, but the new tables and views have not shown up in the logical standby database for the last 20 hours.
    Please tell me how I can get the new tables onto the logical standby server.

    Hi,
    For your issue I suggest closing your thread here (by changing the thread status to answered) and re-posting it under Forum Home » Database » Data Guard, where you will get a quicker response.
    Regards,
    Helios
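    For reference, the approach quoted earlier in this thread applies here too. A hedged sketch, assuming a database link from the standby to the primary (PRIMDB below is a made-up name, as are the schema and table) and that DDL for these objects is not being skipped:
    -- On the logical standby, check how far SQL Apply has progressed
    SQL> SELECT applied_scn, newest_scn FROM dba_logstdby_progress;
    -- If a table is still missing, re-create it with its data from the primary over the dblink
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('SCOTT', 'NEW_TABLE', 'PRIMDB');
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;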

  • Error while shrinking a table

    Hi All,
    I am facing an issue while shrinking a table, PARAMETER_DETAIL, which is a partitioned IOT.
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bi
    PL/SQL Release 10.2.0.2.0 - Production
    CORE    10.2.0.2.0      Production
    TNS for Solaris: Version 10.2.0.2.0 - Production
    NLSRTL Version 10.2.0.2.0 - Production
    SunOS usa0300ux636 5.10 Generic_138888-03 sun4v sparc SUNW,Sun-Blade-T6320
    When I run the query below, I get this error:
    alter table PARAMETER_DETAIL SHRINK space CASCADE;
    alter table PARAMETER_DETAIL SHRINK space CASCADE
    *
    ERROR at line 1:
    ORA-10635: Invalid segment or tablespace type
    Moreover, I have checked in dba_segments; the output is:
    SQL> l
      1  select unique SEGMENT_NAME,PARTITION_NAME,SEGMENT_TYPE,TABLESPACE_NAME from dba_segments where SEGMENT_NAME='PARAMETER_DETAIL'
      2* and OWNER='CDE'
    SEGMENT_NAME        PARTITION_NAME     SEGMENT_TYPE     TABLESPACE_NAME
    PARAMETER_DETAIL    PD_201107220130    INDEX PARTITION  DATAFEED
    PARAMETER_DETAIL    PD_201107211630    INDEX PARTITION  DATAFEED
    PARAMETER_DETAIL    PD_201107212030    INDEX PARTITION  DATAFEED
    PARAMETER_DETAIL    PD_201107212100    INDEX PARTITION  DATAFEED
    PARAMETER_DETAIL    PD_201107210330    INDEX PARTITION  DATAFEED
    PARAMETER_DETAIL    PD_201107210630    INDEX PARTITION  DATAFEED
    PARAMETER_DETAIL    PD_201107210830    INDEX PARTITION  DATAFEED
    PARAMETER_DETAIL    PD_201107211030    INDEX PARTITION  DATAFEED
    1490 rows selected.
    SQL> select TABLESPACE_NAME,SEGMENT_SPACE_MANAGEMENT from dba_tablespaces;
    TABLESPACE_NAME SEGMEN
    SYSTEM MANUAL
    UNDOTBS MANUAL
    SYSAUX AUTO
    TEMP MANUAL
    USERS AUTO
    DATAFEED AUTO
    6 rows selected.
    Please help me out to shrink this table.
    Thanks

    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/schema.htm#ADMIN10161
    Shrink operations can be performed only on segments in locally managed tablespaces with automatic segment space management (ASSM). Within an ASSM tablespace, all segment types are eligible for online segment shrink except these:
    IOT mapping tables
    Tables with rowid based materialized views
    Tables with function-based indexes
    Since this is an IOT, any chance there is an IOT mapping table?
    Since this appears to be a data warehouse, any chance there are ROWID based materialized views or function-based indexes?
    Justin
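    A hedged sketch of the checks Justin suggests, using standard dictionary views (adjust the owner, shown here as 'CDE' to match the earlier query):
    -- Are there IOT mapping tables (or other IOT segments) for this owner?
    SQL> SELECT table_name, iot_type FROM dba_tables
         WHERE owner = 'CDE' AND iot_type IS NOT NULL;
    -- Any function-based indexes on PARAMETER_DETAIL?
    SQL> SELECT index_name, index_type FROM dba_indexes
         WHERE table_owner = 'CDE' AND table_name = 'PARAMETER_DETAIL'
           AND index_type LIKE 'FUNCTION-BASED%';
    -- Any materialized view logs on it, and are they ROWID-based?
    SQL> SELECT log_owner, master, rowids FROM dba_mview_logs
         WHERE log_owner = 'CDE' AND master = 'PARAMETER_DETAIL';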

  • Create an Object in Logical standby database in Oracle 10G

    Hi,
    I have a logical standby database in Oracle 10g R2 for reporting purposes. Now I want to create a procedure in the logical standby that creates a new temp table in the logical standby, containing data generated from a select on an existing table.
    Can I create a new user and add a new tablespace in the logical standby database, such that the user can insert, update and delete data in the standby database's own tables?
    Kindly provide me the steps to implement all this. What will be the effect if I set guard_status in v$database to NONE on the logical standby?
    Thanks,
    Shital Patel

    Hi Shital,
    Guard_status protects the data from being changed.
    ALL - By default it is not possible for a non-privileged user to modify data on a Data Guard SQL Apply database, because the database guard is automatically set to ALL.
    With this level of security, only the SYS user can modify the data.
    STANDBY - When you set this level of security, users are able to modify data that is not maintained by the logical apply engine.
    NONE - Permits any user to access and modify data in the standby database as long as they have the correct privileges. This is the normal security for all data in the database.
    You can change the guard status value from ALL to NONE in order to allow non-privileged users to modify data, and yes, you can create a user and an extra tablespace in the logical standby database; this is one of the advantages of using a logical standby database.
    SQL> ALTER DATABASE GUARD NONE;
    Thanks
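    As a narrower, hedged alternative to dropping the guard for the whole database (a sketch only; the session-level command typically requires the ALTER DATABASE privilege), the guard can also be bypassed per session, or relaxed only to STANDBY:
    -- Check the current guard level
    SQL> SELECT guard_status FROM v$database;
    -- Bypass the guard only in the current privileged session (e.g. to run the reporting procedure)
    SQL> ALTER SESSION DISABLE GUARD;
    -- ... create the temp table, load it ...
    SQL> ALTER SESSION ENABLE GUARD;
    -- Or relax the guard database-wide to STANDBY, which still protects data maintained by SQL Apply
    SQL> ALTER DATABASE GUARD STANDBY;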

  • Controlling "Log apply" on Logical standby.

    Hi,
    We are going live with a logical standby in a day or two, and suddenly found what seems to be a potential problem in the near future.
    Despite issuing the command
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    logs are not getting applied immediately. Each logfile takes somewhere around 5 to 10 minutes. That was considered not so serious a problem. But sometimes we find that there are too many logfiles left behind, and it takes more than a day for them to be applied. I don't know how to prevent such a situation, or how to get them all applied if such a situation arises.
    DB Version: 10.2.0.4
    O/S: RHEL 4
    Logfile size: 50M
    Please help. I will be prepared with all documents to face this situation.
    Aswin.
    Edited by: ice_cold_aswin on May 6, 2010 7:12 PM

    In Logical Standby the redo generated on the primary is converted to SQL and applied at the Logical Standby DB.
    Check the network to see if there is any latency, and also check views like v$system_event to get an overview of the wait events on the primary DB.
    Also refer to the docs for the logical standby wait events.
    Edited by: user8710159 on May 6, 2010 10:15 AM
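    To quantify where SQL Apply is spending its time on the standby, a hedged sketch using the standard logical standby views (the MAX_SERVERS change at the end is only an example of a tunable, not a recommendation for this system):
    -- How far behind is SQL Apply?
    SQL> SELECT applied_scn, newest_scn FROM dba_logstdby_progress;
    -- What are the apply processes doing right now?
    SQL> SELECT type, status FROM v$logstdby_process;
    -- Cumulative SQL Apply statistics (transactions applied, time spent, etc.)
    SQL> SELECT name, value FROM v$logstdby_stats;
    -- Example of changing an apply tunable, e.g. the number of apply servers
    -- (on some releases SQL Apply must be stopped before calling APPLY_SET)
    SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 16);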
