Policy - function_schema is not exported/imported properly

Hi,
I have a schema that contains a policy as follows:
BEGIN
SYS.DBMS_RLS.ADD_POLICY (
object_schema => 'DYAHAV'
,object_name => 'TBL_BASE'
,policy_name => 'MY_VPD_PREDICATE'
,function_schema => NULL
,policy_function => 'MY_VPD_PREDICATE'
,statement_types => 'SELECT,INSERT,UPDATE,DELETE'
,policy_type => dbms_rls.dynamic
,long_predicate => FALSE
,update_check => FALSE
,static_policy => FALSE
,enable => TRUE );
END;
/
After export and import into another schema, the object_schema is updated correctly,
but the function_schema is still DYAHAV.
I really don't know if it's a feature or a bug...
How can I avoid it?
Thanks
dyahav

Try this, but replace the string 'SCHEMA_REQUIRED' with the function schema name you require:
set serveroutput on
declare
  /* Replace the following string with whatever schema name is required for the functional schema */
  l_Schema_name varchar2(30) := 'SCHEMA_REQUIRED';
begin
  begin
    sys.dbms_rls.drop_policy(policy_name => 'MY_VPD_PREDICATE'
                            ,object_name => 'TBL_BASE');
    dbms_output.put_line('Policy, MY_VPD_PREDICATE, dropped');
    exception
      when others then
        dbms_output.put_line('Policy does not exist');
  end;
  sys.dbms_rls.add_policy (object_schema    => 'DYAHAV'
                          ,object_name      => 'TBL_BASE'
                          ,policy_name      => 'MY_VPD_PREDICATE'
                          ,function_schema  => l_Schema_name
                          ,policy_function  => 'MY_VPD_PREDICATE'
                          ,statement_types  => 'SELECT,INSERT,UPDATE,DELETE'
                          ,policy_type      => dbms_rls.dynamic
                          ,long_predicate   => false
                          ,update_check     => false
                          ,static_policy    => false
                          ,enable           => true );
  dbms_output.put_line('Policy, MY_VPD_PREDICATE, created');
  exception
    when others then
      dbms_output.put_line('Error ' || sqlerrm);
end;
/
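
After re-running the block, you can confirm which schema the policy function is now attached to. A minimal check, assuming you can query the DBA_POLICIES view (the function schema is reported in its PF_OWNER column):
select object_owner, object_name, policy_name, pf_owner
from dba_policies
where object_name = 'TBL_BASE';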

Similar Messages

  • Refresh database through exp/imp

    Version 10.2.0.3 on Windows (old DB on Solaris, 8.1.7.0).
    I have to refresh the database data; the number of users/schemas is 400+. The fastest way to do that would be a full exp/imp,
    but first I have to drop the current users cascade. (Is there any command to drop all users in one go? See the sketch at the end of this item.)
    Then I need to validate that all tables/schemas are the same and up to date (any suggestion on validating this efficiently?). I am thinking of checking the full exp logs on the old and new DB, but that can take forever going through thousands of tables manually.

    First, make sure that statistics are properly gathered in both the old and the new DB.
    Next, use the num_rows column in dba_tables. Since statistics are typically gathered by sampling, they will not necessarily match, but they need to be quite close.
    After you are sure that all objects were transferred, you can issue a query to find all tables with an "invalid" number of records.
    The query can look something like this:
    select d.owner, d.table_name, d.num_rows
    from dba_tables d
    where d.owner not in ('SYS','SYSTEM',…)
    and not exists
               (select * from dba_tables@old_db s
                where d.owner = s.owner
                and d.table_name = s.table_name
                and s.num_rows between 0.9*d.num_rows and 1.1*d.num_rows
               );
    You need to take care of some special cases, such as num_rows being NULL, partitioned tables, etc.
    Iordan Iotzov
    http://iiotzov.wordpress.com/
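
    For the "drop all users" sub-question above: there is no single built-in command, but a generated loop is the usual approach. A minimal sketch, assuming the only accounts to keep are the default ones listed here (extend the exclusion list for your environment, and double-check it before running):
    begin
      for u in (select username
                  from dba_users
                 where username not in ('SYS','SYSTEM','OUTLN','DBSNMP','SYSMAN'))
      loop
        -- drop each remaining account together with all of its objects
        execute immediate 'drop user "' || u.username || '" cascade';
      end loop;
    end;
    /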

  • Exp/imp related problem

    Hi all,
    I have one problem in Oracle 9.2.0.8 on RHEL4. I want to use exp/imp to update one table from one server to another.
    Suppose I have one table abc on server A with around 1 million records, and server B has the same number of records; some records have been changed on server A and I want to import those same records to server B. Is there any parameter in IMP for this?
    Please suggest me.

    Hi,
    Kamran Agayev A. wrote:
    user00726 wrote:
    but how would I know which table has been updated or modified?
    Using the MERGE statement, you'll merge two tables. This command will update changed rows and insert missing rows into the second table, making it the same as the first table.
    SQL> create table azar(pid number,sales number,status varchar2(20));
    Table created.
    SQL> create table azar01(pid number,sales number,status varchar2(20));
    Table created.
    SQL> insert into azar01 values(1,12,'CURR');
    1 row created.
    SQL> insert into azar01 values(2,13,'NEW');
    1 row created.
    SQL> insert into azar01 values(3,15,'CURR');
    1 row created.
    SQL> insert into azar values(2,24,'CURR');
    1 row created.
    SQL> insert into azar values(3,0,'OBS');
    1 row created.
    SQL> insert into azar values(4,42,'CURR');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from azar01;
    PID  SALES  STATUS
      1     12  CURR
      2     13  NEW
      3     15  CURR
    SQL> select * from azar;
    PID  SALES  STATUS
      2     24  CURR
      3      0  OBS
      4     42  CURR
    SQL> merge into azar01 a using azar b on (a.pid=b.pid) when matched
    2 then update set a.sales=a.sales + b.sales, a.status=b.status
    3 delete where a.status='OBS'
    4 when not matched
    5 then insert values(b.pid,b.sales,'NEW');
    3 rows merged.
    SQL> select * from azar01;
    PID  SALES  STATUS
      1     12  CURR
      2     37  CURR
      4     42  NEW
    Hello Sir, is this correct?
    Regards
    S.Azar
    DBA

  • Export / import  exp / imp commands Oracle 10gXE on Ubuntu

    I have Oracle 10gXE installed on Linux 2.6.32-28-generic #55-Ubuntu, and I need some help on how to export / import the database with the exp / imp commands. The commands seem to be installed in the */usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin* directory but I cannot execute them.
    The error message I got:
    No command 'exp' found, did you mean:
    Command 'xep' from package 'pvm-examples' (universe)
    Command 'ex' from package 'vim' (main)
    Command 'ex' from package 'nvi' (universe)
    Command 'ex' from package 'vim-nox' (universe)
    Command 'ex' from package 'vim-gnome' (main)
    Command 'ex' from package 'vim-tiny' (main)
    Command 'ex' from package 'vim-gtk' (universe)
    Command 'axp' from package 'axp' (universe)
    Command 'expr' from package 'coreutils' (main)
    Command 'expn' from package 'sendmail-base' (universe)
    Command 'epp' from package 'e16' (universe)
    exp: command not found
    Is there something I have to do?

    Hi,
    You have not set environment variables correctly.
    http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25144/toc.htm#BABDGCHH
    And of course that script has a small hiccup, so see
    http://ubuntuforums.org/showpost.php?p=7838671&postcount=4
    Regards,
    Jari
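
    For what it's worth, a minimal sketch of the usual fix, assuming the default XE 10g layout where oracle_env.sh ships in the same bin directory mentioned above: source the environment script in your shell session, then exp should resolve.
    . /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin/oracle_env.sh
    exp help=y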

  • Exp/Imp content Area Portlet..?

    How to Exp/Imp a "Content area folder published as a portlet" which resides in "WWSBR_SITEBUILDER_PROVIDER"?
    Anybody from Oracle..?
    Thanks.
    Rakesh

    I assume you are using the 3.0.9 version of Portal. It is possible to export an entire content area or a page, but not the granular items. By granular I mean just the folder in this context. If you wish to export/import the entire content area/page, you can run the pageexp/pageimp/contexp/contimp scripts which reside in the WWU directory.

  • Exp/Imp In Oracle 10g client

    Hi All,
    I want to take one schema export from the Oracle 10g client. I am new to Oracle 10g.
    In Oracle 9i, we could use the exp/imp commands for export and import from the client itself.
    I heard that in Oracle 10g, we can use expdp/impdp on the server only. Is there any possibility to take exp/imp from the client also?
    Please help me...
    Cheers,
    Moorthy.GS

    To add to that, regarding expdp from an Oracle client:
    NETWORK_LINK
    Default: none
    Purpose
    Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
    Syntax and Description
    NETWORK_LINK=source_database_link
    The NETWORK_LINK parameter initiates an export using a database link. This means that the system to which the expdp client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data to a dump file set back on the connected system.
    The source_database_link provided must be the name of a database link to an available database. If the database on that instance does not already have a database link, you or your DBA must create one. For more information about the CREATE DATABASE LINK statement, see Oracle Database SQL Reference.
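    For reference, a minimal sketch of creating such a link (the connected user, password, and TNS alias are placeholders, not values from this thread):
    CREATE DATABASE LINK source_database_link
    CONNECT TO hr IDENTIFIED BY hr
    USING 'source_db_alias';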
    If the source database is read-only, then the user on the source database must have a locally managed tablespace assigned as the default temporary tablespace. Otherwise, the job will fail. For further details about this, see the information about creating locally managed temporary tablespaces in the Oracle Database Administrator's Guide.
    Restrictions
    When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be exported (not partitions of tables).
    The only types of database links supported by Data Pump Export are: public, fixed-user, and connected-user. Current-user database links are not supported.
    Example
    The following is an example of using the NETWORK_LINK parameter. The source_database_link would be replaced with the name of a valid database link that must already exist.
    expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link DUMPFILE=network_export.dmp LOGFILE=network_export.log

  • Full Database Exp & Imp

    Hi,
    I am trying to Exp & Imp a full database. I am working on Oracle 9i & 10g on Solaris 9. I am not using Data Pump. Can anyone please help me with the following:
    1. I am performing the full export using SYSTEM user.
    1.a Does a Full export include (or back up) the DATA DICTIONARY of the database?
    1.b Does a Full export include the backup of SYS and SYSTEM objects?
    I am using the following command to export
    exp system/system@testdb file=$HOME/testdbfullexp.dmp full=y statistics=none
    I have tried importing the FULL export into another database and I did see that SYS and SYSTEM objects were also being imported (I got some errors regarding constraints and inconsistencies).
    I would like to ask what the ideal steps are to copy a database from DB1 to DB2 using EXP and IMP.
    Any information will be of great help.
    Thanks,
    Harris.

    1 a) No, as the data dictionary will be automagically recreated by implicit SQL.
    This means any non-dictionary objects under SYS will be lost.
    1 b) As above. SYSTEM, however, is a normal user.
    Any %SYS user will NOT be exported (CTXSYS, MDSYS, etc.).
    On import of SYSTEM there will always be errors, as SYSTEM is non-empty after initial database creation.
    Sybrand Bakker
    Senior Oracle DBA
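
    As a rough sketch of the import side, assuming the target database and its tablespaces have already been created (connect string and file names are placeholders):
    imp system/system@targetdb file=testdbfullexp.dmp full=y ignore=y log=testdbfullimp.log
    The ignore=y flag suppresses the "object already exists" errors that a full import into a freshly created database will otherwise raise.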

  • Full database exp/imp  between RAC  and single database

    Hi Experts,
    We have a RAC database, Oracle 10gR2 with 4 nodes, on Linux. I am trying to duplicate the RAC database into a single-instance Windows database.
    Both databases are the same version. During the import, I needed to create 4 undo tablespaces to keep the imp processing going.
    How can I keep just one undo tablespace in the single-instance database?
    Does anyone have experience of exp/imp of a RAC database into a single-instance database to share with me?
    Thanks
    Jim
    Edited by: user589812 on Nov 13, 2009 10:35 AM

    Jim,
    I also want to know: can we add exclude=tablespace on the impdp command for a full database exp/imp?
    You can't use exclude=tablespace with exp/imp. It is for Data Pump expdp/impdp only.
    I am very interested in your recommendation.
    But for a full database impdp, how do I exclude a table during the full database import? May I have an example for this case?
    I used expdp for the full database export, but I got an error in the expdp log: ORA-31679: Table data object "SALE"."TOAD_PLAN_TABLE" has long columns, and longs can not be loaded/unloaded using a network link.
    Having long columns in a table means that it can't be exported/imported over a network link. To exclude this table, you can use the exclude expression:
    expdp user/password exclude=TABLE:"= 'SALES'" ...
    This will exclude all tables named SALES. If you have that table in schema scott and then in schema blake, it will exclude both of them. The error that you are getting is not a fatal error, but that table will not be exported/imported.
    The final message was:
    Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
    Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
    F:\ORACLEBACKUP\SALEFULL091113.DMP
    Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at 16:50:26
    Yes, the fact that it did not export one table does not make the job fail; it will continue exporting all other objects.
    I dropped the database that generated the expdp dump file,
    recreated a blank database, and then ran impdp again.
    But I got lots of errors, such as:
    ORA-39151: Table "SYSMAN"."MGMT_ARU_OUI_COMPONENTS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    ORA-39151: Table "SYSMAN"."MGMT_BUG_ADVISORY" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    ......
    ORA-31684: Object type TYPE_BODY:"SYSMAN"."MGMT_THRESHOLD" already exists
    ORA-39111: Dependent object type TRIGGER:"SYSMAN"."SEV_ANNOTATION_INSERT_TR" skipped, base object type VIEW:"SYSMAN"."MGMT_SEVERITY_ANNOTATION" already exists
    and the last line was:
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 2581 error(s) at 11:54:57
    Yes, even though you think you have an empty database, if you have installed any apps or anything, they may create tables that also exist in your dumpfile. If you know that you want the tables from the dumpfile and not the existing ones in the database, then you can use this on the impdp command:
    impdp user/password table_exists_action=replace ...
    If a table that is being imported exists, Data Pump will detect this, drop the table, then create it. Then all of the dependent objects will be created. If you don't, the table and all of its dependent objects will be skipped (which is the default).
    There are 4 options with table_exists_action
    replace - I described above
    skip - default, means skip the table and dependent objects like indexes, index statistics, table statistics, etc
    append - keep the existing table and append the data to it, but skip dependent objects
    truncate - truncate the existing table and add the data from the dumpfile, but skip dependent objects.
    Hope this helps.
    Dean

  • Running OMBPlus and EXP/IMP in mixed version environment

    OWB Mixed Environment Gurus,
    Current environment:
    OWB Client: 10.1.0.2.0 on Windows XP Professional
    OWB Server side: 10.1.0.2.0 on UNIX (AIX 5.2)
    Repository: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    UNIX Listener: 9.2.0.4 on UNIX (AIX 5.2)
    Runtime Repository: Oracle 9.2.0.4 on UNIX (AIX 5.2)
    I call this a mixed environment since my OWB stuff is 10g and my database stuff is 9.2.
    Issues:
    1- I can't get the command line exp.sh script to connect to the repository; it returns the famous 'ORA-12154, TNS:listener does not currently know of service requested in connect descriptor'. It looks like the 'owbsetenv.sh' script is changing the value of $ORACLE_HOME to point to the 10g areas. Could that then be causing the system to look for a 10g LISTENER, which doesn't exist since all my databases are 9.2.0.4?
    2- I have the same issue trying to run OMBPlus.sh.
    I am ultimately trying to set up a promotion process using the UNIX command line programs (exp/imp and OMBPlus) to get objects from the TEST environment into the PRODUCTION environment which is a separate repository and target schema on a different machine.
    Any advice on how to successfully operate in this 'mixed' environment is most welcomed.
    Many thanks!
    Gary

    Well it looks like I did it again!
    Total brain fart.
    The problem turned out that I wasn't specifying the entire SERVICE_NAME for the repository database. I had been leaving off the domain information. Must be a habit from not having to use it in the TNSNAMES.ORA files.
    I was able to complete my test export and connect to OMBPlus and will now try my test import.
    Sorry to clutter the forum but if it helps anyone else with the same affliction I seem to have frequently, I guess that's a small reward.
    Until next time.
    Gary

  • Best way of using exp/imp

    Dear all,
    I want to migrate a database from 8i to 11g (8.1.5 to 11.1.0). I am going for the exp/imp method. Which is the best method of doing this task? I mean full export and import, or schema-wise export and import? Is there any chance of missing objects or rows while doing this task? If yes, how do I avoid it? Please help me take the best decision. I don't want any problems after migration.
    The approach is: (take exp of 8.1.5 and imp into 9.2.0), then (exp of 9.2.0 and imp into 11.1.0.6).
    OS is HP-UX.
    Nishant Santhan

    Have you not yet completed this task? We already answered your question a couple of days back.
    Take a look at the similar duplicate thread created by you:
    Re: Migrating from 8i to 11g
    Regards,
    Sabdar Syed.

  • Exp/imp of tablespace

    hi all,
    could we use exp/imp of a tablespace on the same database?
    Say we have a list of objects in a tablespace and we want to shrink the tablespace size (as there are not many transactions). Can we take an export of this tablespace only, drop the tablespace, create a new tablespace with the same name but a smaller size, and import the objects back into it from the export dump file?
    thanks
    kedar

    So basically you want to shrink the tablespace but you've got objects scattered around in there and it won't coalesce.
    There are several ways to do this, not least the one you've mentioned.
    An alternative would be to rebuild the objects into a new (smaller) tablespace. If you haven't got the disk space to accommodate both the new tablespace and the existing one then, yes, export/import will do the job.
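
    Before exporting, it helps to list exactly which objects live in the tablespace, so nothing is missed on the way back in. A minimal sketch, with the tablespace name as a placeholder:
    select owner, segment_name, segment_type
    from dba_segments
    where tablespace_name = 'MY_TBS'
    order by owner, segment_name;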

  • Migrating from 9i to 10g through exp/imp

    Hi,
    We need to migrate a database on 9i on HP-UX to 10g on IBM AIX. Can you please tell me whether I can export data from 9i with the 9i export utility and import it into 10g using the 10g Data Pump utility for a faster import?
    We don't have the 10g software installed on the HP-UX platform.
    And what other alternatives do we have to reduce the amount of time for the migration process, as it's a critical production database?
    Thanks,
    Jayanta

    I believe that Justin is correct: if you use exp to unload the data, you will need to use imp, not impdp, to reload it.
    To speed up the cross-platform exp/imp process I suggest you consider running multiple concurrent exp/imp jobs. You can generate a tables= list into a spool file using SQL (see the sketch below). You can give each large table its own exp job and bunch the small tables together.
    Depending on how your database objects are organized, you might also be able to export by owner, or a combination of owner and tables= exports.
    You can use full=y with the rows=n option to grab the public synonyms, non-owning users, packages, etc. not brought in by the prior exp/imp.
    HTH -- Mark D Powell --
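
    A minimal sketch of generating such a list, assuming the application schema is called APP_OWNER (the owner and the ordering are placeholders to adapt):
    set pagesize 0 feedback off verify off
    spool exp_tables.lst
    select owner || '.' || table_name
    from dba_tables
    where owner = 'APP_OWNER'
    order by num_rows desc nulls last;
    spool off
    The spooled file can then be split into tables=(...) lists for the individual exp jobs.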

  • Exp/imp

    Hi
    Regarding the schema level export/import.
    If I take an export of one schema and import it with standard exp/imp, I find that the user is not created and I have to create the user prior to import.
    However, if I take an export of one schema with Data Pump and import it with impdp,
    I find that the user is created as well, and I don't need to create the user prior to import.
    What is the reason for this?

    The likely reason: a schema-mode Data Pump export taken by a privileged user includes the user definition, so impdp can recreate the account, while classic owner-mode exp does not, so imp expects the user to already exist. If you don't want to create the user prior to import, then drop only the objects of that schema (rather than the user) before the refresh starts.
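
    A minimal sketch of the contrast (credentials, directory object, and schema name are placeholders): the owner-mode exp dump expects the user to exist at import time, while the schema-mode Data Pump dump carries the user definition so impdp can recreate it.
    exp system/manager owner=SCOTT file=scott.dmp log=scott_exp.log
    expdp system/manager schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_expdp.log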

  • Exp/Import database schema vs Ch 10 Exp/Imp Content

    Hi I'm using Portal 10.1.2.02. I'm a DBA tasked with migrating a Dev portal into Test, then Production.
    There seems to be a number of ways to achieve this and I was hoping to clarify which is best for me.
    I've run through note 330391.1 (copying the Portal schema), which details running a perl script to export and import onto a newly installed server. This process worked well.
    Chapter 10 of the Portal guide details a fairly complex process of creating transport sets, migrating these over, and importing.
    My question is: if I want the entire Portal copied is there any difference in these processes? ie. Do you end up with the same result?
    thanks in advance of any advice :-)

    The cloning model is not a rerunnable model; it's not granular, and conditional migration is not possible.
    You can do the cloning only if you want to take a copy of the entire setup and rewire it to a new midtier.
    The Portal exp/imp model helps you achieve a granular, conditional, and rerunnable method of moving Portal objects.
    Apart from that, it comes with readily available prechecks to indicate what's going wrong during the process.

  • Migrate Oracle 9.2.0.7 DB (8 TB) from Solaris to AIX? Don't want exp/imp

    Hi guys, please help.
    I want to migrate an 8TB Oracle database from Solaris 8 to AIX 5.
    In my last post on the same topic I was told to refer to Metalink notes
    291024.1, Note:77523.1, Note:277650.1.
    According to these notes, 'EXPORT/IMPORT IS THE ONLY OPTION TO MIGRATE FROM SOLARIS TO AIX'.
    I was not convinced, as in http://dba.ipbhost.com/index.php?showtopic=9523
    I read
    "As both Solaris and AIX are UNIX O/S, cloning the DB is also possible from source box to target box".
    Also,
    what's the role of "if the endianness is the same"?
    Can you guys please comment on this again, as I'm really confused, because exp/imp of 8TB
    is going to take half of my life :)
    Please tell me any other option in 9.2.0.7, if we have one.
    Thanks

    You can do
    SELECT platform_name, endian_format
    FROM v$transportable_platform;
    to check the endianness of different platforms.
    -- Note, the view exists only in 10g and above.
    Datafiles from a different endian format can't be directly copied over and used. In 10g you have the option of using RMAN to convert the endianness.
    With that said, since Solaris and AIX are both big-endian, you can directly copy the datafiles over and clone the database without using exp/imp.
    this article has a list of how to clone
    http://www.dba-oracle.com/oracle_tips_db_copy.htm
