DBID for full import

Hi,
I will be doing a full database export/import using Data Pump, from 64-bit Linux to 32-bit Linux.
I want the DBID to remain the same in the target database.
Will the DBID remain the same?
Thanks

Hi,
As far as I'm aware you can't choose the DBID; you can force it to be changed, but not to a value of your choosing. Your only possible solution would be to restore the live database backup to the test box without using an RMAN clone, i.e. either an RMAN restore as if the test box were the live box, or a straight copy of the files taken while live is in backup mode. This would give you a complete copy of live with the same DBID. I'm not 100% sure whether the DBID will change if you just rename the database manually, so something like
create controlfile reuse set database "NEWDBNAME" ...;
might work, allowing you to rename the database from the restored copy of live but keep the DBID.
Generally, though, you don't want two databases with the same DBID; it's asking for trouble. I'd be looking to change any application logic that is checking it.
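A quick sanity check (an addition, not part of the original reply, assuming SQL*Plus access on both systems) is to compare the DBID before and after the copy/rename:

```sql
-- Run on both source and target; if the values match, the DBID survived the move.
SELECT dbid, name FROM v$database;
```

Oracle also ships the DBNEWID utility (nid); run with SETNAME=YES it renames a database while leaving the DBID unchanged, which may be worth investigating for the manual-rename route.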
Cheers,
Harry

Similar Messages

  • Best choice for exporting / importing EUL

    Hi all
    I have been tasked with migrating an EUL from an R11 to an R12 environment. The Discoverer version in both environments is 10.1.2, and the OS is Solaris on Oracle databases.
    I am unfortunately not experienced with Discoverer, and there seems to be no one available to assist for various reasons. So I have been reading the manual and forum posts and viewing Metalink articles.
    I tried exporting the entire EUL via the wizard and then importing it into the new environment, but I was not successful: the system hung for many hours with a white screen, and the log file just ended.
    I assumed this delay was caused by a memory problem or slow network issues. Someone suggested I export/import the EUL in pieces, and this seemed to be effective, but I got missing-item warnings when trying to open reports. This piecemeal approach also worried me regarding consistency.
    So I decided to try the full import on the server, to try to negate the first problem I experienced. Due to the client's security policies I am not able to open the source EUL and send it to our dev. I was able to get it from their dev R11 system, but I dismissed this, as the dev reports were not working and the only reliable EUL is the Prod one. I managed to get a Prod .eex file from a client resource, but the upload to my server was extremely slow.
    I asked the DBA to assist with the third option: exporting a database dump of EUL_US and importing it into my R12 dev environment. I managed this, but had to export the db file using SYS, which alleviated a privilege problem when logging in. I have reports that run, and my user can see the reports, but reports that were not shared to SYSADMIN in the source environment are now prefixed with the version 11 user_id in my Desktop, and the user cannot see her reports, only the SYSADMIN ones.
    I refreshed the BAs using a shell script I made up, which uses the java command with parameters.
    After some re-reading, I tried selecting all the options in the Validate menu and refreshing in the Discoverer Admin tool.
    If I validate and refresh the BA using the Admin tool, I get the hanging screen and a lot of warnings that items are missing (so much for my java command refresh!), and now the report will not open and I see the substitute-missing-item dialogue boxes.
    My question to the forum is: which would be the best approach to migrate the entire EUL from an R11 instance to an R12 instance in these circumstances?
    Many thanks
    Regards
    Nick

    Hi Srini
    The os and db details are as follows:
    Source:
    eBus 11.5.2
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    SunOS 5.10 Generic_142900-11 sun4u sparc SUNW,Sun-Fire-V890
    Target:
    ebus 12.1.2
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production DEV12
    SunOS 5.10 Generic_142900-11 sun4u sparc SUNW,Sun-Fire-V890
    Yes, the DBA initially did an exp for me using EUL_US as the owner, but a strange thing happened with privileges, and some of the imported tables appeared in the target environment under the APPS schema (21 tables) even though the EUL_US export had contained 48 tables.
    I also had a problem on the database with "eul_us has insufficient privileges on tablespace discoverer" type errors.
    I checked the EUL_US database privileges and was unable to resolve this initial privilege error, even though the privileges were granted to EUL_US.
    The DBA managed to export as SYSTEM and then import it with the full=y flag in the import command, which seems to bring in the privileges.
    Then I ran the eul5_id.sql and then made up a list of the business areas and made a sh script to refresh the business areas as follows:
    java -jar eulbuilder.jar -connect sysadmin/oracle1@dev -apps_user -apps_responsibility "System Administrator" -refresh_business_area "ABM Activities" -log refresh.log
    This runs successfully and I can log in select business area and grant access to the users. The reports return data.
    Then one of the users said she can't see all her reports. If I opened Desktop, I noticed some reports sitting there prefixed with a hash and her version 11 user id.
    So back to the manuals: in the Discoverer Admin help, the instructions are to first go to View > Validate > select all options, then go to the business area and click File > Refresh. This gives me a lot of warnings about items that are missing. I assume this is because the item identifiers brought across in the database dump are the version 11 ones and thus not found in the new system.
    Any suggestions?
    Many thanks
    Nick

  • Getting app-store-import-exception with FIM-MA Full Imports SQL Timeouts

    Hi,
    Any idea whats going on here? Full Import on FIM-MA leads to app-store-import-exception.
    Here is a quick profile of the situation:
    FIM Sync, FIM Service & FIM Portal are all installed on the same machine with 24 GB RAM, hosted as a virtual machine.
    FIM-MA Full Import leads to app-store-import-exception
    Event log reports "Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding"
    Here is a screen shot: http://www.freeimagehosting.net/image.php?e5570db7f6.jpg
    As you can see, FIM-MA imported close to 118500 objects before timing out.
    Here are the things I have tried:
    - Extended the FIM timeout on the services, as per Darryl's blog
    - Provided the FIM MA's password, as per Ahmed's suggestion
    - Restricted SQL Server's max RAM to 15 GB (the machine has 24 GB), as per David Lundell's suggestion
    Thanks for your help in advance.
    Thanks,
    Here is the Application Event Log
    Log Name: Application
    Source: FIMSynchronizationService
    Date: 1/6/2011 11:29:00 AM
    Event ID: 6500
    Task Category: None
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: fimsrv.fimdev.pvt
    Description:
    The description for Event ID 6500 from source FIMSynchronizationService cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    There is an error executing ILM MA full import.
    Type: System.Data.SqlClient.SqlException
    Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
    Stack Trace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
    at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
    at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
    at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
    at System.Data.SqlClient.SqlDataReader.get_MetaData()
    at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
    at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
    at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
    at System.Data.SqlClient.SqlCommand.ExecuteReader()
    at Microsoft.ResourceManagement.Data.Sync.FullImportGetNext(Int64 beginObjectKey, Int64 maxObjectKey, Int32 batchSize)
    at MIIS.ManagementAgent.RavenMA.FullImportGetNextBatch(Int64 maxObjectKey, Int32 batchSize)
    the message resource is present but the message is not found in the string/message table
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    <System>
    <Provider Name="FIMSynchronizationService" />
    <EventID Qualifiers="0">6500</EventID>
    <Level>2</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2011-01-06T17:29:00.000000000Z" />
    <EventRecordID>1582399</EventRecordID>
    <Channel>Application</Channel>
    <Computer>fimsrv.fimdev.pvt</Computer>
    <Security />
    </System>
    <EventData>
    <Data>There is an error executing ILM MA full import.
    Type: System.Data.SqlClient.SqlException
    Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
    Stack Trace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
    at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
    at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
    at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
    at System.Data.SqlClient.SqlDataReader.get_MetaData()
    at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
    at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
    at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
    at System.Data.SqlClient.SqlCommand.ExecuteReader()
    at Microsoft.ResourceManagement.Data.Sync.FullImportGetNext(Int64 beginObjectKey, Int64 maxObjectKey, Int32 batchSize)
    at MIIS.ManagementAgent.RavenMA.FullImportGetNextBatch(Int64 maxObjectKey, Int32 batchSize)</Data>
    </EventData>
    </Event>
    Thanks,
    Jameel Syed |
    Identity & Security Strategist | [email protected] |
    Simplified Identity and Access Management

    That is unusual. I suggest checking database integrity and performing index maintenance on both the FIMService and FIMSync databases.
    You could also increase the following timeout (see the link below for more settings).
    The values in Table 17 are located in the registry key SOFTWARE\Microsoft\Forefront Identity Manager\2010\Synchronization Service:
    Table 17
    Registry value name: ReadTimeOut
    Value type: <dword>
    Class: HKLM
    Created by: Admin
    Notes: The default value is 58, specified in seconds.
    Note: Only used by the management agent for FIM (FIM MA) for reading from the FIM Service database.
    http://technet.microsoft.com/en-us/library/ff800821(WS.10).aspx
    David Lundell, Get your copy of FIM Best Practices Volume 1 http://blog.ilmbestpractices.com/2010/08/book-is-here-fim-best-practices-volume.html

  • What are the best dimensions to allow for iPad 3 Retina for full-page landscape images?

    What are the best dimensions to allow for iPad 3 Retina for full-page landscape images?
    I read an article (which I cannot find now) that said something about trying to make it 2 million pixels and JPG to keep it small, but what exactly should the dimensions be that I ultimately import? Is there some kind of a "density" setting I have to use as well when exporting from Photoshop, for example?

    See Optimizing performance in your iBooks Author books
    Density is discussed by Apple in this context as 'dpi'...
    132 - iPad 2
    264 - new iPad
    At the top of this forum there are popular links listed on the right, including image sizing etc. Be sure to study those existing/previous threads on this topic.
    Good luck.
    Ken

  • What is the best practice for full browser video to achieve the highest quality?

    I'd like to get your thoughts on the best way to deliver full-browser (scale to the size of the browser window) video. I'm skilled in the creation of the content but learning to make the most out of Flash CS5 and would love to hear what you would suggest.
    Most of the tutorials I can find on full browser/scalable video are for earlier versions of Flash; what is the best practice today? Best resolution/format for the video?
    If there is an Adobe guide to this I'm happy to eat humble pie if someone can redirect me to it; I'm using CS5 Production Premium.
    I like the full screen video effect they have on the "Sounds of Pertussis" website; this is exactly what I'm trying to create, but I'm not sure what the best way to approach it is - any hints/tips you can offer would be great!
    Thanks in advance!

    Use the little squares over your video to mask the quality. Sounds of Pertussis is not full screen video, but rather full stage, which is easier to work with since all the controls and other assets stay on screen. You set up your HTML file to allow full screen, then bring in your video (NetStream or FLVPlayback component) and scale it to the full size of your stage (since in this case it's basically the background). I made a quickie demo here. (The video is from a cheapo SD consumer camera, so pretty poor quality to start.)
    In AS3 it would look something like:
    import flash.display.StageAlign;
    import flash.display.StageDisplayState;
    import flash.display.StageScaleMode;
    import flash.events.Event;
    import flash.events.MouseEvent;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;
    stage.align = StageAlign.TOP_LEFT;
    stage.scaleMode = StageScaleMode.NO_SCALE;
    // load video
    var nc:NetConnection = new NetConnection();
    nc.connect(null);
    var ns:NetStream = new NetStream(nc);
    var vid:Video = new Video(656, 480); // native size of the video
    this.addChildAt(vid, 0);
    vid.attachNetStream(ns);
    // path to your video file
    ns.play("content/GS.f4v");
    var netClient:Object = new Object();
    ns.client = netClient;
    // add a listener for resizing of the stage so we can scale our assets
    stage.addEventListener(Event.RESIZE, resizeHandler);
    stage.dispatchEvent(new Event(Event.RESIZE));
    function resizeHandler(e:Event = null):void
    {
        // determine current stage size
        var sw:int = stage.stageWidth;
        var sh:int = stage.stageHeight;
        // scale video size depending on stage size
        vid.width = sw;
        vid.height = sh;
        // don't scale the video smaller than a certain size
        if (vid.height < 480)
            vid.height = 480;
        if (vid.width < 656)
            vid.width = 656;
        // match the smaller scale property (x or y) to the larger one so the size stays proportional
        (vid.scaleX > vid.scaleY) ? vid.scaleY = vid.scaleX : vid.scaleX = vid.scaleY;
    }
    // add an event listener for the full screen button
    fullScreenStage_mc.buttonMode = true;
    fullScreenStage_mc.mouseChildren = false;
    fullScreenStage_mc.addEventListener(MouseEvent.CLICK, goFullStage, false, 0, true);
    function goFullStage(event:MouseEvent):void
    {
        //vid.fullScreenTakeOver = false; // keeps the FLVPlayback component from taking over full screen if you use it instead
        if (stage.displayState == StageDisplayState.NORMAL)
            stage.displayState = StageDisplayState.FULL_SCREEN;
        else
            stage.displayState = StageDisplayState.NORMAL;
    }

  • Full Import from 8.1.5 to 9.2 errors

    Hi Guys
    We are running WinNT SP6. I created a 9.2 database and carried out a full import from an 8.1.5 database with no problems.
    When I try to do a full import again into the 9.2 database, I get:
    IMP-00019: row rejected due to ORACLE error 1
    IMP-00003: Oracle error 1 encountered
    ORA-00001: unique constraint (gs.table_name) violated
    I'm using FULL=Y IGNORE=Y DESTROY=N (as I did with the original successful full import into the same 9.2 database)
    I just want to be able to update the entire database with a more recent version of the data on the 8.1.5 database.
    What do I need to set to avoid these errors and import the new dmp file successfully?
    Can I just re-run the full import again on top of the 9.2 database after the unsuccessful full import?
    Many thanks in advance for any speedy responses!
    Cheers
    Ciaran

    The unique constraint violation happens because you have data in your 9.2 database. You are attempting to insert the same records again from your new export.
    To refresh the data in your 9.2 instance with more recent data from the 8.1 instance, you will need to truncate (or drop) all of the tables in the 9.2 instance before doing your import. I would also suggest that if you truncate the tables, you also drop all indexes, constraints and triggers. These can be re-built by the import, or manually from scripts after the import.
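    A minimal sketch of generating those TRUNCATE statements from the data dictionary (the schema name GS is only an example taken from the error message above):

    ```sql
    -- Spool this output to a script, review it, then run it before the import.
    SELECT 'TRUNCATE TABLE ' || owner || '.' || table_name || ';'
    FROM dba_tables
    WHERE owner = 'GS'
    ORDER BY table_name;
    ```

    Dropping indexes, constraints and triggers first, as suggested, also avoids slow constraint checking while the rows are re-inserted.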
    TTFN
    John

  • Event 109 with full import from cloud

    After installing the AAD Connection tool, we find that the step which does a full import from the cloud fails with stopped-extension-dll. This generates three error entries in Event Viewer: 109, 6801 and 6803.
    109 -
    Failure while importing entries from Windows Azure Active Directory. Exception: System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list.
    6801 -
    The extensible extension returned an unsupported error.
    The stack trace is:
    "System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list.
    6803 -
    The management agent "xxxxx.onmicrosoft.com - AAD" failed on run profile "Full Import" because the server encountered errors.
    Research suggests an expired cloud account password, but I have verified the password is valid. I have also reinstalled/reconfigured the product several times.
    I need advice on getting the Full Import to work.
    Adam

    This worked for me:
    In the Synchronization Service Manager tool, open the properties of the Azure connector, go to Select Object Types, and tick the "device" object type to add it to the selected objects. Then start the Delta or Full Import again.
    If there is any device entry in Azure, for whatever reason, this is currently the only way to sync with the AAD Sync tool.
    Explanation:
    The import fails because the online tenant contains objects of type "device", and the design of the Azure connector does not include this object type. Unfortunately, instead of skipping them, the connector breaks with this error message. The current solution is to add the object type "device" to the Azure connector.
    In a later version of AADSync this will probably be revised, so that the object type "device" no longer has to be selected manually and the connector no longer ends with this error message.
    good luck

  • Oracle automatic statistics optimizer job is not running after full import

    Hi All,
    I did a full import into our QA database. The import was successful; however, GATHER_STATS_JOB has not run since Sep 18, 2010, even though it is enabled and scheduled. I queried LAST_ANALYZED to check, and it is confirmed that it did not run after Sep 18, 2010.
    Please refer below for the output
    OWNER  JOB_NAME          ENABL  STATE      LAST_START_DATE
    SYS    GATHER_STATS_JOB  TRUE   SCHEDULED  18-09-2010 06:00:02
    Comments: Oracle defined automatic optimizer statistics collection job
    =======
    SQL> select OWNER, JOB_NAME, STATUS, REQ_START_DATE,
      2         to_char(ACTUAL_START_DATE, 'dd-mm-yyyy HH24:MI:SS') ACTUAL_START_DATE, RUN_DURATION
      3  from dba_scheduler_job_run_details
      4  where job_name = 'GATHER_STATS_JOB'
      5  order by ACTUAL_START_DATE asc;
    OWNER  JOB_NAME          STATUS     ACTUAL_START_DATE    RUN_DURATION
    SYS    GATHER_STATS_JOB  SUCCEEDED  16-09-2010 22:00:00  +000 00:00:22
    SYS    GATHER_STATS_JOB  SUCCEEDED  17-09-2010 22:00:02  +000 00:00:18
    SYS    GATHER_STATS_JOB  SUCCEEDED  18-09-2010 06:00:02  +000 00:00:26
    What could be the reason for the GATHER_STATS_JOB job not running, although it is set to auto?
    SQL> select dbms_stats.get_param('AUTOSTATS_TARGET') from dual;
    DBMS_STATS.GET_PARAM('AUTOSTATS_TARGET')
    AUTO
    Does anybody have this kind of experience? Please share.
    Appreciate your responses.
    Regards
    srh

    So basically you are saying that if none of the tables have changed then GATHER_STATS_JOB will not run. But I see that tables are updated, and still the job is not running. I did query dba_scheduler_jobs, and the job is enabled and scheduled. Please see my previous post for the output.
    Am I missing anything here? Should I look at some parameter settings?
    GATHER_STATS_JOB will run, and if there is any table with at least a 10 percent change in its data, it will gather statistics on that table. If no table's data has changed by 10 percent or more, it will not gather statistics.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41282
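    One way to see how much change Oracle has actually tracked (an addition, not part of the original reply; the schema name is only an example) is the DML monitoring view. The in-memory counters are flushed to the view periodically, so flush them first:

    ```sql
    -- Flush in-memory DML monitoring info, then inspect tracked changes per table
    EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

    SELECT table_owner, table_name, inserts, updates, deletes
    FROM dba_tab_modifications
    WHERE table_owner = 'SCOTT';
    ```

    If the counters stay well below 10 percent of each table's row count, the job legitimately has nothing to do.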
    Hope this helps.
    -Anantha

  • Full import error : "user does not exists"

    Hi all,
    I had 5 separate Windows-based servers (32-bit) with Oracle Database 9i. I decided to centralize them into one server, so I configured a new Linux server (64-bit) and installed Oracle 10gR2. Then I configured a starter database with dbca. I prepared a full dump file from one of the old servers and successfully full-imported it into the starter database on the new Linux (10g) server. After that I configured another database using dbca. But when I wanted to full-import the same dump file into the new database, I got the error "user does not exist" regarding some demo schemas, and the import terminated unsuccessfully.
    I decided to create the users named in the errors and restart the import, but after each user creation I got a new error for another user account.
    My main question is: why did the first import terminate successfully without any error, but the second one, into the new database, raised errors?
    Thank
    Iman
    Edited by: Iman.Jam on Aug 25, 2009 9:38 AM

    Hi yingkuan,
    imp full=y file=.... log=..... userid=system/manager@orcl
    ALTER SESSION SET CURRENT_USER = "HR"
    ORA-....: user does not exist
    And I know that import will create users that do not exist. Actually, all my specified schemas are created, but some of the accounts are not.
    Regards,
    Iman
    Edited by: Iman.Jam on Aug 25, 2009 9:59 AM

  • Best Practice for full motion

    I'm new to this whole publishing-video-to-the-web thing, and I'm quickly learning that a full motion recording is a bit large to deliver via the web. So I guess I'm asking for advice on how to show/demonstrate a full motion action in the most efficient manner possible. When I import the SWF created by Captivate into Flash 8, it becomes blocky (black pixel-looking blocks) and does not export in a usable fashion. I think it's from changing the frame rate from 30 to 10.
    Is it possible to record at a lower frame rate for full motion? If not, is there a better compression scheme I can use to better deliver the video?
    Any help is greatly appreciated. Thank you!

    http://www.macromedia.com/devnet/flash/articles/flash_to_video.html
    ~~~~~~~~~~~~~~~~
    --> Adobe Certified Expert
    --> www.mudbubble.com
    --> www.keyframer.com
    ~~~~~~~~~~~~~~~~
    shadeland wrote:
    > As I start my venture into a web cartoon, I am having issues with keeping
    > sounds and video together for a video. I have tried almost everything, but
    > I would just like to know what the best practice is for getting a cartoon
    > to full motion video. I am very good with actionscripting, writing games
    > and apps, but I am struggling to get my content into full motion video
    > with audio. Please help!

  • DBMS_CRYPTO grant missing after a full import, is this expected?

    Folks,
    We just did a full export of a production database using expdp.
    $ expdp directory=DATA_PUMP_DIR dumpfile=Full.dmp logfile=Full.log metrics=Y full=Y
    We then did an import on another system using impdp.
    $ impdp directory=DATA_PUMP_DIR dumpfile=Full.dmp
    Both DBs are 11gR2. 11.2.0.1 on the source system, 11.2.0.3 on the target system.
    While testing one of our applications, we noticed that a couple of its functions didn't compile. Those functions referenced DBMS_CRYPTO.
    Further testing revealed that our main application user was missing grant execute privs on DBMS_CRYPTO. After granting execute on DBMS_CRYPTO to the application user, everything worked fine.
    All other grants (and there are a lot of them) came across just fine (as far as we can tell so far).
    Is there a reason why a grant on DBMS_CRYPTO would get 'missed' during an import?
    Thanks,
    Rich

    For a schema-level import, some privileges can go missing; check:
    Missing Object Level Grants After Data Pump Schema Level Import [ID 795784.1]
    But your case is a full import. Can you generate a SQL file with only the grants and check whether that privilege is in the dump file:
    impdp directory=DATA_PUMP_DIR dumpfile=Full.dmp full=y include=GRANT sqlfile=grants.sql

  • Need help for full db exp-imp

    Hi,
    My database has undo segment corruption. I have considered and tried a lot of things to get out of the situation, but didn't get any positive result. So I have decided to take a full database export, rename the database, create a new database on the same system, and import the full database into the new database. Will you please tell me the steps to do this (full database import)? Please include sensible syntax for the full database export/import.
    Thanks a lot

    If your only identified problem was sudden undo corruption to an online instance and you followed the steps referenced on the Burleson site (switch to manual undo management, create a new undo tablespace, drop the old undo tablespace, and switch back to automatic undo management) and then the very next day encountered undo corruption again, I would recommend two simultaneous courses of action:
    1) Get the hardware analyzed, this could be a signal of hard drive or controller problems.
    2) Scour metalink for any bug references that may be related (or just go ahead and open an iTar).
    If you persist in importing to create the database back on this same system and if you must use the same storage, I would at least try to get the file system or volume that the corruption was on taken offline and recreated.
    Just an alternate view of things ... good luck!

  • Connector Space Object don't match Full Import Object Count

    Just want to clear something up.
    I have an HR system that removed about 100 users. My anchor attribute is Employee_Person_ID (e.g. 12345). My join rule is EmployeeID (e.g. 123456789).
    So HR removed the records, and if I do a Full Import I see the correct number of objects, e.g. 1000. I confirm this by doing a database query, and I cannot find the deleted objects. But if I do a search on the connector space objects, I get 1100 user records.
    Problem 1: I still see the old objects in the connector space, and they still have a connector to the MV object.
    Problem 2: HR re-added the users with new records. The EmployeeID stays the same (e.g. 123456789), but the Employee_Person_ID changed (e.g. to 54321).
    Now I get ambiguous-import-flow-from-multiple-connectors errors, caused by Problem 1.

    I agree with your statements, and this is what I have also seen in other environments.
    The object must show up as a connector (after an import) - Yes, there is a connector.
    When you run a sync, the object is removed - No, this is not happening. The CS object count does not go down, and no deletion is shown.
    When the object is removed, the link to the metaverse object is removed - No, this is not happening. The object is still present in the CS and still has a connector to the MV object.
    The removal of the link triggers the object deletion rule on the metaverse object - No, I cannot see this happening.
    Just making sure, are you saying that the "deleted object" still shows up as a connector AFTER running a sync? Yes!
    Summary:
    Record ABC, EmployeeNo 1234567: original record with a connector to the MV object. This record gets deleted (confirmed with a direct DB query), but FIM does not show an object delete on the MA, and a CS search still shows the object.
    Record DEF, EmployeeNo 1234567: new record arrives with an ambiguous-import-flow-from-multiple-connectors error. The only way I can get rid of the error for now is to make the old record an explicit disconnector. Even after this, the old record still shows up in the CS search.

  • Restoring all Users after migration to new environment without Full Import

    Hello all,
    We are migrating our database from Windows to Sun Solaris.
    We are in a situation where we have lots of created users who do their transactions on other users' tables. I don't want to do a full import and let Oracle do all the work for me, because we will change the structure of the tablespaces and the default tablespaces for users.
    Now we come to the problem:
    We want to recreate all the users without asking everyone to change his password, as we would have to if we set a default password for all of them.
    I want your opinion on whether the following will work:
    1- I create the new tablespaces.
    2- I create the main users and assign their default tablespaces.
    3- On the old database I create a table:
    create table user_list as select * from dba_users;
    4- I import the needed main users.
    5- I import the user_list table into the SYSTEM account.
    6- I write a procedure with a cursor on user_list, then create the users while looping on that cursor, with any password.
    7- I write a procedure with a cursor on SYS.USER$, then:
    alter user User_Name identified by values 'F894844C34402B67';
    Here the hashed value will be selected from the user_list table where the names match in both tables.
    What do you think about the above steps, especially No. 7?
    Any other suggestions on how to do the work?
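    A minimal PL/SQL sketch of steps 6 and 7 combined, assuming user_list was imported as described (the temporary password and the exclusion of already-existing users are assumptions, not from the original post):

    ```sql
    -- Recreate each exported user, then restore the original password hash
    -- so nobody has to change their password.
    BEGIN
      FOR u IN (SELECT username, password
                FROM user_list
                WHERE username NOT IN (SELECT username FROM dba_users)) LOOP
        EXECUTE IMMEDIATE 'create user "' || u.username || '" identified by temp_pw_1';
        EXECUTE IMMEDIATE 'alter user "' || u.username ||
                          '" identified by values ''' || u.password || '''';
      END LOOP;
    END;
    /
    ```

    In 9i the hash is visible in dba_users.password, so cursoring over user_list avoids touching SYS.USER$ directly.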
    Best Regards,
    Naji Ghanim

    Hi ,
    Spool the output of the create-users script and run it in the new db.
    select
      'PROMPT Create User '||username||'...' text,
      'create user "'||username||'" identified '||
      decode(PASSWORD, 'EXTERNAL', 'externally ', 'by values '''||PASSWORD||''' ')||
      decode(DEFAULT_TABLESPACE, 'SYSTEM', null, ' default tablespace "'||DEFAULT_TABLESPACE||'" ')||
      decode(TEMPORARY_TABLESPACE, 'SYSTEM', null, ' temporary tablespace "'||TEMPORARY_TABLESPACE||'" ')||
      decode(profile, 'DEFAULT', null, ' profile "'||PROFILE||'" ')||';' text
    from dba_users
    where username not in ('ADAMS','ANONYMOUS','BLAKE','CLARK','CTXSYS','DBSNMP','DIP','DMSYS','EXFSYS','HR','JONES','MDDATA','MDSYS','MGMT_VIEW','ODM','ODM_MTR','OE','OLAPSYS','ORDPLUGINS','ORDSYS','OUTLN','PERFSTAT','PM','QS','QS_ADM','QS_CB','QS_CBADM','QS_CS','QS_ES','QS_OS','QS_WS','SCOTT','SH','SI_INFORMTN_SCHEMA','SYS','SYSTEM','TRACESVR','TSMSYS','WKPROXY','WKSYS','WK_TEST','WMSYS','XDB');
    You can modify the create user script for the new tablespaces.
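    A typical SQL*Plus wrapper for capturing that output as a runnable script (the filename is just an example) might look like:

    ```sql
    SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF VERIFY OFF HEADING OFF
    SPOOL create_users.sql
    -- run the SELECT against dba_users shown above here
    SPOOL OFF
    @create_users.sql
    ```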
    HTH,
    Thomas.

  • Long execution for Full optimize?

    Hi!
    Does anyone know what a normal execution time for a full optimize is? And what can you do to speed it up?
    We have an application with about 5,000,000 records in the Fact table. We have scheduled 8 imports every day, which each create about 100,000 records in the Fac2 table. And we have scheduled one full optimize every night; this optimize takes about 45 minutes. Is this normal for this number of records in the tables?

    That's about how long our optimizes take with our 9-million-record fact table.
    The fewer the records, the shorter the optimize so you can look into partitioning and archiving if you would like to shorten the optimize time.
