Changing udump, *dump directories in 10g

Greetings folks,
I just (again) cleared the dump directories of our 10g databases. Unfortunately, the dump directories for one database are configured on the root filesystem (/opt/oracle) of a SUSE Linux Enterprise Server 10 machine. Because of that we keep running into trouble, since the root partition is kept small. Is it possible to change the dump directories (adump, bdump, udump, ...) without reinstalling the whole database? If yes, how? I would appreciate some comments on this.
Thank you very much
Jörn

user10850980 wrote:
Hi,
Do we have to restart the instance, or is this change dynamic?
Regards
Gani

You do realize you are asking a question in a thread that is over two years old?
You do not need to restart the instance. This is documented in the Reference Manual. Whenever someone shows me a parameter or a command that I was not familiar with, the very first thing I do is go to the appropriate reference manual, look up the command and/or parameter, and read up on it.
Learning how to look things up in the documentation is time well spent investing in your career. To that end, you should drop everything else you are doing and do the following:
Go to tahiti.oracle.com.
Drill down to your product and version.
BOOKMARK THIS LOCATION
Spend a few minutes just getting familiar with what is available here. Take special note of the "books" and "search" tabs. Under the "books" tab you will find the complete documentation library.
Spend a few minutes just getting familiar with what KIND of documentation is available there by simply browsing the titles under the "Books" tab.
Open the Reference Manual and spend a few minutes looking through the table of contents to get familiar with what KIND of information is available there.
Do the same with the SQL Reference Manual.
Do the same with the Utilities manual.
You don't have to read the above in depth. They are REFERENCE manuals. Just get familiar with WHAT is there to BE referenced. Ninety percent of the questions asked on this forum can be answered in less than 5 minutes by simply searching one of the above manuals.
Then set yourself a plan to dig deeper.
- Read a chapter a day from the Concepts Manual.
- Take a look in your alert log. One of the first things listed at startup is the initialization parms with non-default values. Read up on each one of them in the Reference Manual.
- Take a look at your listener.ora, tnsnames.ora, and sqlnet.ora files. Go to the Network Administrators manual and read up on everything you see in those files.
- When you have finished reading the Concepts Manual, do it again.
Give a man a fish and he eats for a day. Teach a man to fish and he eats for a lifetime.
Edited by: EdStevens on Nov 6, 2010 1:54 PM
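For the record, the change the original post asks about is straightforward: in 10g the dump destinations are initialization parameters, and the bdump/udump/cdump ones are dynamic, so no reinstall or restart is needed for them. A sketch with made-up paths (verify each parameter in the Reference Manual as described above; audit_file_dest, the adump location, is static in 10g and takes effect only after a restart):

```sql
-- paths are examples only; run as SYSDBA
ALTER SYSTEM SET background_dump_dest='/u01/app/oracle/admin/ORCL/bdump' SCOPE=BOTH;
ALTER SYSTEM SET user_dump_dest='/u01/app/oracle/admin/ORCL/udump' SCOPE=BOTH;
ALTER SYSTEM SET core_dump_dest='/u01/app/oracle/admin/ORCL/cdump' SCOPE=BOTH;

-- adump: static in 10g, so spfile only; picked up at the next startup
ALTER SYSTEM SET audit_file_dest='/u01/app/oracle/admin/ORCL/adump' SCOPE=SPFILE;
```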

Similar Messages

  • Admin and dump directories for oracle RAC

I have conflicting environments running RAC on Solaris SPARC. We have a shared file system that contains the dump directories for both instances in the cluster.
Both instances are writing to the same directory for the alert log and other trace files: alert_<sid1>.log and alert_<sid2>.log are in the same directory. Should these be separated by putting the directories on a local file system on each node? My instance is crashing on a regular basis, and I am wondering if this could cause a problem.
    thanks in advance.

AFAIK, the ORA-00600 [12333] couldn't be crashing the instance; rather, it indicates a communication mismatch between the client and the server, or a networking issue. Usually those are ignorable.
However, the other error you indicated, ORA-00600 [kclfadd_1], could be responsible for the instance crashes.
It looks like this issue only affects 64-bit versions of Oracle, and specifically Oracle 10.2.0.2.
One-off patches are available for this issue on Linux, HP Tru64, AIX, Solaris SPARC, and Solaris x86 (all 64-bit platforms only).
This issue is an Oracle bug, caused by corruption in the cache element structures used by the LMS processes for cache fusion.
These crashes probably have nothing to do with the shared bdump/udump locations you have.
My advice would be to apply the one-off patch if it's available for your platform on 10.2.0.2, or upgrade to 10.2.0.3/10.2.0.4, where this bug has been fixed.
    Vishwa

  • How to change SGA and PGA in 10g r2 RAC

    Hi,
How to change SGA and PGA in 10g R2 RAC? It's Linux.

    Hi,
Here is the way I followed to change SGA and PGA in RAC.
Action plan to change the memory parameters in the production environment (5/21/2010)
Note:
=====
The procedure should be practiced in the DEV/TST environment before moving to production.
Although the procedure can be applied as-is in the testing environment, the values used there should be adjusted, since the amount of RAM is not the same on the testing servers and the production servers.
1.  Make the changes from one of the prod instances, e.g. PROD1
    ====================================================
    Log in with SQL*Plus as SYSDBA and run:
    sql>alter system set sga_max_size=28G scope=spfile sid='*';
    sql>alter system set sga_target=28G scope=spfile sid='*';
    sql>alter system set pga_aggregate_target=4G scope=both sid='*';
2.  Verify the dynamic change:
    sql>show parameter pga_aggregate_target;
        -- should show the altered number on both instances
3.  Shut down database PROD (both instances should be shut down)
    $>srvctl stop database -d PROD
    $>$ORA_CRS_HOME/bin/crs_stat    -- to check the database status
4.  Bring up database PROD (both instances should be brought up)
    $>srvctl start database -d PROD
    $>$ORA_CRS_HOME/bin/crs_stat    -- to check the database status
    Log in to both instances as SYSDBA and check:
    sql>show parameter pga_aggregate_target
    sql>show parameter sga_max_size
        -- should still show the altered numbers in both instances
        -- from step 1
5.  Check that everything is OK.

  • Clean dump directories

    Can someone tell me the command to remove a bunch of trace files from dump directories until a particular date.
    Thank you

    864312 wrote:
    Can someone tell me the command to remove a bunch of trace files from dump directories until a particular date.
Thank you

Since that would be an OS command, the answer would depend on the OS, which you didn't think was important enough to show us.
    It will be some variant of the basic file delete command for your un-named OS. Google is your friend for finding such.
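On Linux, for example, a sketch along these lines would do it (the directory, file pattern, and 30-day cutoff are made up for illustration; run with -print instead of -delete first to preview what would be removed):

```shell
# demo dump directory with one stale and one fresh trace file
mkdir -p /tmp/udump_demo
touch -d '2020-01-01' /tmp/udump_demo/ora_123.trc   # GNU touch: backdate the mtime
touch /tmp/udump_demo/ora_456.trc

# remove *.trc files last modified more than 30 days ago
find /tmp/udump_demo -name '*.trc' -mtime +30 -delete

ls /tmp/udump_demo    # only the fresh file remains
```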

  • Create data dump in Oracle 10g

    Hi All,
    Can somebody please tell me how to create a data dump and then import the database structure and data from an existing database?
The data dump would consist of the complete database structure, I suppose.
    I am using Oracle 10g Rel 1.
    Thanks in Advance,
    Tanuja

Note that neither classical export nor Data Pump is really a database backup. You only perform a logical data extraction, and it is difficult to recreate a lost database based only on export dumps. Additionally, you will lose data, because you can only restore the data as it was when you did the export; changes made later are lost. See export/import as an add-on, but back up your database physically. The best way to do this is (in my opinion) RMAN.
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14193/toc.htm
    Werner

  • Changes to data verification between 10g and 11g Clients

    The 10g and 11.1.0.7 Databases are currently set to AL32UTF8.
    In each database there is a VARCHAR2 field used to store data, but not specifically AL32UTF8 data but encrypted data.
    Using the 10g Client to connect to either the 10g database or 11g database it works fine.
    Using the 11.1.0.7 Client to go against either the 10g or 11g database and it produces the error: ORA-29275: partial multibyte character
    What has changed?
    Was it considered a Bug in 10g because it allowed this behavior and now 11g is operating correctly?

    jfedynic wrote:
    The 10g and 11.1.0.7 Databases are currently set to AL32UTF8.
    In each database there is a VARCHAR2 field used to store data, but not specifically AL32UTF8 data but encrypted data.
    Using the 10g Client to connect to either the 10g database or 11g database it works fine.
    Using the 11.1.0.7 Client to go against either the 10g or 11g database and it produces the error: ORA-29275: partial multibyte character
    What has changed?
    Was it considered a Bug in 10g because it allowed this behavior and now 11g is operating correctly?
    29275, 00000, "partial multibyte character"
    // *Cause:  The requested read operation could not complete because a partial
    //          multibyte character was found at the end of the input.
    // *Action: Ensure that the complete multibyte character is sent from the
    //          remote server and retry the operation. Or read the partial
//          multibyte character as RAW.

It appears to me a bug got fixed.
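The failure mode is not Oracle-specific and is easy to reproduce with any strict UTF-8 converter. A shell sketch using iconv (the octal escapes \303\251 are the two-byte UTF-8 encoding of the character é):

```shell
# a complete 2-byte UTF-8 sequence converts cleanly
printf '\303\251' | iconv -f UTF-8 -t UTF-8 > /dev/null && echo "complete: ok"

# the same stream truncated mid-character is rejected by the converter,
# which is essentially the condition ORA-29275 reports
printf '\303' | iconv -f UTF-8 -t UTF-8 > /dev/null 2>&1 || echo "partial: rejected"
```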

  • Changing HOST Name in Oracle 10G AS

    Hi,
I am using Oracle 10g Application Server, installed on Windows 2003 Server (32-bit) with 2 GB RAM.
The machine name was UTI, and everything was working fine. After the successful installation of 10g AS, I suddenly got a requirement to change the host name from UTI to UTTI.
After I changed the host name to UTTI, Oracle 10g AS stopped working. I know that everything is registered with the host name UTI.
Is there any possible solution to make it work again?
Any advice on this?
    Thanks for Your Valuable Information
    SHAN

    user609980 wrote:
I am using "Oracle 10g Application Server". I have installed oracle 10G AS in Windows 2003 Server(32-bit) ,2G RAM

Good question. One that is appropriate for the App Server forums, not the Database forums.
    And answered in the docs.
Which docs? Well, that depends on the VERSION of the app server, not the BRAND as you specified. (Assuming 10g R2, 10.1.2.x, you would look under the App Server 10g R2 docs at http://tahiti.oracle.com, click on the System Management tab, and look at Chapter 8 of the "Oracle Application Server Administrator's Guide".)

  • Import Dump Error from 10g to 11G Database

    Hi
We are importing a dump from a 10g DB into an 11g DB. We received the dump file on tape and are trying to import it with the imp command:
imp system/sys@test11g file=/dev/st0 full=y  fromuser=x touser=x
However, it gives the error "IMP-00037: Character set marker unknown". Please advise on this, and let us know what to take care of while importing a 10g dump into an 11g DB.
    Thanks in advance.

    Hi All
We have checked with the client and have the export log; the dump was exported with the expdp command. The client sent the dump on a tape drive, so we tried with the impdp command:
impdp system/sys@test11g SCHEMAS=test DIRECTORY=data_pump_dir LOGFILE=schemas.log
I have the following questions:
1) With impdp, is it always required to have an Oracle directory object pointing to the dump file?
2) Can I create an Oracle directory with the tape device path, i.e. /dev/st01?
For now, I have created an Oracle directory with the tape device path. However, the above command gives the errors below:
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 536
    ORA-29283: invalid file operation
Please let me know how to import the dump with the impdp command when the exported file is on a tape drive.
Thanks in advance
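Data Pump, unlike the original imp, cannot read from a tape device: the DIRECTORY object must point to a regular filesystem path. So the dump has to be staged onto disk first. A minimal sketch of that staging step (device path and filenames are examples; a throwaway file stands in for the tape here so the commands can run anywhere):

```shell
# stand-in for the tape device (/dev/st0 in the original post) so this runs anywhere
TAPE=$(mktemp)
printf 'DUMPDATA' > "$TAPE"

# stage the raw dump image onto a filesystem path the DIRECTORY object can point to
mkdir -p /tmp/dpdump
dd if="$TAPE" of=/tmp/dpdump/full.dmp bs=64k 2>/dev/null

# verify the staged copy matches the source before running impdp against it
cmp -s "$TAPE" /tmp/dpdump/full.dmp && echo "staged ok"
```

After staging, the Oracle directory object would be created against /tmp/dpdump (or wherever the dump was copied), not the device path.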

  • Import Oracle 9i dump into Oracle 10g

    Hi All,
I want to know how to import an Oracle 9i dump into Oracle 10g.
Kindly inform us how it can be done.
    Thanks
    Pandianaj

    Hi,
    In order to move data from 9i to 10g:
    o use exp version 9i
    o use imp version 10g
    From documentation:
    Exporting Data From Releases Earlier Than 10.2 and Importing Into Release 10.2
Export From    Import To     Use Export Utility For    Use Import Utility For
Release 10.1   Release 10.2  Release 10.1              Release 10.2
Release 9.2    Release 10.2  Release 9.2               Release 10.2
Release 8.1.7  Release 10.2  Release 8.1.7             Release 10.2
Release 8.0.6  Release 10.2  Release 8.0.6             Release 10.2
Release 7.3.4  Release 10.2  Release 7.3.4             Release 10.2
------------------------------------------------------------------------------
Cheers
    Legatti

  • Changing the port number in 10g Discoverer- Middle Tier & Infrasctructural

Earlier I installed 10g Discoverer, which includes both the Middle Tier and the Infrastructure Tier, with default port numbers.
Now I would like to change the default port numbers to other port numbers. This is because I have to do another Discoverer installation with the default port numbers.
In this case I have one question: is it required to change all the port numbers that exist in the file "portlist.ini" of both the Middle Tier and the Infrastructure Tier?
What change do I still need to make on the Oracle 11i application side?
    Thanks
    Naveen

    I did troubleshooting to verify the group, but this just changed the port number back to the default in the listener.ora & tnsnames.ora.
    So I did all the steps again to change the port number from the default to another - via lsnrctl status, i see that the new port number is being used, I can also log in to the database via Toad using the new port number, in v$parameter i see that the local_listener is registered on the new port number....only under the Fail Safe manager, the port number (under listener parameter) has not changed....it still shows the default port number. Anyone know how to change this???

  • Changing background dump dest , does not work

    Hi,
I was asked to change the background dump dest location in a RAC (10.2.0.5) environment. I just used the command below to do this:
alter system set background_dump_dest='/newlocation' scope=both;
It completed successfully, so new alerts and all the other process trace files should now be written to the new location. But the old location is still being used for the alert log and the other process files. I even flushed and removed the alert log file from the old location, but Oracle automatically regenerates the alert log in the old location. So can anyone kindly tell me how to change this parameter so that it starts dumping alert logs in the new location?
    regards,

I would have tried to change it in all instances with:
alter system set background_dump_dest='<new location>' sid='*';
But on my 10.2.0.1 RAC test this only works for trace files: to have the alert log in the new location I needed to restart each instance, as for rolling patching.

  • ALTER SYSTEM DUMP DATAFILE in 10g

    Hi all,
    When I refered to the the documentation "Oracle Database SQL Reference 10g Release 2 (10.2)"
    to obtain information about the command "ALTER SYSTEM DUMP DATAFILE", I did not see it there.
    Is it obsolete?
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_2013.htm#i2053602
    Actually, when I tested it, I did not see any trace file generated.
    Any guideline is appreciated.

    Which OS are you on?
However, the ALTER SYSTEM DUMP DATAFILE command works with 10g, no issues. It is an internal/undocumented command, which would explain why it is not in the SQL Reference. Do the following:
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'DUMPFILE';
ALTER SYSTEM DUMP DATAFILE 'FILE.DBF' BLOCK 1;
Now look into your dump directory; you will have a trace file with DUMPFILE in its name.
    Jaffar

  • How to change the application icon in 10g

    Dear all,
how to change the application icon in Forms 10g? I mean the icon located on the window itself.
    Best Regards,
    ShoooSh

    Hello,
Maybe you could find a solution in this article.
    Francois

  • Importing 9i dump file into 10g database

    Hello guys,
I know that a 9i import is incompatible with 10g Data Pump. I want to know how I can move the data from my production database (9i) to the new site (10g) without much hassle.
I don't want to use an upgrade, because not all objects in 9i are needed in the new version of the application. I just want to export what I need from 9i to a dump file and then reimport it into 10g.
Have imp/exp been deprecated in 10g?
And I know I will get invalid objects; I hope recompilation will solve that. I will start the test next week. Just wanted to confirm.
    Thank you
    Charles

I don't think export/import is deprecated in Oracle 10g, but Data Pump is mostly used in 10g.
Even if exp/imp were deprecated in 10g, you could still have an Oracle 9i client installation with exp/imp, and then use imp to connect to a 10g database.
    Thanks
    Shasik

  • To get changes from Dumps

    Hi,
    Task:
1) We receive daily a full Oracle dump (two schemas) with approx. 5,000,000 rows (we cannot change/customize the original database).
2) We have to track the daily changes for certain tables, with information like date, table name, certain columns, and kind of change (insert/update/delete), into an Oracle Express Edition DB.
3) Approach to solving the problem:
a) Import the whole dump (2 schemas)
b) Copy the original 2 schemas into 2 copy schemas (how?)
c) Delete the content of the original schemas (how?)
d) Import the whole dump (2 original schemas)
e) Compare the content (only certain tables) of the original/copy schemas
.....i) Build the difference ("-") to identify DELETE/INSERT
.....ii) Build the intersection with a compare (how?)
.....iii) Insert rows with the date from 2) into the appropriate tables
f) Delete the content of the 2 copy schemas
g) "On the next day" repeat from b)
    I would like to have some suggestions regarding 3b, 3c and 3eii:
    A) Easy way
    B) performance
    Thanks a lot in advance,
    Michel
    Edited by: Michel77 on Jun 22, 2012 6:24 PM

    Let me take a stab at this ...
1. Every day, import the schemas as schema1_yyyymmdd and schema2_yyyymmdd.
2. Write a procedure that takes two dates as parameters and then compares the diff using [dbms_comparison|http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_comparison.htm#CHDHEFFJ]. This might be helpful for comparing table definitions/data.
[dbms_metadata_diff|http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_metadiff.htm#CHEGCGCB] can be used to compare non-data stuff.
It seems like hard work, but this is a one-time setup; then you just have to call the procedure with different dates. Of course I am assuming you are on Oracle 11g; if not, you can still write code to compare data. Check the AskTom website; there are some good reads on comparing data between two tables/databases, etc.
Hope this helps some.
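As a plain OS-level illustration of the set differences in step 3e (not the dbms_comparison approach itself): if each table is spooled to a sorted text file per day, comm(1) splits the rows into inserts and deletes. File names and row contents below are made up:

```shell
# yesterday's and today's snapshot of one table, one row per line (made-up data)
printf 'row1\nrow2\n' | sort > old.txt
printf 'row2\nrow3\n' | sort > new.txt

comm -13 old.txt new.txt   # lines only in new.txt -> inserts (prints: row3)
comm -23 old.txt new.txt   # lines only in old.txt -> deletes (prints: row1)
```

Inside the database, the same idea is the MINUS of today's table against yesterday's copy, and vice versa.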
