Change in Oracle Parameters and Log file size

Hello All,
We have a scheduled DB Check job, and its log showed a few errors and warnings about Oracle parameters that need to be corrected. We have also gone through SAP Note 830576 (Oracle Parameter Configuration) to change these parameters accordingly. However, we need a few clarifications on the same.
1. Can we change these parameters directly in the init<SID>.ora file, or only in the SPFILE? If yes, can we simply edit the file, or do we need to change it using BRTools?
2. We have tried to change a few parameters using transaction DB26, but it prompts us to maintain the connection variables in transaction DBCO. We are only trying to change the default database, yet it still prompts for connection variables.
We also get checkpoint errors. As per SAP Note 309526, can we create new redo log files of 100 MB and drop the existing ones, or are there other considerations to follow for the size of the log files and for creating new ones? Kindly advise. Our environment is as follows:
OS: Windows 2003 Server
DB: Oracle 10g
regards,
Madhu

Hi,
Madhu, we can change Oracle parameters at both levels, i.e. in init<SID>.ora as well as in the SPFILE.
1. If you make the changes at the init<SID>.ora level, you then have to regenerate the SPFILE and restart the database for the parameters to take effect.
    If you make the changes in the SPFILE, the parameters take effect depending on whether the parameter is dynamic or static. You should then also regenerate the PFILE, i.e. init<SID>.ora (see the sketch below).
2. If possible, do not change the Oracle parameters using the transaction. It is easier and cleaner to do it directly in the database.
3. It is generally good to have larger redo logs. The one thing to keep in mind is that once you increase the redo log size, the archived logs grow accordingly, although fewer of them will be generated.
Apart from that, there won't be any issues.
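A minimal SQL*Plus sketch of what that looks like, run as SYSDBA; the parameter names, values, file path and group numbers below are only placeholders, so take the real ones from SAP Notes 830576 / 309526 and your own layout:

-- Dynamic parameter: takes effect immediately and is persisted in the SPFILE.
ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE=BOTH;
-- Static parameter: written to the SPFILE only, needs a database restart.
ALTER SYSTEM SET log_buffer = 1048576 SCOPE=SPFILE;
-- Keep init<SID>.ora in sync afterwards:
CREATE PFILE FROM SPFILE;

-- Redo log resize along the lines of SAP Note 309526: add new 100 MB groups,
-- switch until an old group becomes INACTIVE, then drop it (file name is an example).
ALTER DATABASE ADD LOGFILE GROUP 5 ('E:\oracle\SID\origlogA\log_g5m1.dbf') SIZE 100M;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
-- check V$LOG and only drop a group once its STATUS shows INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;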
Regards,
Suhas

Similar Messages

  • Can we store ORACLE DATA and LOG files in separate file systems

    I am installing NW2004s on an HP-UX server with an Oracle database.
    I have selected the custom installation option.
    I want the SAP data files to be stored in the /sapdata file system
    and the log files to be stored in the /oraclelog file system.
    During the installation, SAP only asks me for the location of the SAP data files; it does not ask for the location of the log files
    (it does ask for the location of the control files).
    Hence, as per my understanding, the Oracle log files can only be saved in the same directory as the Oracle data files.
    Please confirm.

    I have heard reports of people who set up some kind of synchronization between AD and OID, but I have no hands-on experience with this. There are some notes on Metalink that describe the process, but I try to stay away from AD as far as possible (I usually work in Unix or Linux environments, and those don't really mix with AD).

  • Get Total DB size , Total DB free space , Total Data & Log File Sizes and Total Data & Log File free Sizes from a list of server

    How can I get the SQL Server total DB size, total DB free space, total data & log file sizes, and total data & log file free sizes from a list of servers?

    Hi Shivanq,
    To get a list of databases, their sizes, and the space available in each on the local SQL instance, you can run:
    dir SQLSERVER:\SQL\localhost\default\databases | Select Name, Size, SpaceAvailable | ft -auto
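    If you prefer to query each instance directly, a rough T-SQL sketch of the per-database data and log sizes (run against every server in your list) could look like the following; free space per file would additionally need FILEPROPERTY(name, 'SpaceUsed') executed inside each database:
    -- Per-database data and log file sizes from the instance-wide catalog.
    -- sys.master_files reports size in 8 KB pages.
    SELECT DB_NAME(database_id) AS database_name,
           SUM(CASE WHEN type_desc = 'ROWS' THEN size ELSE 0 END) * 8 / 1024.0 AS data_mb,
           SUM(CASE WHEN type_desc = 'LOG'  THEN size ELSE 0 END) * 8 / 1024.0 AS log_mb
    FROM   sys.master_files
    GROUP  BY DB_NAME(database_id)
    ORDER  BY database_name;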
    This article is also helpful for getting DB and log file size information:
    Checking Database Space With PowerShell
    I hope this helps.

  • Enterprise manager log file sizes

    Hi,
    I was wondering if there is a way of managing the size of the emdb.nohup file in Enterprise Manager. Looking at the documentation, it looks as though you can control the emoms trace and log file sizes, but I can't find anything about the nohup log.
    Ideally I would like to be able to purge the log file.
    Thanks very much!

    Hi again,
    I found the emdb.nohup file in my log directory at the location you noted. It is apparently created when I stop and restart my dbconsole (emctl start dbconsole), and it is updated each time I connect to that database and every time the page refreshes.
    I think the nohup suffix is probably intentional on Oracle's part, to indicate that this is a log file that is 'active', but it is not a true nohup file in the sense of the Unix nohup command (at least that is my thinking).
    Sorry, I'm not knowledgeable enough on this to be sure of my theory, but that is basically what I suspect.
    According to man pages on nohup, it states nohup is "a utility immune to hangups".
    "nohup - run a command immune to hangups, with output to a non-tty"
    To answer your question, there is no problem purging or pruning this file.
    I just cleared it out by redirecting the output of the date command into the file, which empties it except for a single new entry with the current date/timestamp.
    e.g., $ date > emdb.nohup
    Then, I reconnected to my OEM console for this database and it updated the file with new entries for the new connection. No problem....
    Wed Aug 6 09:46:53 EDT 2008
    08/08/06 09:47:07 ## oracle.sysman.db.adm.inst.SitemapController: event="doLoad"
    08/08/06 09:47:07 ## 1. newPage = /database/instance/sitemap/sitemap
    08/08/06 09:47:07 ## 2. newPage = /database/instance/sitemap/sitemap
    Ji Li

  • Do we need to format data and log files with 64k cluster size for sql server 2012?

    Do we need to format the data and log file volumes with a 64 KB cluster size for SQL Server 2012?
    Does this best practice still apply to SQL Server 2012 and 2014?

    Yes. The extent size of SQL Server data files and the maximum log block size have not changed in the new versions, so the guidance remains the same.
    Microsoft SQL Server Storage Engine PM

  • When I move a RAW file from iPhoto to my desktop or Photoshop it changes to a JPEG and reduces in size. How can I get the RAW file across?

    When I move a RAW file from iPhoto on my MacBook Pro to the desktop or to Photoshop, it changes to a JPEG and reduces in size. How can I get the RAW file to move across?

    I create separate folders based on the year and then on the actual date when I take the images. You can make those folders anywhere on any hard drive that is connected to your Mac, whether internal or external. I also use the Photoshop Photo Downloader that is included with Photoshop/Bridge; it creates the date folder, so all I do is create a year folder.
    Open Bridge, or click the Bridge icon in PS, and in the File menu in Bridge select "Get Photos from Camera". The source can be a camera connected to your Mac or a memory card from a camera. A window will open where you select the camera or memory card. Set the location they will be downloaded to (just the folder; you can browse to a folder that you created), then in the "Create Subfolders" drop-down select which date stamp you want to use, or a custom name, or no subfolders at all.
    I've never cared for iPhoto one bit. I tried it but found it way too restrictive. It likes to have full control over how you interact with your images.

  • Change the Data and Log file locations in livecache

    Hi
    We have installed liveCache on a Unix system in the /sapdb mount directory, where the installer has created the sapdata and sapdblog directories. But the Unix team had already created two mount points as follows:
    /sapdb/LC1/lvcdata and /sapdb/LC1/lvclog.
    While installing liveCache we had selected these locations for creating the DATA and LOG volumes. Now they are asking us to move the DATA and LOG volumes created in the sapdata and sapdblog directories to these mount points. How do we move the data and log volumes and keep the database consistent? Is there a procedure to move the files to the mount point directories and change the liveCache pointers to these locations?
    regards
    bala

    Hi Lars
    Thanks for the link. I will try it and let you know.
    But this is a liveCache database (even though it uses MaxDB) which was created by sapinst. Moreover, is there anything to be adjusted in SCM, as well as any modification to be done at the DB level?
    regards
    bala

  • Shell Script to grep Job File name and Log File name from crontab -l

    Hello,
    I am new to shell scripting. I need to write a shell script that can grep the name of the job file (i.e. the .sh file) and the log file from crontab -l.
    #51 18 * * * /home/oracle/refresh/refresh_ug634.sh > /home/oracle/refresh/refresh_ug634.sh.log 2>&1
    #40 17 * * * /home/oracle/refresh/refresh_ux634.sh > /home/oracle/refresh/refresh_ux634.sh.log 2>&1
    In crontab -l there are many jobs; I need to grep the job name, like 'refresh_ug634.sh', and the corresponding log name, like 'refresh_ug634.sh.log'.
    I am thinking of making a universal script that can grep the job name, log name and hostname for one server.
    Then, suppose I modify the refresh_ug634.sh script to call that universal script and echo those values when the script gets executed.
    Please can anyone help?
    All I need is a footer in all the scripts running from crontab on one server, containing:
    job file name
    log file name
    hostname
    Please suggest if there is a better solution. Thanks.

    957704 wrote:
    I need help how to grep that information from crontab -l
    Please can you provide some insight how to grep that shell script name from list of crontab -l jobs
    crontab -l > cron.log -- exporting the contents to a file
    cat cron.log|grep something -- need some commands to grep that info
    You are missing the point. This forum is for discussion of SQL and PL/SQL questions. What does your question have to do with SQL or PL/SQL?
    It's like you just walked into a hardware store and asked where they keep the fresh produce.
    I will point out one thing about your question. You are assuming every entry in the crontab has exactly the same format. Consider this crontab:
    #=========================================================================
    # NOTE:  If this is on a clustered environment, all changes to this crontab
    #         must be replicated on all other nodes of the cluster!
    # minute        (0 thru 59)
    # hour          (0 thru 23)
    # day-of-month  (1 thru 31)
    # month         (1 thru 12)
    # weekday       (0 thru 6, sunday thru saturday)
    # command
    #=========================================================================
    00 01 1-2 * 1,3,5,7 /u01/scripts/myscript01  5 orcl  dev
    00 04 * * * /u01/scripts/myscript02 hr 365 >/u01/logs/myscript2.lis
    00 6 * * * /u01/scripts/myscript03  >/u01/logs/myscript3.lis
    The variations are endless.
    When you get to an appropriate forum (this one is not it), it will be helpful to explain your business requirement, not just your proposed technical solution.

  • SQL LOG FILE SIZE INCREASING

    Hi DBA's
    The SQL log file occupies a lot of disk space on the server; the overall database size is 8 GB.
    How can we decrease the size of the SQL LDF file on the server? Please explain the suitable steps to perform.
    Thanks
    DBA

    use master
    go
    -- DUMP TRANSACTION ... WITH NO_LOG only exists on older SQL Server releases
    -- (it was removed in SQL Server 2008) and discards log records without backing them up.
    dump transaction <YourDBName> with no_log
    go
    use <YourDBName>
    go
    -- 100 is the target size in MB; change it to your needs
    DBCC SHRINKFILE (<YourDBNameLogFileName>, 100)
    go
    -- then you can run a check that all went fine
    dbcc checkdb(<YourDBName>)
    Andy,
    What is the point in asking the user to use NO_LOG when you did not even mention what this evil command does? It is seriously not required, the reason being that the initial size of the log file is set to 8 GB.
    Plus, what is the point in running CHECKDB?
    I don't agree with any part of what you pointed out.
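    For reference, on releases where WITH NO_LOG is no longer available (SQL Server 2008 onwards), a rough sketch of the supported sequence, with placeholder names and paths, would be:
    -- 1. Back up the log so the inactive portion can be reused.
    BACKUP LOG YourDBName TO DISK = N'D:\backup\YourDBName_log.trn';
    GO
    -- 2. Shrink the physical log file (target size in MB).
    USE YourDBName;
    GO
    DBCC SHRINKFILE (YourDBName_log, 100);
    GO
    -- 3. If it will not shrink, check what is keeping the log from truncating.
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'YourDBName';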

  • Log file size

    We have a DNS server running on Solaris 9. It is generating huge logs, hence the /var/adm/messages file is very big. Is there any way to create a separate log file for each day, or can I restrict the log file size of a single file?
    Thank you

    Hmmm,
    What type of environment is this DNS server used for? How many domains/delegated domains are configured on the host?
    I think by default BIND allows 1000 recursive lookup connections. (That is already plenty, and if you have that amount of legitimate traffic you will have to add more DNS servers and configure the nodes accordingly.)
    Is the server listed as a name server for your domain and used externally for name resolution of your domain host entries, maybe the SOA?
    nslookup (enter)
    set type=ns (enter)
    your_domain_name (i.e. your_domain.com) (enter)
    Or
    dig -q NS your_domain.com
    If the affected server shows up in the list, it is NEVER EVER a good idea to allow recursive lookups.
    My guess is that you are subject to a denial of service, unless you host a fairly large environment with 1000s of hosts.
    Change the recursive-clients setting back (your system cannot handle 5000 recursive lookups, and your system utilization shows this).
    Then configure
    category queries { your_query_file; }; in your named.conf
    restart BIND
    Use rndc to change the trace level to 1
    Let it run for 2-5 minutes and stop BIND entirely
    Then run something like:
    cat your_query_file | cut -d'/' -f2 | sort | uniq -c | more (this depends on the log file format; better yet, use nawk)
    and take a quick look to see if there is one IP that is hammering your system.

  • SQL log file size is extending rapidly

    Hello All,
    We are using ECC 6.0; our database is SQL Server 2005 and the operating system is Windows NT 4x AMD64.
    Our database log file is growing rapidly; its size is now bigger than all 4 data files together (about 300 GB).
    Last week I tried to shrink the log file, but it didn't work.
    Now there is little space left on the disk; please help me.
    The system has now started giving a dump at login time, and the dump is "START_CALL_SICK".
    I am attaching the dump error text file.
    Please help me understand why this is happening.
    Thanks in advance
    Mahendra

    Hi,
    I have backed up log file & shrink the file but it didn't worked for me
    What was the result? It should shrink the log and release all the space (for all committed transactions).
    How can i add another log file?
    Can i delete old log file after adding new log file.
    You can add another log file by following the steps below, but in your case this is not the right solution, because you already have a generous log configuration for your database (its size is more than all 4 data files, about 300 GB).
    Open SQL Server Management Studio > expand Databases > right-click the database > select Files > click Add > enter the input parameters (logical file name, path, initial size, etc.) > click OK.
    If the system is not allowing you to shrink the log file, it means you have active transactions that are continuously using the log file.
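    For reference, the T-SQL equivalent of those SSMS steps is roughly the following; the logical name, path and sizes are placeholders:
    ALTER DATABASE YourDBName
    ADD LOG FILE
    (
        NAME = YourDBName_log2,                         -- logical name (placeholder)
        FILENAME = N'E:\SQLLogs\YourDBName_log2.ldf',   -- physical path (placeholder)
        SIZE = 1024MB,
        FILEGROWTH = 256MB
    );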
    Regards,
    Nick Loy

  • Archive Log file size

    I am using Oracle Database 9.2.0.1.0; my OS is Linux AS4.
    My database is in archive log mode, and the archived log files generated on disk are 100 MB each. I want to monitor why so much redo is being generated.
    Kindly suggest.
    Regards

    An archived log file will always be the same size as the redo log or smaller (never bigger than the redo log size).
    ARCHIVE_LAG_TARGET is the reason (apart from manual archiving with ALTER SYSTEM ARCHIVE LOG CURRENT/ALL) why you see archived logs that are smaller than the redo log.
    That is also why the archive log file size constantly changes.
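    If you want to measure how much redo is actually generated over time, one option is to summarise V$ARCHIVED_LOG per hour, roughly like this (the size arithmetic assumes the usual BLOCKS * BLOCK_SIZE accounting):
    -- Archived redo volume per hour, taken from the control file history.
    SELECT TO_CHAR(completion_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                                      AS archived_logs,
           ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb_archived
    FROM   v$archived_log
    GROUP  BY TO_CHAR(completion_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;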

  • On-line redu log file size reduce

    Dear Experts,
    I have recently done an HADR setup; my DR server is at a remote location,
    and my network line is not very fast. My question is: can we reduce the log file size, which is currently 63.9921874995806 MB (the default)? If we reduce the size, it may help to ship the logs faster.
    Kindly suggest the best approach.
    Thanks
    Sadiq

    Hello,
    if you are referring to the built-in DB2 HADR functionality, reducing the size of the log files will not help.
    HADR does not transfer complete log files but will replicate logging information of each single transaction constantly to the standby site.
    Your network has to have enough bandwidth to support the average log generation rate. This is not related to the size of individual log files, but to how much logging information is generated per amount of time.
    Kindly check the corresponding DB2 online documentation for HADR performance aspects
    [High availability disaster recovery (HADR) performance|http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=%2Fcom.ibm.db2.luw.admin.ha.doc%2Fdoc%2Fc0021056.html]
    But, to answer your initial question: the size of the log files can be changed by modifying the LOGFILSIZ database configuration parameter. It probably will not help in your case, though.
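    For completeness, here is a minimal sketch of changing LOGFILSIZ from SQL via the ADMIN_CMD procedure; the database name and page count below are placeholders, the value is in 4 KB pages, and the change only takes effect after the database has been deactivated and reactivated:
    -- Example only: 16384 pages * 4 KB = 64 MB per log file.
    CALL SYSPROC.ADMIN_CMD('UPDATE DB CFG FOR YOURDB USING LOGFILSIZ 16384');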
    Edited by: Hans-Juergen Zeltwanger on Feb 20, 2012 2:49 PM
    Edited by: Hans-Juergen Zeltwanger on Feb 20, 2012 2:50 PM

  • MessageBox log file size

    Hi, 
    In our prod environment, the MessageBox data file is within the recommended limit of 2 GB, but the log file is 32 GB. Is this a reason to worry, or is it normal? I couldn't find any recommendations on this.
    Thank you very much!

    This is not normal.
    IMO your BizTalk database jobs are not running. Make sure your BizTalk SQL Server jobs are enabled and the SQL Server Agent is running.
    Please have a look at the
    How to Configure the Backup BizTalk Server Job article to enable the jobs.
    The BizTalk backup job is responsible for keeping the log file size within limits.
    You can try shrinking the log file using the following SQL commands:
    USE BiztalkMsgBoxDb;
    GO
    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE BiztalkMsgBoxDb
    SET RECOVERY SIMPLE;
    GO
    -- Shrink the truncated log file to 2 MB.
    DBCC SHRINKFILE (BiztalkMsgBoxDb_Log, 2);
    GO
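    One caveat worth adding: if you switch the recovery model to SIMPLE for the shrink, you would normally set it back to FULL afterwards so the Backup BizTalk Server job can resume its log backups, roughly:
    -- Restore the recovery model once the shrink is done.
    ALTER DATABASE BiztalkMsgBoxDb
    SET RECOVERY FULL;
    GO
    -- A full backup is then required to restart the log backup chain.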
    I would recommend you have a read of the following articles:
    BizTalk Environment Maintenance from a DBA perspective
    BizTalk Databases: Survival Guide
    Hope this helps.
    Greetings, HTH
    Naushad Alam
    alamnaushad.wordpress.com

  • Log Configurator - Increase log file size from 10 mb to 20 mb

    Hi All,
    We have implemented custom logging in our implementation using custom Log Destinations and Locations.
    The log destination (inside the Log Configurator service) we were using earlier had a size of 10 MB for each file, and the file count was 5.
    Now, as the log files were getting archived very quickly, we changed the log file size to 20 MB, keeping the file count at 5.
    After restarting the server, the Log Viewer in NWA and within Visual Admin does not show updated logs. We have monitored this for some time now; new logs are being written to the log files, yet the situation is still the same.
    Strangely, the log files are getting updated at OS level, but the entries are not shown in the Log Viewer in NWA or Visual Admin.
    Are there any restrictions on the log file size, or does any other parameter need to be changed to make this work?
    Looking forward for your inputs and suggestions.
    Regards,
    Prasanna


Maybe you are looking for

  • Photos App keeps crashing, won't open and I can't view my photos anymore

    Please HELP! The new Photos App got installed and now every time I try to open it, it crashes and I don't know how to solve this! I used to work with a shared library between iPhoto and Aperture but most of the time I used Aperture, but when I assigne

  • Multiple version of Java on one machine

    Is it possible to have multiple versions of Java co-exist on the same XP Pro machine? Here is our scenario: we have apps that use Java 1.5, but we have one critical app that doesn't support that version yet. It has to use the 1.4 version. Is there a

  • iPad, Acrobat Reader, Content (text and pictures) no longer showing, just blank pages

    Hi, I have a user who successfully opened a .pdf attachment from outlook mail and then opened it in Acrobat Reader on the iPad and then edited it and moved it to a created folder. Everything was working fine and the user highlighted text in the .pdf

  • How to solve the following JDBC-DB2 FORMAT problem?

    Here is the error info:   Message processing failed. Cause: com.sap.engine.interfaces.messaging.api.exception.MessagingException: Error processing request in sax parser: No 'action' attribute found in XML document (attribute "action" missing or wrong

  • Bug report about Erase Application from disk

    Hi, Steps to reproduce this problem : - only jdev11tp2 opened in Windows - create a new Application Workspace - from the 'Application Menu' icon, choose 'Erase Application from disk' ==> the Application directory is remove from IDE but not erased fro