Very Urgent, sqlnet.log, nmi.log file sizes

Hi all,
My database server's C: drive is almost full and the machine has become very slow. I want to remove or shrink some files to free up space, and I found that the sqlnet.log and nmi.log files are each about 600 MB. The database is a 24x7 OLTP system.
Is there any chance I can delete these files, or overwrite them with two empty files of the same names and restart the server?
What problems could that cause? Can anybody tell me what I should do?
Thanks in advance

Hi,
You can check the paths of the SYSTEM tablespace datafiles in the DBA_DATA_FILES view. But to move them from C: to D: you will need downtime; that activity can only be done with the database in the MOUNT stage.
First identify which tablespace is being hit the most, then add a datafile for that tablespace on D: so that the load can be balanced between C: and D:.
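For example, a quick way to see which datafiles currently sit on C: is a query like this (a minimal sketch; adjust the drive letter and formatting to your layout):
-- List datafiles stored on the C: drive, largest first
SELECT tablespace_name,
       file_name,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   dba_data_files
WHERE  UPPER(file_name) LIKE 'C:%'
ORDER  BY bytes DESC;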
If SYSTEM is one of those tablespaces, then add a datafile on D: for the SYSTEM tablespace like this:
ALTER TABLESPACE SYSTEM
ADD DATAFILE 'D:\.....' SIZE 100M AUTOEXTEND ON MAXSIZE 2048M;
You do not need a restart for this; it can be done online.
Before you do the above, I suggest you try to free up some space on C: itself. Check whether the alert log is on C: and has grown large. Check whether the archive logs are on C:, if your database is in ARCHIVELOG mode. Delete any dump files that you created in the past and no longer need. Clear the Event Viewer in Windows, empty the Recycle Bin, and so on.
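To confirm where those files are being written, you can query the relevant init parameters (a small sketch; the exact parameter names depend on your version):
-- Where are trace/alert files and archived logs written?
SELECT name, value
FROM   v$parameter
WHERE  name IN ('background_dump_dest', 'user_dump_dest',
                'log_archive_dest', 'log_archive_dest_1');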
In a nutshell, MAKE SOME PLACE TO LIVE!
Regards.

Similar Messages

  • Fatal NI connect error 12203 resulting huge increase in sqlnet.log file

    Hi,
I am getting the following error message in the SQLNET.LOG file on the client machine. My upload program takes a few hours to complete, and while it runs the SQLNET.LOG file keeps growing into the hundreds of MB, containing only this error repeated over and over.
The program does connect to the database and completes the upload, but the SQLNET.LOG file still grows enormously. Please let me know what is going wrong.
    ERROR in SQLNET.LOG File -
    Fatal NI connect error 12203, connecting to:
    (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle80)(ARGV0=oracle80ORCL)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')))(CONNECT_DATA=(SID=ORCL)(CID=(PROGRAM=C:\ORAWIN95\BIN\IFRUN60.EXE)(HOST=IT_DBA)(USER=IT))))
    VERSION INFORMATION:
         TNS for 32-bit Windows: Version 8.0.5.0.0 - Production
         Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 8.0.4.0.0 - Production
         Windows NT TCP/IP NT Protocol Adapter for 32-bit Windows: Version 8.0.5.0.0 - Production
    Time: 06-MAR-03 12:37:08
    Tracing not turned on.
    Tns error struct:
    nr err code: 12203
    TNS-12203: TNS:unable to connect to destination
    ns main err code: 12560
    TNS-12560: TNS:protocol adapter error
    ns secondary err code: 0
    nt main err code: 102
    TNS-00102: Keyword-Value binding operation error
    nt secondary err code: 0
    nt OS err code: 0
    Regards,
    Mitesh V.

    Hi,
Actually, I thought this error was appearing on only one of the machines, but on checking further I found the same error on almost all the client machines.
Although the programs run fine and get database connectivity, I cannot figure out why the error shows a connection attempt through the BEQ protocol.
Can someone please tell me why this is happening, and how I can find out which protocol is actually being used?
    Regards,
    Mitesh Vijayvargiy

On-line redo log file size reduce

    Dear Experts,
Recently I have done an HADR set-up. My DR server is at a remote location and my network line is not very fast. My question is: can we reduce the log file size, which is currently 63.9921874995806 MB (the default)? If we reduce the size, it may help ship the logs faster.
Kindly suggest the best approach,
    Thanks
    Sadiq

    Hello,
if you are referring to the built-in DB2 HADR functionality, reducing the size of the log files will not help.
    HADR does not transfer complete log files but will replicate logging information of each single transaction constantly to the standby site.
    Your network has to have enough bandwidth to support the average log generation rate. This is not related to the size of individual log files, but to how much logging information is generated per amount of time.
    Kindly check the corresponding DB2 online documentation for HADR performance aspects
    [High availability disaster recovery (HADR) performance|http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=%2Fcom.ibm.db2.luw.admin.ha.doc%2Fdoc%2Fc0021056.html]
But to answer your initial question: the size of the log files can be changed with the LOGFILSIZ database configuration parameter, though it probably will not help in your case.
    Edited by: Hans-Juergen Zeltwanger on Feb 20, 2012 2:49 PM
    Edited by: Hans-Juergen Zeltwanger on Feb 20, 2012 2:50 PM

  • MessageBox log file size

    Hi, 
In our prod environment, the MessageBox data file is within the recommended limit of 2 GB, but the log file is 32 GB. Is this a reason to worry, or is it normal? I couldn't find any recommendations on this.
    Thank you very much!

    This is not normal.
IMO your BizTalk database jobs are not running. Make sure the BizTalk SQL Server jobs are enabled and the SQL Server Agent is running.
Please have a look at the How to Configure the Backup BizTalk Server Job article to enable the jobs.
The BizTalk backup job is responsible for keeping the log file size within limits.
You can try shrinking the log file using the following SQL commands:
    USE BiztalkMsgBoxDb;
    GO
    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE BiztalkMsgBoxDb
    SET RECOVERY SIMPLE;
    GO
-- Shrink the truncated log file to 2 MB.
    DBCC SHRINKFILE (BiztalkMsgBoxDb_Log, 2);
    GO
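One caveat (an assumption worth checking against the BizTalk documentation): the Backup BizTalk Server job relies on log backups, which need the FULL recovery model, so after a one-off shrink like the above you would normally switch the recovery model back, for example:
USE master;
GO
-- Put the MessageBox database back into FULL recovery so the backup job can manage the log again
ALTER DATABASE BiztalkMsgBoxDb
SET RECOVERY FULL;
GO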
I would recommend you have a read of the following articles:
    BizTalk Environment Maintenance from a DBA perspective 
    BizTalk Databases: Survival Guide
    hope this helps. 
    Greetings,HTH
    Naushad Alam
    When you see answers and helpful posts, please click Vote As Helpful, Propose As Answer, and/or
    Mark As Answer
    alamnaushad.wordpress.com

  • Enterprise manager log file sizes

    Hi,
    I was wondering if there is a way of managing the size of the emdb.nohup file in Enterprise Manager. Looking at the documentation, it looks as though you can control the emoms trace and log file sizes, but I can't find anything about the nohup log.
    Ideally I would like to be able to purge the log file.
    Thanks very much!

    Hi again,
I found the emdb.nohup file in my log directory at the location you noted. It is apparently created when the dbconsole is stopped and restarted (emctl start dbconsole), and it is updated each time I make a connection to that database and every time the page refreshes.
I think the nohup suffix is probably intentional on Oracle's part, to indicate that this is an 'active' log file, but it is not a true nohup file in the sense of output from the Unix nohup command (at least that is what I'm thinking).
I'm not knowledgeable enough on this to be sure of my theory, but that is what I basically theorize.
    According to man pages on nohup, it states nohup is "a utility immune to hangups".
    "nohup - run a command immune to hangups, with output to a non-tty"
    To answer your question, there is no problem purging or pruning this file.
I just cleared it out by redirecting the output of date into the file, which truncates it and leaves a single new entry with the current date/timestamp.
    e.g., $ date > emdb.nohup
    Then, I reconnected to my OEM console for this database and it updated the file with new entries for the new connection. No problem....
    Wed Aug 6 09:46:53 EDT 2008
    08/08/06 09:47:07 ## oracle.sysman.db.adm.inst.SitemapController: event="doLoad"
    08/08/06 09:47:07 ## 1. newPage = /database/instance/sitemap/sitemap
    08/08/06 09:47:07 ## 2. newPage = /database/instance/sitemap/sitemap
    Ji Li

  • Log Configurator - Increase log file size from 10 mb to 20 mb

    Hi All,
    We have implemented custom logging in our implementation using custom Log Destinations and Locations.
The log destination (inside the Log Configurator service) we were using earlier had a size of 10 MB per file and a file count of 5.
Now, since the log files were getting archived very quickly, we changed the log file size to 20 MB, keeping the file count at 5.
After restarting the server, the Log Viewer in NWA and in Visual Admin does not show updated logs. We have monitored this for some time now; new logs are being written to the log files, but the situation is still the same.
Strangely, the log files are being updated at the OS level, but the entries are not shown in the Log Viewer in NWA or Visual Admin.
Are there any restrictions on the log file size, or does any other parameter need to be changed to make this work?
Looking forward to your inputs and suggestions.
    Regards,
    Prasanna


  • Reduce the Production Log file size(.LDF)

    Hi Everybody,
We are using R/3 ECC 6.0 with a SQL Server 2005 database. For the past two days our production server's performance has been very slow, which we attribute to the size of the production log file (.LDF); it has crossed 17 GB. I want to reduce this log file size but I don't know how. Please can someone help me do this job, otherwise it will become a serious issue.
    Points will be rewarded
    Thanks
    Siva

    How did you trace the slowness back to the log file?  A 17 GB log file is on the small side for a Production system.  I don't think a hotfix is going to fix your log growth.
Is the log on the same physical disk as your data files? Is it on a very slow hard drive, or is the drive having an I/O problem? That is the only way it would impact performance to a noticeable degree. A large or small log file will have no real effect on performance, since it is just appended to and not read during writes, and in most production environments it is on a separate disk or part of a SAN.
You can limit its growth by backing up the log more frequently. Do you back it up now? You can probably set your backup software to shrink the file when it finishes backing up. You should consult your DBA team and ask for their advice; they can quickly point you in the right direction.
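If you do need to shrink the log once after a backup, a minimal sketch on SQL Server 2005 looks like this (the database name, logical log file name, and backup path are placeholders, not your real names):
-- Back up the transaction log first so the committed portion can be truncated
BACKUP LOG YourSAPDB TO DISK = 'E:\backup\YourSAPDB_log.trn';
GO
USE YourSAPDB;
GO
-- Then shrink the log file to a target size of 4096 MB
DBCC SHRINKFILE (YourSAPDB_log, 4096);
GO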

  • How to set up PopProxy* log file size ?

    Dear All,
    Does anybody know how to set up MMP PopProxy* log file size and rollovertime ?
    ./imsimta version
    Sun Java(tm) System Messaging Server 7.0-3.01 64bit (built Dec 9 2008)
    libimta.so 7.0-3.01 64bit (built 09:24:13, Dec 9 2008)
    Steve

    SteveHibox wrote:
Does anybody know how to set up MMP PopProxy* log file size and rollovertime?
Details on these settings are available here:
    http://wikis.sun.com/display/CommSuite6U1/Communications+Suite+6+Update+1+What%27s+New#CommunicationsSuite6Update1What%27sNew-MMPLogging
    Regards,
    Shane.

  • Get Total DB size , Total DB free space , Total Data & Log File Sizes and Total Data & Log File free Sizes from a list of server

How do I get the SQL Server total DB size, total DB free space, total data and log file sizes, and total data and log file free sizes from a list of servers?

    Hi Shivanq,
To get a list of databases, their sizes, and the space available in each on the local SQL instance:
    dir SQLSERVER:\SQL\localhost\default\databases | Select Name, Size, SpaceAvailable | ft -auto
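If you would rather pull the same information with plain T-SQL (for example when looping over a server list), a sketch against sys.master_files looks like this; note that free space per file needs a separate FILEPROPERTY check inside each database:
-- Data and log file sizes per database, in MB (size is stored in 8 KB pages)
SELECT DB_NAME(database_id) AS database_name,
       type_desc            AS file_type,   -- ROWS = data, LOG = log
       name                 AS logical_name,
       size * 8 / 1024      AS size_mb
FROM   sys.master_files
ORDER  BY database_name, file_type;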
    This article is also helpful for you to get DB and Log File size information:
    Checking Database Space With PowerShell
    I hope this helps.

  • SQL LOG FILE SIZE INCREASING

    Hi DBA's
The SQL log file is occupying more and more disk space on the server; the overall database size is 8 GB.
How can I decrease the SQL LDF file size on the server? Please explain the suitable steps to perform.
    Thanks
    DBA

use master
go
dump transaction <YourDBName> with no_log
go
use <YourDBName>
go
-- 100 is the size in MB you want to shrink to; change it to your needs
DBCC SHRINKFILE (<YourDBNameLogFileName>, 100)
go
-- then you can run the following to check that all went fine
dbcc checkdb(<YourDBName>)
Andy,
What is the point in asking the user to use NO_LOG when you have not even mentioned what this evil command will do? It is seriously not required here, the reason being that the initial size of the log file is set to 8 GB.
Plus, what is the point in running CHECKDB?
I don't agree with any part of what you pointed out.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • Log file size

We have a DNS server running on Solaris 9. It is generating huge logs, so the /var/adm/messages file is very big. Is there any way to create a separate log file for each day, or can I restrict the log file size for a single file?
    Thank you

    Hmmm,
What type of environment is this DNS server used for? How many domains/delegated domains are configured on the host?
I think by default BIND allows 1000 recursive lookup connections. (That is already plenty, and if you have that amount of legitimate traffic you will have to add more DNS servers and configure the nodes accordingly.)
    Is the server listed as a Name Server for your domain and used externally for name resolution for your domain host entries, maybe the SOA?
nslookup (enter)
set type=ns (enter)
your_domain_name (i.e. your_domain.com) (enter)
Or
dig -q NS your_domain.com
    If the affected server returns in the list it is NEVER EVER a good idea to allow recursive lookups.
    My guess is that you are subject to denial of service, unless you host a fairly large environment with 1000s of hosts.
Change the recursive-clients setting back (your system cannot handle 5000 recursive lookups, and your system utilization shows this).
Then configure
category queries { your_query_file; }; in your named.conf
restart BIND
Use rndc to change the trace level to 1
Let it run for 2-5 minutes and then stop BIND entirely
Then run something like:
cat your_query_file | cut -d'/' -f2 | sort | uniq -c | more   (depends on the log file format; better yet, use nawk)
Take a quick look to see if there is one IP that is hammering your system.

  • SQL log file size is extending rapidly

    Hello All,
    We are using ECC 6.0, our database is SQL 2005 & operating system is Windows NT 4x AMD64 L.
Our database log file size is increasing rapidly; it is now larger than all 4 data files combined (about 300 GB).
Last week I tried to shrink the log file, but it didn't work.
Now very little space remains on the disk; please help me.
The system has now started giving a dump at login time, and the dump is like "START_CALL_SICK".
    I am attaching dump error text file.
    Please help why is this happening.
    Thanks in advance
    Mahendra

    Hi,
I have backed up the log file & shrunk the file but it didn't work for me
What was the result? It should shrink the log and release all the space (for all committed transactions).
How can I add another log file?
Can I delete the old log file after adding a new log file?
You can add another log file by following the steps below, but in your case this is not the right solution, because you already have a large log file configured for your database (its size is more than all 4 data files, about 300 GB).
Open SQL Server Management Studio > expand Databases > right-click the database > Properties > select Files > click Add > enter the input parameters (logical file name, path, initial size, etc.) > click OK.
If the system is not allowing you to shrink the log file, it means you have active transactions in the system that are continuously using the log file.
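For reference, the T-SQL equivalent of adding a second log file would look roughly like this (the database name, logical file name, and path below are placeholders):
-- Hypothetical example: add a second transaction log file on another disk
ALTER DATABASE YourECCDB
ADD LOG FILE
(
    NAME = YourECCDB_log2,
    FILENAME = 'E:\SQLLogs\YourECCDB_log2.ldf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);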
    Regards,
    Nick Loy

  • Archive Log file size

    I am using Oracle database 9.2.0.1.0, My OS is Linux AS4 Update version.
My database is in archivelog mode, and the archived log files generated on disk are about 100 MB each. I want to find out why the volume of redo being generated is so large.
    Kindly suggest.
    Regards

An archived log file will always be the same size as the redo log or smaller (never bigger than the redo log size).
ARCHIVE_LAG_TARGET is the reason (apart from manual archiving with ALTER SYSTEM ARCHIVE LOG CURRENT/ALL) why you see archived logs smaller than the redo log.
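A quick way to check this on your instance (a small sketch against the v$ views) is to compare the configured value with the actual archived log sizes:
-- Is a time-based log switch configured?
SELECT value FROM v$parameter WHERE name = 'archive_lag_target';
-- Redo log size vs. recent archived log sizes, in MB
SELECT group#, ROUND(bytes / 1024 / 1024) AS redo_mb FROM v$log;
SELECT sequence#, ROUND(blocks * block_size / 1024 / 1024) AS arch_mb
FROM   v$archived_log
WHERE  first_time > SYSDATE - 1
ORDER  BY sequence#;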
why archive log file size constantly changes?

  • Archive log file size is varying in RAC 10g database.

Environment: Oracle 10g RAC, a 9-node cluster database, with 3 log groups per node and a 500 MB size for each redo log file.
The question is why the archive log file size is varying. I know that whenever there is a log file switch the redo log is archived, so since our redo log file size is 500 MB,
shouldn't the archive log file size also be 500 MB?
Instead we are seeing archive log files varying from 20 MB to 500 MB, which means the redo log files are not using the entire 500 MB of space. What would be causing this, and how can we resolve it?
Some init parameter values (just for information):
    fast_start_mttr_target ----- 400
    log_checkpoint_timeout ----- 0
    log_checkpoint_interval ----- 0
    fast_start_io_target ----- 0

    There was a similar discussion a few days back,
    log file switch before it filled up
The poster later claimed it was because of their log_buffer size. It still remains a mystery to me.
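If it helps, a small sketch you could run on any instance to see how full each archived log actually was, per RAC thread (the 500 MB figure below is taken from your stated redo log size):
-- Per-thread archived log sizes over the last day, as MB and as a % of the 500 MB redo logs
SELECT thread#,
       sequence#,
       ROUND(blocks * block_size / 1024 / 1024) AS arch_mb,
       ROUND(100 * blocks * block_size / (500 * 1024 * 1024)) AS pct_of_redo
FROM   v$archived_log
WHERE  first_time > SYSDATE - 1
ORDER  BY thread#, sequence#;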

  • Very high transaction log file growth

    Hello
Running Exchange 2010 SP2 in a two-node DAG configuration. Just recently I have noticed very high transaction log growth for one database. The transaction logs are growing so quickly that I have had to turn on circular logging to prevent the log LUN from filling up and causing the database to dismount.
I have tried several things to find out what is causing this issue. At first I thought it could be a virus, an ActiveSync user, a user's Outlook client, or our Salesforce integration; however, when I used ExMon I could not see any unusually high user activity. When I looked at the item count for all mailboxes in the particular database experiencing the high transaction log growth, I could not see any mailboxes with an unusually high item count (the command I ran to determine this is below; I ran it several times). I also looked at the message tracking log files, and again could see no indication of a message loop or unusually high message traffic on any particular day. I also followed the guide below, hoping it would let me see inside the transaction log files, but it didn't produce anything that helped me understand the cause of this issue. When I ran the tool against the transaction log files, I saw long runs of DDDDDDDD, OOOOOOOO, or HHHHHHHH.
I am starting to run out of ideas on how to figure out what is causing the log file build-up. Any help is greatly appreciated.
    http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
    Get-Mailbox -database databasethatkeepsgrowing | Get-MailboxStatistics | Sort-Object ItemCount -descending |Select-Object DisplayName,ItemCount,@{name="MailboxSize";exp={$_.totalitemsize}} -first 10 | Convertto-Html | out-File c:\temp\report.htm
    Bulls on Parade

If you have users with iPhones or smartphones using ActiveSync, then one of the quickest ways to see if this is the issue is to have those users shut their phones off and see if the problem goes away. If it is one or more iPhones, then look at what iOS version they are on and get them to update to the latest version, or adjust the ActiveSync connection timeout. NOTE: There was an issue where iPhones caused runaway transaction logs, and I believe it was resolved in iOS 4.0.1.
There was also a problem with the MS CRM client a while back, so if you are using that, check out this link:
    http://social.microsoft.com/Forums/en/crm/thread/6fba6c7f-c514-4e4e-8a2d-7e754b647014
I would also deploy some tracking methods to see if you can home in on the culprits. For example, if you want to see whether the problem is coming from an internal device/machine, you can use one of the following:
    MS USER MONITOR:
    http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en and here is a link on how to use it
    http://www.msexchange.org/tutorials/Microsoft-Exchange-Server-User-Monitor.html
    And this is a great article as well
    http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Also check out ExMon, since you can use it to confirm which mailbox is unusually active, and then take the appropriate action.
     http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en
    Troy Werelius
    www.Lucid8.com
    Search, Recover, & Extract Mailboxes, Folders, & Email Items from Offline EDB's and Live Exchange Servers with Lucid8's DigiScope
