I'm a bit confused about standby log files

Hi all,
I'm a bit confused about something and wondering if someone can explain.
I have a Primary database that ships logs to a Logical Standby database.
Everything appears to be working properly. If I check the v$archived_log view on the Primary and compare it to the dba_logstdby_log view on the Logical Standby, I can see that logs are being applied.
On the logical standby, I have the following configured for log_archive_dest_n parameters:
*.log_archive_dest_1='LOCATION=/u01/oracle/archivedlogs/ORADB1 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
*.log_archive_dest_2='LOCATION=/u02/oracle/archivedlogs/ORADB1 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
*.log_archive_dest_3='LOCATION=/u03/oracle/archivedlogs/ORADB1 VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
*.log_archive_dest_4='SERVICE=PNX8A_WDC ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PNX8A_WDC'
*.log_archive_dest_5='LOCATION=/u01/oracle/standbylogs/ORADB1 VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
*.log_archive_dest_6='LOCATION=/u02/oracle/standbylogs/ORADB1 VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
*.log_archive_dest_7='LOCATION=/u03/oracle/standbylogs/ORADB1 VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
Here is my confusion now. Before converting from a Physical standby database to a Logical standby database, I was under the impression that I needed the standby logs (i.e. log_archive_dest_5, 6 and 7 above) because a Physical standby receives redo from the primary and writes it into the standby logs before applying that redo to the database.
I've now converted to a Logical Standby database. What's happening is that the standby logs are accumulating in the directory pointed to by log_archive_dest_6 above (/u02/oracle/standbylogs/ORADB1). They do not appear to be getting cleaned up by the database.
In the Logical Standby database I do have the STANDBY_FILE_MANAGEMENT parameter set to AUTO. Can anyone explain why the standby log files continue to accumulate, and how I can get the Logical Standby database to remove them once they are no longer needed?
Thanks in advance.
John S

JSebastian wrote:
I assume you mean to ask why, on the standby database, I am using three standby log locations (i.e. log_archive_dest_5, 6, and 7)?
If that is your question, my answer is that I just figured more than one location would be safer, but I could be wrong about this. Can you tell me if only one location should be sufficient for the standby logs? The more I think about it, that is probably correct, because I assume that log transport services will re-request the log from the Primary database if there is some kind of error with the standby log at the standby location. Is this correct?

A configuration as simple as the one below is enough. Why use multiple destinations for the standby logs?
See the note "Step by Step Guide on How to Create Logical Standby" [ID 738643.1]:
>
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3='LOCATION=/arch2/boston/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=boston'
The following table describes the archival processing defined by the initialization parameters shown in Example 4-2.
LOG_ARCHIVE_DEST_1
- When boston runs in the primary role: directs archival of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/boston/.
- When boston runs in the logical standby role: directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/boston/.
LOG_ARCHIVE_DEST_2
- Primary role: directs transmission of redo data to the remote logical standby database chicago.
- Logical standby role: is ignored; LOG_ARCHIVE_DEST_2 is valid only when boston is running in the primary role.
LOG_ARCHIVE_DEST_3
- Primary role: is ignored; LOG_ARCHIVE_DEST_3 is valid only when boston is running in the standby role.
- Logical standby role: directs archival of redo data received from the primary database to the local archived redo log files in /arch2/boston/.
>
Source:-
http://docs.oracle.com/cd/B19306_01/server.102/b14239/create_ls.htm
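For what it's worth, the accumulation question at the top of the thread is usually a matter of SQL Apply's LOG_AUTO_DELETE setting rather than STANDBY_FILE_MANAGEMENT (which governs datafile creation, not log cleanup). A minimal sketch, assuming Oracle 10.2 or later, run on the logical standby:
-- Hedged sketch: let SQL Apply delete foreign archived logs it has consumed
EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'TRUE');
-- Check which received logs are registered and whether they have been applied
SELECT file_name, sequence#, applied FROM dba_logstdby_log ORDER BY sequence#;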

Similar Messages

  • Confused about the log files

    I have written an application that has a Primary and Secondary database. The application creates tens of thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the Secondary, as that element does not change) or it will create new records.
    The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
    The input data I am testing with is originally 2MB as a CSV file, and with a fresh database it creates almost 20MB of data. This is about right for the way it splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, the old copies in the earlier logs would become redundant, and the cleaner thread would clean them up. I am explicitly cleaning as per the examples.
    The issue is that on each run the data just doubles in size! Logically it is fine; physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%), where I would expect most to be empty, or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
    Generally, the processing I am doing on the primary is: look up the key; if it is there, update the entry; if not, create one. I have been using a cursor to do this, with the putCurrent() method for existing updates and put() for new records. I have even tried using Database.delete() and a full put() in place of putCurrent(), but there is no difference (except that it is slower).
    Please help - it is driving me nuts!

    Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
    a. is the application doing anything that prohibits log cleaning? (in your case, no)
    b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
    c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
    1) Ran DbDump with and without -r. I am expecting the data to stay consistent. So, after the first run it creates the data and leaves 20MB in place, 3 log files near 100% used. After the second run it should update the records (which it does, from the application's point of view) but I now have 40MB across 5 log files, all near 100% usage.
    I think it's accurate to say that neither of us is surprised that the second run (which updates data but does not change the number of records) creates a second 20MB of log, for a total of 40MB. What we do expect, though, is that the utilization reported by DbSpace should fall to closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
    Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
    run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
    run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
    run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
    So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
    I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (with -r the dump is not in key order; without -r it is slower, but dumps in key order). Keys and data show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
    I think I asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
    java -jar je.jar DbPrintLog -h <envhome> -S
    and you'll see some output that talks about different types of log entries, and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large that is depends on whether your data is transactional or not. Non-transactional data records have a header of about 35 bytes, whereas transactional data records have 60 bytes added to them. If your data is small, that can be quite a large percentage. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record level locking, and partly because we store a number of internal fields as 64 bit rather than 16 or 32 bit values.
    The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
    A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
    In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem is really around the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduced the size of your logs, it reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
    So in summary, let's try these steps (example invocations follow the list):
    - use DbDump and DbPrintLog to double check the amount and size of your application data
    - make a table of runs, that shows the log size in bytes, number of log files, and the utilization level reported by DbSpace
    - run a je.cleaner.forceCleanFiles cleanLog loop on one of the logs that seems to have a high utilization level, and see how much it reduces to, and what the resulting utilization level is
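    For reference, the utility invocations for those checks might look like this (a sketch; <envhome> stands for your JE environment directory, as in the DbPrintLog command above):
    java -jar je.jar DbDump -h <envhome> -r
    java -jar je.jar DbPrintLog -h <envhome> -S
    java -jar je.jar DbSpace -h <envhome>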
    If it all points to JE, we'll probably take it offline, and ask for your test case.
    Regards,
    Linda

  • Confused about archived log file backup

    From a book, I see
    "we can not combine archived redo log files and datafiles into a single backup",
    But, I do have a command
    "backup...........plus archivedlog"
    they seem to contradict each other;
    why is that?

    They do not conflict with each other:
    "we can not combine archived redo log files and datafiles into a single backup" refers to backup pieces: Oracle cannot combine archived logs and, for example, a tablespace backup in a single backup piece.
    The following command simply tells RMAN to perform a backup of a tablespace plus the archived logs, but as a result it will create at least two backup pieces: one for the tablespace and a second for the archived redo logs.
    RMAN> backup tablespace users plus archivelog delete input skip inaccessible format "C:\%U.bkf";
    Starting backup at 29-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=128 RECID=142 STAMP=690573687
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\0SKIOKQ3_1_1.BKF tag=TAG20090629T004553 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:02:45
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00128_0686744258.001 RECID=142 STAMP=690573687
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00129_0686744258.001 RECID=143 STAMP=690588250
    Finished backup at 29-JUN-09
    Starting backup at 29-JUN-09
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00004 name=C:\APP\MOB\ORADATA\ORCL\USERS01.DBF
    channel ORA_DISK_1: starting piece 1 at 29-JUN-09
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\APP\MOB\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2009_06_29\O1_MF_NNNDF_TAG20090629T004911_54HWVKFO_.BKP tag=TAG20090629T004911 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
    Finished backup at 29-JUN-09
    Starting backup at 29-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=148 RECID=162 STAMP=690770984
    channel ORA_DISK_1: starting piece 1 at 29-JUN-09
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\0UKIOL1B_1_1.BKF tag=TAG20090629T004946 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00148_0686744258.001 RECID=162 STAMP=690770984
    Finished backup at 29-JUN-09
    Starting Control File and SPFILE Autobackup at 29-JUN-09
    piece handle=C:\APP\MOB\PRODUCT\11.1.0\DB_1\DATABASE\C-1213135877-20090629-00 comment=NONE
    Finished Control File and SPFILE Autobackup at 29-JUN-09
    With kind regards
    Krystian Zieja
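    To confirm that the datafiles and the archived logs really did land in separate backup sets, one could follow up with standard RMAN LIST commands (a sketch; the exact output will vary):
    RMAN> LIST BACKUP OF DATABASE SUMMARY;
    RMAN> LIST BACKUP OF ARCHIVELOG ALL SUMMARY;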

  • Confused about standby redo log groups

    hi masters,
    I am a little bit confused about creating redo log groups for a standby database. As per the documentation, the number of standby redo log groups depends on the following equation:
    (maximum number of logfiles for each thread + 1) * maximum number of threads
    But I don't know where to find the threads; in fact, I would like to understand threads in depth.
    How do I find the current thread?
    thanks and regards
    VD

    Is it really possible to install the standby and the primary on the same host?
    Yes, it's possible, and I have done it many times on the same machine.
    As for your confusion about the spfile: I agree the documentation recommends using an spfile, but that matters only for DG Broker handling, if you go with the broker in the future.
    Using an spfile is not an integral step of a primary/standby implementation; you can go with a pfile, though an spfile is the better choice. In any case, keep the pfile from which you created the spfile: add the parameters to that pfile and then mount your standby database with it, or create the spfile from the pfile after adding the parameters. I say this because you might otherwise be adding these parameters from the SQL prompt.
    1. Logs are not getting transferred (even though I configured the listener using Net Manager).
    2. Logs are not getting archived to the standby directory.
    3. 'ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION' never completes its recovery.
    4. When I tried to open the database, it always said the system datafile is not from a sufficiently old backup.
    5. I also tried 'alter database recover managed standby database cancel'.
    Read your alert log file and paste the latest entries here.
    Khurram
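    The thread question above never got a direct answer: each instance has exactly one redo thread, so a single-instance database has only thread 1, and multiple threads appear only with RAC. For example, with three online log groups per thread and one thread, the formula gives (3 + 1) * 1 = 4 standby redo log groups. A hedged sketch of how one might look this up with the standard v$ views:
    -- Threads and their status
    SELECT thread#, status, enabled FROM v$thread;
    -- The thread of the current instance
    SELECT thread# FROM v$instance;
    -- Online log groups per thread, for the formula above
    SELECT thread#, COUNT(*) AS groups FROM v$log GROUP BY thread#;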

  • A bit confused about VFS in AppV 5.0?

    Guys,
    I'm a bit confused about VFS in AppV 5.0. Can anyone describe VFS in 5.0 for me?
    In 4.6, if we install the app to C:\ [any drive other than Q:\], it is called VFS.
    But how does it work in 5.0?
    Thanks in advance.

    Hi VINOD597,
    With AppV 5 we have the PVAD (primary virtual application directory) and the VFS. When you install an application to the PVAD, everything outside the PVAD ends up in the VFS.
    Read the following link for more information:
    http://www.tmurgent.com/TMBlog/?p=1283

  • Standby log files in Oracle Dataguard

    Hi,
    What is the difference between standby log files and online redo log files in a Dataguard environment?
    What is the use of standby log files?
    Thanks,
    Charith.

    You're probably familiar with the Online Redo Logs (ORLs). Transaction changes are written from the Log Buffer to the ORLs by the LGWR process.
    If you are setting up a physical standby, then you will want to create Standby Redo Logs (SRLs) in the standby database. When SRLs are in place, a process called LNS will transport redo from the Log Buffer of the primary to the RFS process on the standby, which will write the redo to the SRLs. If the SRL does not exist, RFS can't do this. The biggest benefit of using SRLs is that you will experience much less data loss, even in MAX PERFORMANCE mode. Redo will be shipped constantly; you won't have to wait for ARCH to transport a full archived redo log.
    Cheers,
    Brian
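    A minimal sketch of adding an SRL group on the standby (the path and size are illustrative; the usual rule is to size SRLs exactly like the primary's online redo logs and to create one more SRL group per thread than there are online groups):
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
      ('/u01/app/oracle/oradata/orcl/srl04.log') SIZE 50M;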

  • Standby log files

    I want to convert our old script-based primary/standby setup into a Data Guard config using LGWR as the log transport.
    I already have old log files on the standby database, but the data in them is from 2004. Not terribly interesting, since the database gets recovered from the archived log files every night.
    Point is, can I use these as standby log files, or do I have to (somehow) drop them and re-create new standby logfiles? I can't drop them anyway: when I try, I get "ORA-01624: log 1 needed for crash recovery". (Like h*ll, since the data is older than Noah.)
    Will these just get re-written?

    Note 219344.1 on Metalink gives "Usage, Benefits and Limitations of Standby Redo Logs (SRL)".
    Standby Redo Logs are only supported for Physical Standby Databases in Oracle 9i, and also for Logical Standby Databases in 10g. Standby Redo Logs are only used if you have LGWR activated for archival to the remote standby database.
    The great advantage of Standby Redo Logs is that every entry written into the online redo logs of the primary database is transferred to the standby site and written into the Standby Redo Logs at the same time; therefore, you reduce the probability of data loss on the standby database.
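    As for the ORA-01624 in the question: that error means the group is still marked CURRENT or ACTIVE for crash recovery, and such a group can never be dropped; only a group showing INACTIVE can be dropped or cleared. A quick check with the standard v$log view:
    SELECT group#, thread#, sequence#, archived, status FROM v$log;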

  • Bit Confused About Leopard Firewall

    Hey ya'll!
    I'm a little confused about what's going on with the Leopard firewall. It seemed that before, you could choose an application, and which ports you wanted to associate with it, via the System Preferences > Sharing > Firewall tab. Now, they went and moved it, and you can only choose the app, and whether it can receive incoming connections. OK, fine. So let's see what ports are open:
    Thee-MacBook:~ rick$ sudo ipfw list
    Password:
    33300 deny icmp from any to me in icmptypes 8
    65535 allow ip from any to any
    Huh? How come I'm only seeing two rules here?
    My original concern was for SoulseeX, and whether the required range of ports were open. While I can search and download, others have problems downloading from me, and I cannot directly connect to others, and other weirdness. So I decided to start checking things out.
    I do have SoulseeX listed in the Firewall tab, and set to receive incoming connections. But when I used this site <http://closer.s11.xrea.com/etc/port_scan.php> to test port 2234, it returned "failed".
    In short, here's what I'm wondering:
    Is the Firewall tab in System Preferences using ipfw?
    By setting an app in the Firewall tab in System Preferences, is the entire range of ports the app wants, in SoulseeX' case, 2234, 2235, 2236, 2237, 2238, 2239, and 2240, made available?
    How can I see what rules are being used, what ports are open?
    Will writing (a couple of) my own rules to ipfw screw up the other settings in the Firewall tab? I would, if possible, like to keeps things simple, and not have to rewrite all the rules by hand. Besides, I'm not exactly an expert!
    TIA!

    Leopard's application firewall is not a port firewall. I'm not sure where you would be able to see the actual port numbers that an application has opened, but your failures may be due to the ports being stealthed. The ipfw firewall is still there if you want to use it - the new firewall won't overrule it.

  • A little bit confused about my review app

    Hi,
    I'm a little bit confused.
    I released my app for iOS 5.1 on the store several months ago. Now, to check that the app is iOS 6 compatible, I have installed Xcode 4.5.
    I thought the correct tests were:
    - build for iOS 6.0 to check that it runs on a device that was upgraded to iOS 6.0
    - build for iOS 6.0 to check that it runs on a device that was NOT upgraded to iOS 6.0 (still 5.1)
    but I cannot run the 5.1 simulator with Xcode 4.5!
    How can I check this?
    Maybe I'm wrong. Could someone help me with my doubt?
    Best Regards
    Al

    In the iOS Simulator: Hardware > Version > 5.1
    Don't forget to test it on real hardware. There should be plenty of iOS 5.1-limited devices available real soon now. Keep one of those for testing.

  • I'm a bit confused about the new Mail, Contacts prefs

    Is anyone else a bit confused by all the new sharing features in the Mail, Contacts & Calendars panel? If I can create address book accounts directly, why would I need to share the "on my Mac" contact list?
    Why is it that I can use the Microsoft Exchange for Google on my iPhone but it won't work in Lion? Has anyone gotten it to work?
    Can anyone share how they have theirs setup and what do they use each one for?

    Yes, you can create a new site, publish to MobileMe and redirect your domain name to it. More info on that can be had here:
    http://discussions.apple.com/thread.jspa?threadID=1164519&tstart=0
    http://docs.info.apple.com/article.html?path=MobileMe/Account/en/acct17114.html
    http://iwebfaq.org/site/iWeb_Domains.html
    FWIW MMe tends to be much slower overall and has no ear to ear support in case you need it. For a personal site it's OK but for a commercial site you would be better off on a commercial hosting server.
    OT

  • I'm a bit confused about the Cloud

    I want to buy Creative Cloud, but I'm very confused about that name...
    I want to use Lightroom and CC on my computer and my laptop. I guess that will work, won't it?

    The Cloud is a delivery process where you pay monthly or annual rent to use programs
    -what is in the entire Cloud http://www.adobe.com/creativecloud/catalog/desktop.html
    Special Photography Plan
    http://helpx.adobe.com/photoshop/kb/differences-photoshop-creative-cloud-photography.html
    Cloud Plans https://creative.adobe.com/plans
    -and subscription terms http://www.adobe.com/misc/subscription_terms.html

  • Can not remove standby log file, please help

    Hi,
    My v$logfile:
    GROUP#  STATUS  TYPE     MEMBER                                      IS_RECOVERY_DEST_FILE
         3          ONLINE   /u01/app/oracle/oradata/orcl/redo03.log     NO
         2          ONLINE   /u01/app/oracle/oradata/orcl/redo02.log     NO
         1          ONLINE   /u01/app/oracle/oradata/orcl/redo01.log     NO
         4          STANDBY  /u01/app/oracle/oradata/orcl/stdlog01.log   NO
         5          STANDBY  /u01/app/oracle/oradata/orcl/stdlog02.log   NO
    And when I clear standby log group 5:
    SQL> alter database clear logfile group 5;
    alter database clear logfile group 5
    ERROR at line 1:
    ORA-00600: internal error code, arguments: [2130], [0], [8], [2], [], [], [],[]
    Please help me :(

    I'm hoping you can provide more information. v$log should not return information on standby logs; V$STANDBY_LOG will.
    Are you performing the query on the primary or the standby side?
    What version of Oracle are you using?
    Why do you need to remove the standby log?
    You should only have to clear a logfile if it has become corrupt; what makes you think this is the case?
    If you can provide more details it would be very helpful.
    Best Regards
    mseberg
    Since you have posted the exact same question in the GENERAL DATABASE section and refuse to supply version information there, you really have to provide more details before anybody will help you.
    Remove standby redo log, get ORA-00600
    Edited by: mseberg on Apr 9, 2011 5:35 AM
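    A hedged set of queries that would answer the questions above, run on the database that raised the ORA-00600:
    SELECT banner FROM v$version;                 -- which Oracle version
    SELECT database_role FROM v$database;         -- primary or standby side
    SELECT group#, thread#, sequence#, status, archived FROM v$standby_log;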

  • [solved]A bit confused about openntpd

    I installed openntpd to keep my system clock synchronized back when I installed Arch. I added it to my daemons array as suggested in the wiki (@openntpd) and haven't given it much thought since then. I only recently noticed that my system time was about 5 minutes ahead of the rest of the world (which worked out in a way as all my clocks are set to my computer, which explains why it's been so long since I missed a bus or showed up 5 minutes late for something).
    I've just run "ntpd -sd" and it set the clock right immediately (I'm not worried about log consistency right now), which makes me wonder what the point of having openntpd in the daemons array is if it doesn't seem to do anything. It's been in there for nearly a year and yet my time still shifted. Am I doing something wrong? The only lines in the daemons log related to ntpd are ones showing when it terminated.
    Also, is there a way to run ntpd so it just sets the time and exits instead of hanging around in daemon mode? It seems that it constantly checks with the server pool for minor time shifts.
    [edit]
    I probably should have done this first, but I'm searching the forum right now for related issues. So far I've deleted /var/lib/hwclock/adjtime as suggested here, because the first number was > 4.5. I don't know if that has any effect, but somehow I would still expect openntpd to set the correct time (which it did when run in a terminal).
    [/edit]
    Last edited by Xyne (2009-06-12 14:55:32)

    When run in a terminal with "-sd", it remains open and continuously prints out replies, which is what I was referring to (maybe it stops after a while but I haven't tried leaving it open long enough to find out). I wanted to know if it were possible to run it once to set the time and then exit. It seems that ntpdate does this, as arkham replied above. I thought it would be possible to do that with ntpd. I've been running it in daemon mode since I first installed and I know that it doesn't constantly ping different servers (I have the switch where I can see it so I know when the network is being used).
    I don't use network manager but thanks for the reply. At the moment it seems to be enough that I've unbackgrounded openntpd. The only problem with that is that it adds a few extra seconds to the boot process which is annoying (normally it flies through my daemons, now it waits on that).

  • A bit confused about updating

    Hi All,
    Last night I loaded about 550 songs onto my 4GB nano. Today I thought it best to go through the library and uncheck lots of songs that I don't want on the nano... but when I try to update, it says there is not enough free space. And when I look into the actual iPod song list, it is grayed out and I am unable to deselect songs there. I have the songs encoded AAC at a 192 kbps bit rate, so at least 700 songs should fit. So how do I update the iPod and get rid of the songs I don't want? I thought it would do the update and, in doing so, delete the songs I had unchecked... and yes, I checked the box in the preferences that says to update only checked songs.
    Tammy

    It will not automatically delete the songs.
    You should turn on "Manually manage songs" in iTunes Preferences, then select the iPod icon and remove the songs using iTunes, and then restart from there.

  • I'm a bit confused about this 32 bit and 64 bit stuff

    Martin Evening's book says "If your computer hardware is 64 bit enabled and you are running a 64 bit operating system, you can run Lightroom as a 64 bit program."
    Well I'm currently running XP 64 Bit with 4 GB of RAM inside. How do I run Lightroom in 64 bit mode?
    He says Windows 7 users and Windows Vista users will want to buy the 64 bit version of Lightroom. I don't see any 64 bit versions of Lightroom. And for those of us on XP?

    Road Worn wrote:
    I bought the disc. I explored the disc and I did find the Setup32 and Setup64 on there so I will assume the installer installed the correct version for me.
    You can check in Task Manager whether LR is running in 32- or 64-bit mode for you. AFAIK, 32-bit processes should be marked *32 in WinXP as well.
    What version of LR are you running? Since you installed from CD, you might have LR3.0 installed. In this case I would strongly suggest you download Version 3.4 from here and install it. There have been many bug fixes from 3.0 to 3.4.
    Beat
