Transaction logs not getting cleaned up
I'm using Directory Server v5.2 on Windows 2000, and over the weekend we hit a situation where our server ran out of disk space (5 GB drive with 1.3 GB free at the start of the test). When I looked in the slapd-mydir\db directory, it was filled with over 900 log.00000xxx files totalling 1.3 GB in size (about 10 K each).
I believe these are the transaction logs, and I don't see why they are not getting cleaned up.
Restarting slapd did not fix it, and deleting those files causes the directory server to complain vigorously about missing log files.
Every other server I checked in our lab did not have this problem - only this one. Any thoughts on how this can happen and what one can do about it? We are running with the default attributes and settings for transaction logging.
We are now having to completely re-install this server.
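For anyone sizing up the same symptom, counting and totalling the log files is a quick first check. A minimal sketch, using a throwaway directory in place of slapd-mydir\db so it is safe to try anywhere:

```shell
# Simulate a db directory holding Berkeley DB-style transaction logs,
# then count them and total their size -- the same check you would run
# against the real slapd instance directory.
DBDIR=$(mktemp -d)
for i in 0000000001 0000000002 0000000003; do
    head -c 10240 /dev/zero > "$DBDIR/log.$i"   # 10 K dummy log files
done
ls "$DBDIR" | grep -c '^log\.'   # how many transaction logs
du -sk "$DBDIR"                  # total space they occupy, in KB
```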
Hi, I'd like to know how you solved this problem.
HC
Similar Messages
-
User temp tables not getting cleaned out from tempdb?
I'm seeing what look like temp tables hanging around in tempdb and not getting cleaned out.
This is SQL 2008
SELECT * FROM tempdb.sys.objects so WHERE name LIKE '#%' AND DATEDIFF(hour, so.create_date, GETDATE()) > 12
I'm seeing about 50 or so objects returned; they are of type user_table, and some of them have create dates several days in the past. I know users do not run processes that take that long, so why are these objects still hanging around? TempDB gets reset after a server restart, but other than that, how long are these things going to hang around?

A local temporary table's scope exists only for the duration of the user session or the procedure that created it. When the user logs off, or the procedure that created the table completes, the local temporary table is dropped.
You can check the below blog for more information
http://sqlblog.com/blogs/paul_white/archive/2010/08/14/viewing-another-session-s-temporary-table.aspx
--Prashanth -
/var/tmp not getting cleaned out...
This might not exactly apply to the Finder, Dock or Dashboard, but it is file maintenance, and I don't see a closer match.
I've noticed that my /var/tmp directory has a lot of files that are pretty old. There are 3 directories there that seem to be written to daily; those I wouldn't touch. But there's a whole bunch of files with names like 4568943754a6d... around 56 MB worth in total, each about 1.5 MB.
Why are these not getting cleaned up automatically? Is there a limit on this directory that could be causing me issues? I have several other questions, but I don't dare mention them here for fear of being told to open a new post.
Thanks!

Hi again,
Does '3 days ago' actually set the time? So '1 week ago' would be valid too?

Ah, I was afraid you might ask that.
'3 days ago' does "set the time", as you say. '1 week ago' might work, but the man pages are not particularly helpful about what works and what doesn't. The find man page refers you to the cvs man page, which says:
"However, cvs currently accepts a wide variety of other date formats. They are intentionally not documented here in any detail, and future versions of cvs might not accept all of them."
But that gives an example of "1 hour ago" being valid, so I tried a few variations.

Whacking whole directories scares me... especially since there are live directories in here that I think shouldn't be deleted... can I take this part out?

Yes, you can leave out the complete second "find" command if you wish. But applications shouldn't leave files or folders lying around in /tmp or /var/tmp and expect them to still be there when they come back!

Excuse my ignorance: what language is this?

This is the Bourne shell, or more accurately bash. You can try reading the man page, but if you're interested, I think you'd find an introductory shell-scripting book a lot more readable!

My weekly script hasn't run since Nov 18th, but the monthly and daily both say Dec 1st!!!

No problem. My weekly hasn't run since Nov 15th. It will eventually get run, after about a week of CPU time (not sleeping).
Now that the periodic scripts are managed by launchd rather than cron, they do eventually get run. Unless you are running a busy server with hundreds of users, rotating the log files is no big deal with today's typical disk sizes. -
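The cleanup being discussed can be sketched as follows. A minimal sketch assuming GNU coreutils and findutils (the -d option to touch and the -delete action are GNU extensions), practising on a throwaway directory rather than /var/tmp itself:

```shell
# Practise on a scratch directory first; point find at /var/tmp only
# after reviewing what the listing step prints.
SCRATCH=$(mktemp -d)
touch -d '5 days ago' "$SCRATCH/stale"   # GNU touch; BSD touch uses -t instead
touch "$SCRATCH/fresh"
find "$SCRATCH" -type f -mtime +3        # list candidates older than 3 days
# find "$SCRATCH" -type f -mtime +3 -delete   # uncomment to actually remove them
```

Note that -mtime counts whole 24-hour periods, which is why the listing step is worth running on its own before adding -delete.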
SQL Server transaction log not truncating in simple recovery model mode
The transaction log for our SQL Server database is no longer truncating when it reaches the restricted file growth limit set by the autogrowth settings. Previously, as expected, it would reach this limit and then truncate, allowing further entries. Now it stays full and the application using it shuts down. We are using 2008 R2 and the recovery model is set to simple. Is this a known behaviour/fault which can be resolved? Thanks.

As already suggested, check the wait type in log_reuse_wait_desc from sys.databases, and check for open transactions on the database.
Also check what long-running SPIDs are waiting on, from the wait_type column in sys.sysprocesses.
0 = Nothing - what it sounds like; nothing should be waiting.
1 = Checkpoint - waiting for a checkpoint to occur. This should happen, and you should be fine, but there are some cases to look for here for later answers or edits.
2 = Log backup - you are waiting for a log backup to occur. Either you have them scheduled and it will happen soon, or you have the first problem described here and now know how to fix it.
3 = Active backup or restore - a backup or restore operation is running on the database.
4 = Active transaction - there is an active transaction that needs to complete (either way, ROLLBACK or COMMIT) before the log can be backed up. This is the second reason described in this answer.
5 = Database mirroring - either a mirror is getting behind (or there is some latency in a high-performance mirroring setup), or mirroring is paused for some reason.
6 = Replication - there can be replication issues that cause this, such as a log reader agent that is not running, or a database that thinks it is marked for replication but no longer is, among various other reasons. You can also see this reason when everything is perfectly normal, simply because you looked at just the right time, as transactions were being consumed by the log reader.
7 = Database snapshot creation - you are creating a database snapshot; you'll see this if you look at just the right moment as a snapshot is being created.
8 = Log scan - I have yet to encounter an issue with this running along forever. If you look long enough and frequently enough you can see it happen, but it shouldn't be a cause of excessive transaction log growth that I've seen.
9 = AlwaysOn Availability Groups - a secondary replica is applying transaction log records of this database to a corresponding secondary database.
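As a starting point, the check suggested above can be run from any query window. A minimal sketch; the server name is a placeholder, so the sqlcmd line is left commented out:

```shell
# The diagnostic query behind the numbered list above: one row per
# database, with the current reason the log cannot be reused.
SQL='SELECT name, log_reuse_wait, log_reuse_wait_desc FROM sys.databases;'
echo "$SQL"
# sqlcmd -S YOURSERVER -E -Q "$SQL"   # placeholder server; -E = Windows auth
```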
-
Wlproxy.log not getting generated
Hi
I am trying to proxy WebLogic requests through IIS 7.5, and it is working, but the wlproxy logs are not getting generated. Please find my iisproxy.ini file contents below.
Could anyone tell me if I am missing anything?
Steps followed.
1. created website in IIS
2. added isapi filter by giving the iisforward.dll path, name given is wlforward
3. added handler mappings by giving *.wlforward as the path and giving iisproxy.dll path
4. added the file iisproxy.ini file
iisproxy.ini contents
WebLogicHost=xx.xx.xx.xx
WebLogicPort=7002
WLForwardPath=/
DefaultFileName=/TEST/index.jsp
WLproxySSL=ON
SecureProxy=OFF
RequireSSLHostMatch=false
Debug=ALL
DebugConfigInfo=ON
WLTempDir=D:/Temp
Regards
PPK

It is working now. It was something to do with admin rights on the creation of that file. I got full admin access, and now it works.
-
Custom log not getting generated in weblogic 10.3.5
We have WebLogic 10.3.5 as our application server, with a DAM (Digital Asset Management) application deployed on it. We have also deployed 2-3 more applications which share connection properties files (dfc and log4j) from a shared location outside the application war.
Now, we have configured the log path, but the logs are not generated there; the output shows up in WebLogic's .out log instead.
Is there any configuration change which can create the log files as specified in our log4j.properties? Seeking help urgently.

How are you writing to the log file in your application?
When you use System.out.println, for example, the entry will show up in the .out file.
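For illustration, a minimal log4j 1.x properties sketch that routes logging to a file appender instead; the path, appender name, and sizes here are placeholder assumptions, not taken from the poster's setup:

```properties
# Hypothetical log4j.properties fragment: send application logging to a
# file appender rather than relying on System.out (which lands in .out).
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=/shared/config/logs/dam.log
log4j.appender.FILE.MaxFileSize=10MB
log4j.appender.FILE.MaxBackupIndex=5
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n
```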
You need to configure the right log handler to write the logging where you expect it. -
MV Logs not getting purged in a Logical Standby Database
We are trying to replicate a few tables in a logical standby database to another database. Both the source ( The Logical Standby) and the target database are in Oracle 11g R1.
The materialized views are refreshed using FAST REFRESH.
The Materialized View Logs created on the source ( the Logical Standby Database) are not getting purged when the MV in the target database is refreshed.
We checked the entries in the following Tables: SYS.SNAP$, SYS.SLOG$, SYS.MLOG$
When a materialized view is created on the target database, a record is not inserted into the SYS.SLOG$ table and it seems like that's why the MV Logs are not getting purged.
Why are we using a Logical Standby Database instead of the Primary? Because the load on the Primary Database is too high and the machine doesn't have enough resources to support MV-based replication; CPU usage is at 95% all the time. The application owner won't allow us to go against the Primary database.
Do we have to do anything different in terms of Configuration/Privileges etc. because we are using a Logical Standby Database as a source ?
Thanks in advance.

We have an 11g RAC database on Solaris where there is a huge gap in archive log apply.
Thread Last Sequence Received Last Sequence Applied Difference
1 132581 129916 2665
2 108253 106229 2024
3 107452 104975 2477
The MRP0 process also seems not to be working. The standby lags the primary by almost 7000+ archives.
I suggest you use incremental roll-forward backups to bring the standby back in sync; use the link below for a step-by-step procedure.
http://www.oracle-ckpt.com/rman-incremental-backups-to-roll-forward-a-physical-standby-database-2/
Some questions:
1) Are those archives transported and just not applied?
2) On the production side, do you have the archives, or backups of the archives?
3) What errors have you found in the alert log file? Post the output of:
SQL> select severity,message,error_code,timestamp from v$dataguard_status where dest_id=2;
4) What errors are in the primary database alert log file?
Also post:
select ds.dest_id id
, ad.status
, ds.database_mode db_mode
, ad.archiver type
, ds.recovery_mode
, ds.protection_mode
, ds.standby_logfile_count "SRLs"
, ds.standby_logfile_active active
, ds.archived_seq#
from v$archive_dest_status ds
, v$archive_dest ad
where ds.dest_id = ad.dest_id
and ad.status != 'INACTIVE'
order by
ds.dest_id
/
Also check for errors on the standby database. -
Transaction logs not being generated.
Hi Team,
We configured log shipping successfully on SQL 2000 & ECC 5.0. It ran fine until last night, but today we found that log shipping is not happening as expected.
We found that transaction logs are not being generated. We couldn't find any errors, and the log shipping maintenance job seems to have been stuck in an executing state for quite a long time. We tried stopping and starting the job manually, but with no success.
Can someone please suggest?
Thanks & Regards,
Vinod

Hi Markus,
The recovery model is set to 'Full'. Even when we take a manual backup of the transaction logs, it does not initiate the process; it just hangs.
Any idea what could be causing this?
Thanks & Regards,
Vinod -
Processing log not getting updated for send immediatly
Hi,
I have a custom output type and a custom function module for issuing the output for a delivery. My problem is that the processing log gets updated for 'send in batch' and the other dispatch options, but not for SEND IMMEDIATELY. I am using the same FM for the output issue. I check whether the dispatch time is 4 (send immediately); if yes, I call a subroutine with
PERFORM xxxxxxxx ON COMMIT
and in that subroutine I call the function module
CALL FUNCTION 'xxxxxxxxx' IN BACKGROUND TASK AS SEPARATE UNIT
The functionality of issuing the output works, but the processing log is not getting updated. What might be the problem? It gets updated for 'send in batch' but not for dispatch time 4.

I am updating the log using the form below; at every step I collect the messages and pass them to this form.
FORM update_nast_log USING p_msgid
p_msgnr
p_msgty
p_msgv1
p_msgv2
p_msgv3
p_msgv4.
* Local data declaration
DATA: lv_msgid LIKE sy-msgid,
lv_msgnr LIKE sy-msgno,
lv_msgty LIKE sy-msgty,
lv_msgv1 LIKE sy-msgv1,
lv_msgv2 LIKE sy-msgv2,
lv_msgv3 LIKE sy-msgv3,
lv_msgv4 LIKE sy-msgv4.
* clear all the local variables
CLEAR:lv_msgid,
lv_msgnr,
lv_msgty,
lv_msgv1,
lv_msgv2,
lv_msgv3,
lv_msgv4.
* move message ID, number, message type and the
* messages to local variables
MOVE: p_msgid TO lv_msgid,
p_msgnr TO lv_msgnr,
p_msgty TO lv_msgty,
p_msgv1 TO lv_msgv1,
p_msgv2 TO lv_msgv2,
p_msgv3 TO lv_msgv3,
p_msgv4 TO lv_msgv4.
* Update nast table using nast protocol update
CALL FUNCTION 'NAST_PROTOCOL_UPDATE'
EXPORTING
msg_arbgb = lv_msgid
msg_nr = lv_msgnr
msg_ty = lv_msgty
msg_v1 = lv_msgv1
msg_v2 = lv_msgv2
msg_v3 = lv_msgv3
msg_v4 = lv_msgv4.
ENDFORM. " UPDATE_NAST_LOG -
OSD Logs not getting written to SLShare
We have the SLShare defined in the CustomSettings.ini as..
SLShare=\\<servername>\Logs$
SLShareDynamicLogging=\\<servername>\OSDLogs$
Logs used to be written to both locations with no issue. Over the past couple of months, no logs have been written to either location. The shares have been recreated, and still nothing. We checked the permissions; nothing has changed.
Ideas?

I do apologize for not providing a lot more info. It's been a bad week with other issues, and I'm trying to clear up some old questions and concerns - this being one of them.
During the WinPE phase of the deployment, we connected to the SLShare with the account we use for joining the domain; that account connects without any issue.
In the ZTIGather.log, when we compare existing logs taken from the local drive of a machine that has been imaged (or is in the process of imaging) against the logs centrally stored on the server, the server copies show the property
SLShare is now = \\<servername>\Logs$
In the logs extracted from the local drives (..Windows\CCM\Logs\) of machines imaged in the past couple of months, that property value does not exist, but we also see the following errors, which are not present in the logs centrally stored on the server from a couple of months back. We have verified that the file Microsoft.BDD.Utility.dll is in the correct location for
… Toolkit Package\Tools\x86 and …Toolkit Package\Tools\x64.
Finished getting network info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Getting DP info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Unable to determine ConfigMgr distribution point ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Finished getting DP info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Getting WDS server info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Unable to determine WDS server name, probably not booted from WDS. ZTIGather 11/5/2014 10:29:11
AM 0 (0x0000)
Finished getting WDS server info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
Property HostName is now = MININT-PUPTGTH ZTIGather 11/5/2014 10:29:11 AM
0 (0x0000)
Getting asset info ZTIGather 11/5/2014 10:29:11 AM 0 (0x0000)
FindFile: The file x86\Microsoft.BDD.Utility.dll could not be found in any standard locations. ZTIGather
11/5/2014 10:29:13 AM 0 (0x0000)
FindFile(...\Microsoft.BDD.Utility.dll) Result : 1 ZTIGather 11/5/2014
10:29:13 AM 0 (0x0000)
RUN: regsvr32.exe /s "" ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FindFile: The file x64\Microsoft.BDD.Utility.dll could not be found in any standard locations. ZTIGather
11/5/2014 10:29:13 AM 0 (0x0000)
FindFile(...\Microsoft.BDD.Utility.dll) Result : 1 ZTIGather 11/5/2014
10:29:13 AM 0 (0x0000)
RUN: regsvr32.exe /s "" ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FAILURE (Err): 429: CreateObject(Microsoft.BDD.Utility) - ActiveX component can't create object ZTIGather
11/5/2014 10:29:13 AM 0 (0x0000)
Property AssetTag is now = No Asset Information ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property SerialNumber is now = R900VB6B ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property Make is now = LENOVO ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property Model is now =*** ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property Product is now =*** ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property UUID is now = 08136381-5318-11CB-8777-F9DA97025E14 ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
Property Memory is now = 7851 ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property Architecture is now = X86 ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property ProcessorSpeed is now = 2701 ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
Property CapableArchitecture is now = AMD64 X64 X86 ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsLaptop is now = True ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
Property IsDesktop is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsServer is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsUEFI is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsOnBattery is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property SupportsX86 is now = True ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property SupportsX64 is now = True ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property SupportsSLAT is now = True ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Finished getting asset info ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Getting OS SKU info ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Unable to determine Windows SKU while in Windows PE. ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Determining the Disk and Partition Number from the Logical Drive X:\windows ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property OriginalArchitecture is now = ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Getting virtualization info ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
FindFile: The file x86\Microsoft.BDD.Utility.dll could not be found in any standard locations. ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FindFile(...\Microsoft.BDD.Utility.dll) Result : 1 ZTIGather 11/5/2014
10:29:13 AM 0 (0x0000)
RUN: regsvr32.exe /s "" ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FindFile: The file x64\Microsoft.BDD.Utility.dll could not be found in any standard locations. ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FindFile(...\Microsoft.BDD.Utility.dll) Result : 1 ZTIGather 11/5/2014
10:29:13 AM 0 (0x0000)
RUN: regsvr32.exe /s "" ZTIGather 11/5/2014 10:29:13 AM
0 (0x0000)
FAILURE (Err): 429: CreateObject(Microsoft.BDD.Utility) - ActiveX component can't create object ZTIGather
11/5/2014 10:29:13 AM 0 (0x0000)
FAILURE (Err): 424: GetVirtualizationInfo for Gather process - Object required ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Connection succeeded to MicrosoftVolumeEncryption ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
There are no encrypted drives ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Property IsBDE is now = False ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Processing the phase. ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Determining the INI file to use. ZTIGather 11/5/2014 10:29:13 AM 0 (0x0000)
Finished determining the INI file to use. ZTIGather 11/5/2014 10:29:13
AM 0 (0x0000) -
CC clean up tool saying some files did not get cleaned close browser and run again
When I run the clean-up tool, it says some files did not get removed and to try closing the browser and running again, but I did not have a browser open. Also, CC will not start: the progress wheel spins but it never connects.

Contact Adobe support by clicking here and, when available, click 'still need help': https://helpx.adobe.com/contact.html
-
Weblogic Access Logs not getting generated / updated only for Admin server
Hi All,
I have a query ,
We recently noticed that the weblogic access logs for our admin server are not getting generated.
However we checked that the access logs are getting generated for the managed servers that we have.
There is not much difference between the logging settings between the admin and the managed servers.
We thought that there might be some problem with the buffering and that the data might not be written to the files immediately.
So after researching we found the parameter "-Dweblogic.logging.bufferSizeKB=0" and added it to the Java options; however, it did not make any difference.
We also tried modifying the config as:
<server>
<web-server>
<web-server-log>
<buffer-size-kb>0</buffer-size-kb>
</web-server-log>
</web-server>
</server>
However, no luck.
We are using WebLogic 9.2 MP3 and think there might be a bug in this version; however, it's hard to believe that the logs are generated and updated for the managed servers but not for the admin server.
The only thing we notice in the access logs of the admin server is 404 errors.
Any suggestions ?
Regards,
Stacey.

This has come up here recently:
access log not writing to disk in a timely fashion
I didn't find that buffer-size-kb capability in the 9.2 docs. I recommend checking with support. -
Move Transactions are not getting Costed
Hi,
We have defined an organization which uses Standard Costing.
When a 'MOVE' is performed through the 'MOVE TRANSACTIONS' screen, the move operation should be costed based on the resources allocated to that standard operation, which is not happening. The resource 'CHARGE TYPE' (in the BOM RESOURCES screen) is set to 'PO Receipt', 'COSTED' is enabled, and 'ACTIVITY' is set to 'Move'.
After performing move transactions, when we query the 'View Material Transactions' screen (Reason, Reference tab), the 'COSTED' flag shows 'NO', even after invoking the required managers.
We are using ver.11.5.5 (DMF-G).
I hope somebody responds to this!!
Thanks in Advance,
Kiran

Hi Bala,
Thanks for responding!! The cost manager is active, and in fact all our resource transactions are based on OSP items. None of the transactions are getting costed. We have migrated from 10.7sc (FES - Factory Execution System) to 11.5.5 (OSFM), and the problem exists only in the migrated instance (it's fine with 'Vision'). I found that the data in WIP_COST_TXN_INTERFACE is not getting processed and flushed; all the records in that table remain as they are, with process_status '1'. The cost manager is up and active. Any suggestions or views from you would be appreciated.
Thanks!
Kiran -
Table space not getting cleaned after using free method (permanent delete)
Hi ,
We are using the free method of LIB OBJ to permanently delete objects. Per the documentation, the ContentGarbageCollectionAgent, which runs on a schedule, should clean up the database. But that agent's log shows all zeros: objects without reference, objects cleared, etc. That is, the tablespace usage remains the same before and after deleting all the contents in the CMSDK database. The agent runs on schedule but exits without doing anything.
Can anybody shed some light on this issue?
thanks
Raj.

Hi Matt,
Thanks for replying. It's been a very long time waiting for you ;)
---"Are you running the 9.2.0.1, 9.2.0.2, or 9.2.0.3 version of the Database?"
we are using 9.2.0.1
---"If you installed the CM SDK schema in the "users" tablespace ......."
Yes we are using USERS tablespace for our Development.
I ran the query. The result is:
SYSTEM MANUAL NOT AFFECTED
USERS MANUAL NOT AFFECTED
CTXSYS_DATA MANUAL NOT AFFECTED
CMSDK1_DATA MANUAL NOT AFFECTED
(USERS belongs to the development CMSDK schema, and CMSDK1 to the production CMSDK schema.)
From the results I see only "MANUAL", but I still don't see the tablespace size coming down. Both tablespaces (USERS and CMSDK1) just keep growing.
Also, to let you know, we use the Oracle EM Console (standalone) application to view database information online. Could the tool we use to view the tablespace sizes have anything to do with it? We make sure we always refresh before taking a note.
So is there anything else I can check? Once I saw the ContentGarbageCollection agent free 1025 objects and delete 0 objects, but I didn't see any change in the tablespace size. I am a little confused between 'freed' and 'deleted'.
Thanks once again for your response, Matt.
-Raj. -
[IDS CS3 WIN] .tmp and .idlk files not getting cleaned up
I'm using a C# SOAP client with CS3 Server on Windows. My script opens a template file, imports a tagged file, then saves an ID document and generates a PDF file. The script closes the document before quitting, but I'm seeing a trail of .idlk and .tmp files left behind on the server.
Example: I'm starting with the following files... importTemplate.indd and importTest.txt. The script opens importTemplate.indd, imports importTest.txt and creates importTest.indd and importTest.pdf, then closes the document.
For testing purposes, these files are all located in the same location. What I see in the directory after the script has run (successfully) now includes the following:
tes493.tmp
~importtemplate~uvw2ve.idlk
~importtest~borbth.idlk
~importtest~myhstu.idlk
I am closing the document with the following:
app.documents.item(0).close(SaveOptions.no);
I could understand if the document wasn't getting closed that I would see one .idlk file, but why 3?
Has anyone else encountered this problem?