BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED
Hello,
To generate the log report, the /VIRSA/ZVFATBAK program is scheduled on an hourly basis, but sometimes the report does not get generated, even though the background job shows as successfully finished.
If we manually check the log report for the FFID, the error message below is displayed.
" BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED"
Can anyone guide me on how to solve this issue?
Thanks in advance.
Best Regards,
Prashant Dubey
Hi,
First, check the status of the job by selecting it and choosing Job Status (Ctrl+Shift+F12).
Since it is a periodically scheduled job, there will be a RELEASED job after every active job.
Try to copy it into another job using the copy option, and give it a new name that you will remember.
The moment you copy it, you will find the copied job in SCHEDULED status.
From there, try to run it again on an hourly basis.
After copying the job, deschedule the old RELEASED job; otherwise two will run at the same time.
Regards,
Similar Messages
-
GRC FF 5.1 BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED
I've had FF installed in prod for 4 days, and I just now got around to running the /VIRSA/ZVFATBAK job. However, it only generates the logs for the last 24 hours. I know that scheduling the batch run hourly will resolve this, but I wanted to run the /VIRSA/ZVFATBAK job manually the first time because I knew it would take longer than an hour. Is there a variant I can use in SE38 to get these earlier logs generated?
Thanks for your help,
Dave Wood
You can create your variants in SE38; they do not come with the program. For spools, you can get up to 3 days of logs, while for the work logs it will be a week at most. But I think you can change the system parameters in such a way that you can retrieve the job logs for more than a week.
Thanks
Sudhan Shan -
Could not open log file in Win2000 sp3 Message in console
I have configured the log file for iAS, then restarted my application server. It is giving the error message "Could not open log file 'logs/logVal.10817'" in the KAS window, but the log files are there (created by the server itself) in the logs folder.
I have configured logs for two systems.
One system noted all messages in the file.
The second system did not note any messages.
But the files exist in the logs folder on both systems.
I need to configure logs for iAS, KJS, and KXS as well.
Please advise me on this.
thanks
Sudheer
Hi,
I'm not sure what operation you are trying to perform; can you please confirm that? Also, please check what kind of messages you tried to log: only errors, errors and warnings, or all messages? If it was only errors and warnings, there is a possibility that the server did not encounter any of these, which is why the log file can be empty.
Regards
Raj -
Cannot publish get error message - log file not being created
When trying to publish a FlashHelp project, I get an error message window that says "Publishing has been cancelled. Failed to create file: (project name).log".
When I click OK in the message window, the publishing process stops. However, if I look in the SSL folder, I see the log file. It is a text file.
I had this problem in January 09, but it seemed to be an issue with the password and path in the FTP command window. I fixed it and it worked fine. However, I haven't published since the end of January. Now, when I try to publish, it is giving me the same error message. I checked and reviewed the FTP window fields and they are fine. But I'm still getting the error message and can't publish. Why?
I need to get this problem fixed ASAP and ensure that it doesn't occur again. What's strange is that I've got 3 other projects and this is the only one that gets this error message.
Yes, the generation worked. I checked the log file from the time it worked before, and it seems to be the same as the log file that is generated when I get the error message.
I created a new FlashHelp layout and got the same error message. What's really weird is that there is a log file in the SSL folder, but when you click OK in the error message, it stops the publish function.
Last time I had to blow away the CPD file as if this were a corrupt project. But that gets to be painful, as I use templates to put change dates in the footers of topics, and templates get lost when you blow away the CPD.
Any other thoughts? -
SQL Server 2012 Reorg Index Job Blew up the Log File
We have a maintenance plan that nightly (1) runs DBCC CHECKDB on all databases, (2) reorganizes indexes on all databases, compacting large objects, and (3) updates statistics, etc. There are three user databases: one large, one medium, one small. Usually it uses a little more than 80% of the medium database's log, which is set to 6,700 MB. Last night the reorg index step caused the log to grow to almost 14,000 MB and then blew up: because the maximum file size was set to 14,000 MB, one of the ALTER INDEX commands failed when it ran out of log space. (The DBCC CHECKDB step ran successfully.) Does anyone have any idea what might cause this? There is one update process on this database; it runs at 3 AM. The maintenance plan runs at 9 PM and completes by 1 AM. The medium database has a 21,000 MB data file, and reserved space is at about 10 GB. This is SQL 2012 Standard SP2 running on Windows 2012 Server Standard.
I personally like to shrink the log files once the indexes have been rebuilt and before switching back to full recovery, because as I'm going to take a full backup afterwards, having a small log file reduces the size of the backup.
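For what it's worth, that pattern would look roughly like the following. This is a minimal sketch with hypothetical database and file names (MediumDb, MediumDb_log) and it assumes the maintenance runs under BULK_LOGGED recovery, as the post implies; it is not a recommendation either way, given the debate below:
ALTER DATABASE [MediumDb] SET RECOVERY BULK_LOGGED;   -- before index maintenance
-- ... index rebuilds run here ...
USE [MediumDb];
DBCC SHRINKFILE (N'MediumDb_log', 1024);              -- target size in MB (~1 GB)
ALTER DATABASE [MediumDb] SET RECOVERY FULL;          -- switch back to full recovery
BACKUP DATABASE [MediumDb] TO DISK = N'D:\Backups\MediumDb.bak';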
Do you grow them afterwards, or do you let the application waste time on that during peak hours?
I have not checked, but I see no reason why the backup size would depend on the size of the log file - it's the data in the data file you back up, not the log file.
I would say this is highly dubious.
Erland Sommarskog, SQL Server MVP, [email protected]
Yeah, I let the application allegedly "waste" a few milliseconds a day autogrowing the log file. Come on, how long do you think it takes for a log file to grow a few GB on most storage systems nowadays? As long as you set an appropriate autogrow increment so your log file doesn't get too fragmented (full of VLFs), you'll be perfectly fine in most situations.
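As a concrete sketch of that advice (the file name MediumDb_log is hypothetical; query sys.database_files for yours), a fixed autogrowth increment can be set like this:
ALTER DATABASE [MediumDb]
MODIFY FILE (NAME = N'MediumDb_log', FILEGROWTH = 512MB);  -- fixed increment, avoids tiny VLF-heavy growths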
Let's say you have a logical disk dedicated to log file storage, but it is shared across multiple databases within the instance. Keeping space pre-allocated for the log files means there will not be much free space left on the disk in case ANY database needs more space than the others due to a peak in transactional workload, even though other databases have unused space that could have been used.
What if this same disk, for some reason, is also used to store the tempdb log file? Then all applications will become unstable.
These are the main reasons I don't recommend that people blindly crucify the practice of keeping log files small when possible. I know there are many people who disagree, and I'm aware of their reasons. Maybe we have just had different experiences with this subject. Maybe people just haven't been through the nightmare of having a corrupted system database or a crashed instance because of insufficient log space in the middle of the day.
And you are right about the size of the backup; I didn't put it correctly. It isn't the size of the backup that gets smaller (although the backup operation will run faster, having tested this myself); rather, the benefit of backing up a database with a small log file is that you won't need the extra space to restore it in a different environment such as a BI or DEV server, where recoverability doesn't matter and the database will be in simple recovery mode.
Restoring the database will also be faster.
Just because there are clouds in the sky, it doesn't mean it isn't blue. But someone will come and argue that, with clouds, birds, airplanes, pollution, sunsets, daltonism, and nuclear bombs all adding different colours to the sky, this is undocumented behavior and should not be relied upon. -
Job number from alert log file to information
Hello!
I have a question about job numbers in the alert log file. Today one of our Oracle 10g R2 [10.2.0.4] RAC nodes crashed. After examining the alert log file for one of the nodes, I saw a lot of messages like:
Tue Jul 26 11:52:43 2011
Errors in file /u01/app/oracle/admin/zeme/bdump/zeme2_j002_28952.trc:
ORA-12012: error on auto execute of job *20627358*
ORA-12705: Cannot access NLS data files or invalid environment specified
Tue Jul 26 11:52:43 2011
Errors in file /u01/app/oracle/admin/zeme/bdump/zeme2_j001_11018.trc:
ORA-12012: error on auto execute of job *20627357*
ORA-12705: Cannot access NLS data files or invalid environment specified
Tue Jul 26 11:52:43 2011
Errors in file /u01/app/oracle/admin/zeme/bdump/zeme2_j000_9684.trc:
ORA-12012: error on auto execute of job *20627342*
ORA-12705: Cannot access NLS data files or invalid environment specified
After examining the .trc files, I found no further information about the error except session IDs.
My question is: how do I find which job caused these messages to appear in the alert log file?
How do I map the number in the alert log file to some "real" information (owner, statement executed, schedule)?
Marx.
Sorry for the delay.
Try this to find the job:
select job, what from dba_jobs;
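To map the specific numbers from the alert log above to their owner, statement, and schedule, a query along these lines should work (the job numbers are the ones from the ORA-12012 messages; dba_jobs is the standard DBMS_JOB dictionary view):
select job, log_user, schema_user, what, interval, last_date, next_date
from dba_jobs
where job in (20627358, 20627357, 20627342);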
How do I find the NLS_LANG version?
SQL> show parameter NLS_LANG
Do you mean ALTER SESSION inside a job?
I meant anywhere, but your question is better.
ORA-12705 - Common Reasons and How to Resolve Them [ID 158654.1]
If OS is Windows lookout for NLS_LANG=NA in the registry
Is it possible you are doing this somewhere ?
ALTER SESSION SET NLS_DATE_FORMAT = 'RRRR-MM-DD\"T\"HH24:MI:SS';
NLS database settings are superseded by NLS instance settings.
SELECT * from NLS_SESSION_PARAMETERS;
These are the settings used for the current SQL session.
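To compare all three precedence levels at once, you can query the corresponding data dictionary views (standard Oracle views; session settings override instance settings, which override database defaults):
SELECT * FROM nls_session_parameters;   -- session level (highest precedence)
SELECT * FROM nls_instance_parameters;  -- instance level
SELECT * FROM nls_database_parameters;  -- database defaults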
NLS_LANG could be set in a profile for example.
NLS_LANG=_AMERICA.WE8ISO8859P1 ( correct )
NLS_LANG=AMERICA.WE8ISO8859P1 ( Incorrect )
You need to keep the "_" as the separator.
Windows
set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
Unix
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
mseberg
Edited by: mseberg on Jul 28, 2011 3:51 PM
Edited by: mseberg on Jul 29, 2011 4:05 AM -
Error in batch processing: background job cannot be scheduled
I get the error "Error in batch processing: background job cannot be scheduled" in transaction code JPMG0. How can this be resolved?
Hello,
Check the system log and also the work process trace to identify the error.
I suspect it could be an authorization issue: you may not have sufficient authorization.
Hope this helps.
Regards
venkata
Edited by: venkata emandi on Sep 12, 2011 8:13 AM -
Could not load plugins: File not found. Any solution to that?
I am using a 2009 MacBook Pro, recently upgraded to Mavericks. Since the upgrade I have been unable to open video clips from several sites, with the message: "Could not load plugins: File not found". I have made sure I have the latest Adobe Flash Player and Java, but the problem is still there. Does anyone experience the same? Any solution, please?
If you use extensions (Firefox/Tools > Add-ons > Extensions) that can block content (e.g. Adblock Plus, NoScript, Flash Block, Ghostery) then make sure that such extensions aren't blocking content.
Start Firefox in Safe Mode to check if one of the extensions (Firefox/Tools > Add-ons > Extensions) or hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance).
*Do NOT click the Reset button on the Safe Mode start window.
*https://support.mozilla.org/kb/Safe+Mode
*https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes -
Background Job cancelling with error Data does not match the job definition
Dear Team,
The background job is getting cancelled when I run it periodically, but the same job executes perfectly when I run it manually (repeat scheduling).
Let me describe the problem clearly.
We have a program which picks up files from an FTP server and posts the documents into SAP. We are scheduling this program as a daily background job. This job runs perfectly if the files contain no data. But if a file contains data, the job gets cancelled with the following messages.
Also, the same job executes perfectly when repeat scheduling is done (even for files with data).
Time: 03:46:08 | Message text: Job PREPAID_OCT_APPS2_11: Data does not match the job definition; job terminated | Message class: BD | Message no.: 078 | Message type: E
Time: 03:46:08 | Message text: Job cancelled after system exception ERROR_MESSAGE | Message class: 00 | Message no.: 564 | Message type: A
Please help me in resolving this issue.
Thanks in advance,
Sai
Hi,
If the job's program uses any GUI function modules, you cannot run it in background mode. -
Log shipping is not restoring log files at a particular time
Hi,
I have configured log shipping, and it restores all the log files up to a particular point, after which it throws errors and is not in a consistent state. I tried deleting and reconfiguring log shipping a couple of times, but with no success. Can anyone tell me how to prevent this? I have already configured log shipping from another server to the same destination server, and it has been working fine for more than a year. Only the new configuration is throwing errors in the restore job.
Thanks,
Preetha
Message
2014-07-21 14:00:21.62 *** Error: The log backup file 'E:\Program Files\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\Qcforecasting_log\Qcforecasting_20140721034526.trn' was verified but could not be applied to secondary database 'Qcforecasting'.(Microsoft.SqlServer.Management.LogShipping)
2014-07-21 14:00:21.62 Deleting old log backup files. Primary Database: 'Qcforecasting'
2014-07-21 14:00:21.62 The restore operation completed with errors. Secondary ID: '46b20de0-0ccf-4411-b810-2bd82200ead8'
2014-07-21 14:00:21.63 ----- END OF TRANSACTION LOG RESTORE -----
The same file was tried three times, and it threw the error all three times.
But when I manually restored the Qcforecasting_20140721034526 transaction log, it worked. Not sure why this is happening. After the manual restoration it worked fine for one run. Now waiting for the other to complete.
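For reference, the manual restore described above would look roughly like this (a sketch using the database name and backup path from the log excerpt; WITH NORECOVERY keeps the secondary ready to accept further log restores):
RESTORE LOG [Qcforecasting]
FROM DISK = N'E:\Program Files\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\Qcforecasting_log\Qcforecasting_20140721034526.trn'
WITH NORECOVERY;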
This seems strange to me. The error points to the fact that the backup was verified but could not be applied, perhaps because the restore process found that the log backup was not in the correct sequence, or that another log backup, taken accidentally, should be applied instead.
But then you said you can apply it manually, so this must be related to permissions. If it can be restored manually, the job should be able to do it, unless the agent account has a permission issue.
Of course, more logs would help.
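One place to dig for those logs is the restore history that SQL Server records on the secondary; a minimal sketch:
-- Recent restore activity recorded in msdb on this instance.
SELECT TOP (10) destination_database_name, restore_date, restore_type
FROM msdb.dbo.restorehistory
ORDER BY restore_date DESC;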
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
My TechNet Wiki Articles -
Archived log file not displaying
While navigating around the "home" page for OCS as an administrator...I was trying to run a report under Reports>Conferences>Diagnostics.
The link says:
Click the link below to view comprehensive conference diagnostics. To see the log file correctly, use Internet Explorer 6.0 or higher.
I am using IE 6 and the page shows up as being done...but it is blank. Any idea what is wrong? The URL reads:
https://mywebserver/imtapp/logs/imtLogs.jsp?fileName=D:/ocs_onebox/mtier/imeeting/logs/sessions/12.20.2004/10000-clbsvr_OCS_home_mid.mywebserver.imt-collab.0-06_34_01.xml
The file is there on the filesystem.
TIA.
Stages means Transformations in the data flow.
Transformation names are not displayed correctly in the log file.
For example, if I give the name "TC_table_name" to a Table Compare transformation, only "Table Comparison" is displayed in the log file. -
Log files not being removed.
Hello,
I've upgraded an application from Berkeley DB 5.1.25 to 5.3.21, and since then log files are no longer automatically removed. This is the only change in the application. It's an application written in C.
The environment of the application is created with the flag DB_LOG_AUTO_REMOVE
dbenv->log_set_config(dbenv, DB_LOG_AUTO_REMOVE, TRUE).
The application has a thread to periodically checkpoint the data
dbenv->txn_checkpoint(dbenv, 0, 0, 0)
So far, so good: with version 5.1.25, this was enough to remove unused log files (I don't need to be able to do catastrophic recovery). But this no longer works with version 5.3.21.
If I run db_archive (no options), it shows nothing, suggesting that all log files are still needed. But if I run db_hot_backup on the database, all but the last log files are removed (in the backup), as wanted.
Note: usually I don't want to run db_archive or any external tool to remove unused log files; I hope what is inside the application is enough to remove them.
Is this something known, did something change, or can you suggest something to look for?
Thanks for your help
José-Marcio
Edited by: user564597 on Mar 24, 2013 6:35 PM
Edited by: user564597 on Mar 24, 2013 6:38 PM
Edited by: user564597 on Mar 25, 2013 8:57 AM
Thank you for giving us a test program. This helped tremendously in fully understanding what you are doing. In 5.3 we fixed a bug dealing with the way log files are archived in an HA environment. What you are running into is the consequence of that bug fix. In the test program you are using DB_INIT_REP. This is the key flag saying that you want an HA environment. With HA, there is a master and some number of read-only clients. By default we treat the initiating database as the master. This is what is happening in your case. In an HA (replicated) environment, we cannot archive log files until we can be assured that the clients have applied the contents of those log files. Our belief is that you are not really running in an HA environment and you do not need the DB_INIT_REP flag. In our initial testing where we said it worked for us, this was because we did not use the DB_INIT_REP flag, as there was no mention of replication being needed in the post.
Recommendation: please remove the use of the DB_INIT_REP flag, or properly set up an HA environment (details in our docs).
thanks
mike -
Empty/underutilized log files not removed
I have an application that runs the cleaner and the checkpointer explicitly (instead of relying on the database to do it).
Here are the relevant environment settings: je.env.runCheckpointer=false, je.env.runCleaner=false, je.cleaner.minUtilization=5, je.cleaner.expunge=true.
When running the application, I noticed that the first few dozen log files were removed, but later (even though the cleaner was executed at regular intervals) no more log files were removed.
I have run the DbSpace utility on the environment and found the following result:
File Size (KB) % Used
00000033 97656 0
00000034 97655 0
00000035 97656 0
00000036 97656 0
00000037 97656 0
00000038 97655 2
00000039 97656 0
0000003a 97656 0
0000003b 97655 0
0000003c 97655 0
0000003d 97655 0
0000003e 97655 0
0000003f 97656 0
00000040 97655 0
00000041 97656 0
00000042 97656 0
00000043 97656 0
00000044 97655 0
00000045 97655 0
00000046 97656 0
This goes on for a long time. I had the database tracing enabled at CONFIG level. Here are the last lines of the log just before the last log file (0x32) is removed:
2009-05-06 08:41:51:111:CDT INFO CleanerRun 49 on file 0x30 begins backlog=2
2009-05-06 08:41:52:181:CDT SEVERE CleanerRun 49 on file 0x30 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206347 nINsObsolete=6365 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199971 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:41:52:182:CDT INFO CleanerRun 50 on file 0x31 begins backlog=1
2009-05-06 08:41:53:223:CDT SEVERE CleanerRun 50 on file 0x31 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205475 nINsObsolete=6319 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199144 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:41:53:224:CDT INFO CleanerRun 51 on file 0x32 begins backlog=0
2009-05-06 08:41:54:292:CDT SEVERE CleanerRun 51 on file 0x32 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205197 nINsObsolete=6292 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198893 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:42:24:300:CDT INFO CleanerRun 52 on file 0x33 begins backlog=1
2009-05-06 08:42:24:546:CDT CONFIG Checkpoint 963: source=api success=true nFullINFlushThisRun=13 nDeltaINFlushThisRun=0
2009-05-06 08:42:24:931:CDT SEVERE Cleaner deleted file 0x32
2009-05-06 08:42:24:938:CDT SEVERE Cleaner deleted file 0x31
2009-05-06 08:42:24:946:CDT SEVERE Cleaner deleted file 0x30
Here are a few log lines right after the last log message with cleaner deletion (until the next checkpoint):
2009-05-06 08:42:25:339:CDT SEVERE CleanerRun 52 on file 0x33 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204164 nINsObsolete=6277 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197865 nLNsCleaned=11 nLNsDead=0 nLNsMigrated=0 nLNsMarked=11 nLNQueueHits=9 nLNsLocked=0
2009-05-06 08:42:25:340:CDT INFO CleanerRun 53 on file 0x34 begins backlog=0
2009-05-06 08:42:26:284:CDT SEVERE CleanerRun 53 on file 0x34 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=203386 nINsObsolete=6281 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197091 nLNsCleaned=2 nLNsDead=2 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:42:56:290:CDT INFO CleanerRun 54 on file 0x35 begins backlog=4
2009-05-06 08:42:57:252:CDT SEVERE CleanerRun 54 on file 0x35 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205497 nINsObsolete=6312 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199164 nLNsCleaned=10 nLNsDead=3 nLNsMigrated=0 nLNsMarked=7 nLNQueueHits=6 nLNsLocked=0
2009-05-06 08:42:57:253:CDT INFO CleanerRun 55 on file 0x39 begins backlog=4
2009-05-06 08:42:58:097:CDT SEVERE CleanerRun 55 on file 0x39 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204553 nINsObsolete=6301 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198238 nLNsCleaned=2 nLNsDead=0 nLNsMigrated=0 nLNsMarked=2 nLNQueueHits=1 nLNsLocked=0
2009-05-06 08:42:58:098:CDT INFO CleanerRun 56 on file 0x3a begins backlog=3
2009-05-06 08:42:59:261:CDT SEVERE CleanerRun 56 on file 0x3a invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204867 nINsObsolete=6270 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198586 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:42:59:262:CDT INFO CleanerRun 57 on file 0x36 begins backlog=2
2009-05-06 08:43:02:185:CDT SEVERE CleanerRun 57 on file 0x36 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206158 nINsObsolete=6359 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199786 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:43:02:186:CDT INFO CleanerRun 58 on file 0x37 begins backlog=2
2009-05-06 08:43:03:243:CDT SEVERE CleanerRun 58 on file 0x37 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206160 nINsObsolete=6331 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199817 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:43:03:244:CDT INFO CleanerRun 59 on file 0x3b begins backlog=1
2009-05-06 08:43:04:000:CDT SEVERE CleanerRun 59 on file 0x3b invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206576 nINsObsolete=6385 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200179 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:43:04:001:CDT INFO CleanerRun 60 on file 0x38 begins backlog=0
2009-05-06 08:43:08:180:CDT SEVERE CleanerRun 60 on file 0x38 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205460 nINsObsolete=6324 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=194125 nLNsCleaned=4999 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=4999
2009-05-06 08:43:08:224:CDT INFO CleanerRun 61 on file 0x3c begins backlog=0
2009-05-06 08:43:09:099:CDT SEVERE CleanerRun 61 on file 0x3c invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206589 nINsObsolete=6343 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200235 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:43:24:548:CDT CONFIG Checkpoint 964: source=api success=true nFullINFlushThisRun=12 nDeltaINFlushThisRun=0
I could not see anything fundamentally different between the log messages from when log files were removed and when they were not. The DbSpace utility confirmed that there are plenty of log files under the minimum utilization, so I can't quite explain why the log file removal stopped all of a sudden.
Any help would be appreciated (JE version: 3.3.75).
Hi Bertold,
My first guess is that one or more transactions have accidentally not been ended (committed or aborted), or cursors not closed.
A clue is the nLNsLocked=4999 in the second set of trace messages. This means that 4999 records were locked by your application and were unable to be migrated by the cleaner. The cleaner will wait until these record locks are released before deleting any log files. Record locks are held by transactions and cursors.
If this doesn't ring a bell and you need to look further, one thing you can do is print the EnvironmentStats periodically (System.out.println(Environment.getStats(null))). Take a look at the nPendingLNsProcessed and nPendingLNsLocked. The former is the number of records the cleaner attempts to migrate because they were locked earlier. The latter is the number that are still locked and cannot be migrated.
--mark -
Background job in Released status and does not run. What is the problem?
Hi Experts,
I ran an ABAP report as a background job. But when I checked the job status in SM37, I found that it is "Released", and the job does not run further; it remains in the "Released" state. I have given the correct variant and the start condition "immediate". Moreover, there is no spool and no log either.
The ABAP report runs fine in the foreground. It used to run in the background a few days ago.
What could be the problem? Kindly help!
Thanks
Gopal
Hi Bob,
I checked after 2 days, and the job status is still in the Released state. There were some other jobs that ran successfully, so after two days I would have expected the job to run. I can't even delete it. What shall I do?
Thanks
Gopal -
[SOLVED] dvd drive not working, log file filled with error.
hi,
i dont know if this is something to do with the latest kernel update or something else that may have occured.
but my dvd writer has stopped working. no light on the front, pressing the button does not open the tray.
it wtill works fine when i boot into windows (dual boot with XP) so it's not a hardware issue.
also my everything.log file is filling up with the following every two seconds!
Mar 1 19:27:56 adax hda: status error: status=0x59 { DriveReady SeekComplete DataRequest Error }
Mar 1 19:27:56 adax hda: status error: error=0x00 { }
Mar 1 19:27:56 adax ide: failed opcode was: unknown
Mar 1 19:27:56 adax hda: drive not ready for command
Mar 1 19:27:56 adax hda: status error: status=0x59 { DriveReady SeekComplete DataRequest Error }
Mar 1 19:27:56 adax hda: status error: error=0x00 { }
Mar 1 19:27:56 adax ide: failed opcode was: unknown
Mar 1 19:27:56 adax hda: drive not ready for command
Mar 1 19:27:56 adax hda: status error: status=0x59 { DriveReady SeekComplete DataRequest Error }
Mar 1 19:27:56 adax hda: status error: error=0x00 { }
Mar 1 19:27:56 adax ide: failed opcode was: unknown
Mar 1 19:27:56 adax hda: drive not ready for command
Mar 1 19:27:56 adax hda: status error: status=0x59 { DriveReady SeekComplete DataRequest Error }
Mar 1 19:27:56 adax hda: status error: error=0x00 { }
Mar 1 19:27:56 adax ide: failed opcode was: unknown
Mar 1 19:27:56 adax hda: drive not ready for command
Can someone please help me before I run out of disk space for my logs!
Thanks,
ad.
Last edited by adax (2008-03-01 23:25:37)
Hello,
I strongly suspect that you need to change 'ide' to 'pata' in your /etc/mkinitcpio.conf
#HOOKS="base udev autodetect ide scsi sata usb keymap filesystems"
HOOKS="base udev autodetect pata scsi sata usb keymap filesystems"
Then recreate with:
mkinitcpio -g /boot/kernel26.img
Last edited by colnago (2008-03-01 21:21:37)