Question regarding a 95 GB config/log file: LabView_32_11.0_Lab.Admin_cur.txt

Hi Everyone,
One of our lab computers running LabVIEW was reported to be running out of storage, and I was asked to figure out why. I rifled through some Windows folders to find the culprit, specifically the folder C:\Users\Lab.Admin\AppData\Local\Temp, where I found a 95 GB file named LabView_32_11.0_Lab.Admin_cur.txt. I did note that Lab.Admin is the user name and is also included in the filename, so I'm assuming this is some sort of config/log file for the current user.
The file was too large for me to open and look at with any program I had available, so I just renamed it, restarted LabVIEW to verify that the file would be recreated, and then deleted the bloated one. The newly created file has the following inside it:
#Date: Wed, Jun 13, 2012 2:49:00 PM
#OSName: Windows 7 Professional
#OSVers: 6.1
#OSBuild: 7600
#AppName: LabVIEW
#Version: 11.0 32-bit
#AppKind: FDS
#AppModDate: 06/22/2011 18:12 GMT
#LabVIEW Base Address: 0x00400000
Can anyone tell me the purpose of this file and what might have caused it to grow to 95 GB? I'm just interested in learning how to prevent this from happening again.
Cheers,
Alex
Alexander H. | Software Developer | CLAD

Yes it is, or rather was, a 95 GB text file.
I suspect you are correct that it is a crash dump/error log file. That makes sense, as this computer has been running a test station for the past year that has been reported as less than stable. I'll keep an eye on that file over the next few days to see whether anything is added to it while the station is running.
Thanks for the suggestions,
Alex
Alexander H. | Software Developer | CLAD

Similar Messages

  • Question about how Oracle manages Redo Log Files

    Good morning,
    Assume a configuration that consists of 2 redo log groups (Group A and Group B), each group consisting of 2 members (disks A1 and A2 for Group A, disks B1 and B2 for Group B). Further, assume that each redo log file resides by itself on a dedicated disk storage device. In the above scenario, therefore, there are 4 disks, one for each redo log file, and each disk contains nothing other than a redo log file. Finally, assume that the database is in ARCHIVELOG mode and that the archive files are stored on yet another, different set of devices.
    sort of graphically:
        GROUP A             GROUP B
          A1                  B1
          A2                  B2
    The question is: When the disks that comprise Group A are filled and Oracle switches to the disks in Group B, can the disks in Group A be taken offline, maybe even physically removed from the system if necessary, without affecting the proper operation of the database? Can the archiver process be temporarily delayed until the disks (that were removed) are brought back online, or is the DBA forced to wait until the archiver process has finished creating a copy of the redo log file in the archive?
    Thank you for your help,
    John.

    Hello,
    Dropping Log Groups
    To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:
    * An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
    * You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.
    * Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

    GROUP# ARC STATUS
    ------ --- --------
         1 YES ACTIVE
         2 NO  CURRENT
         3 YES INACTIVE
         4 YES INACTIVE
    Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
    The following statement drops redo log group number 3:
    ALTER DATABASE DROP LOGFILE GROUP 3;
    When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
    When using Oracle Managed Files, the cleanup of the operating system files is done automatically for you.
    Your database won't be affected as long as it can operate with two redo log groups: the minimum number of redo log groups required in a database is two, because the LGWR (log writer) process writes to the redo log files in a circular manner. Since you have only two groups, dropping one would leave the instance unable to continue. If you want to take one group offline, first add a third group, force a log switch so the new group becomes current, and then drop the one you want to remove.
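    As a rough illustration of that workflow (the group number, file path, and size below are assumptions for the sketch, not values from your system):
    -- Add a third group so the instance always has two usable groups
    ALTER DATABASE ADD LOGFILE GROUP 3 ('/u01/oradata/db/redo03.log') SIZE 50M;
    -- Force a switch so the group you want to drop is no longer CURRENT
    ALTER SYSTEM SWITCH LOGFILE;
    -- A checkpoint lets the old group move from ACTIVE to INACTIVE
    ALTER SYSTEM CHECKPOINT;
    -- Confirm it is INACTIVE (and ARCHIVED, if archiving) before dropping
    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
    ALTER DATABASE DROP LOGFILE GROUP 2;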
    Please refer to:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/onlineredo.htm#7438
    Kind regards
    Mohamed
    Oracle DBA

  • Can I configure the log file extension?

    Hi,
    How can I configure the log file name to use a format like log.4c01c0f3?
    And how can I configure an environment that has no files like __db.001 ... __db.005?
    I have an environment, not created by me, that does not have those __db.001 ... __db.005 files;
    how do I configure an environment like that?
    Thanks a lot.

    Hello,
    What platform/version are you on?
    The logging subsystem and related methods are documented at:
    http://www.oracle.com/technology/documentation/berkeley-db/db/api_reference/C/lsn.html
    I do not know of a method to change the log file name. Perhaps someone else might.
    As for the __db.00X environment region files, these represent shared memory
    regions which by default are created as files in the environment's home
    directory. The region files can be configured to reside in memory, which is perhaps
    why you are not seeing them. The following documentation provides additional
    details:
    http://www.oracle.com/technology/documentation/berkeley-db/db/gsg_txn/C/enabletxn.html#environments
    Thanks,
    Sandra
    Edited by: Oracle, Sandra Whitman on Jun 1, 2010 8:35 AM

  • Question about full backup and Transaction Log file

    I have a query: will taking a full backup daily keep my log file from growing? After taking the full backup I still see some of the VLFs in status 2; they went away when I manually took a backup of the log file. I am a bit confused: should I perform both a transaction log backup and a full database backup daily to avoid such things in future? Also, until I run SHRINKFILE, the storage space on the server won't be reduced, right?

    Yes, a full backup does not clear the log file; only a log backup does. Once a log backup is taken, it marks the inactive VLFs in the log file as reusable (status 0).
    You should perform log backups per your business SLA for data loss.
    Go ahead and ask this of yourself:
    If a disaster strikes and your database server is lost and your only option is to restore it from backup,
    how much data loss can your business handle?
    The answer to that question is how frequent your log backups should be.
    If the answer is 10 mins, you should have log backups every 10 mins at least.
    If the answer is 30 mins, you should have log backups every 30 mins at least.
    If the answer is 90 mins, you should have log backups every 90 mins at least.
    So, when you restore, you will restore the latest full backup + differential (the latest one taken after the restored full backup),
    and all the log backups taken since that latest restored full or differential backup.
    There are several resources on the web, including YouTube videos, that explain these concepts clearly; I advise you to look at them.
    To release file space to the OS, you should shrink the file. A log file shrink proceeds from the end of the file up to the point where it reaches an active VLF.
    If there are no inactive VLFs at the end, then no matter how many inactive VLFs the log file has at the beginning, the log file is not shrinkable.
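    A minimal T-SQL sketch of that routine (the database name, logical log file name, and backup paths are placeholders, not your actual names):
    -- Daily full backup
    BACKUP DATABASE YourDB TO DISK = N'D:\Backup\YourDB_full.bak';
    -- Frequent log backups, scheduled per your data-loss SLA;
    -- each one marks the VLFs it captures as inactive/reusable
    BACKUP LOG YourDB TO DISK = N'D:\Backup\YourDB_log.trn';
    -- Only if you must return space to the OS: shrink the log file
    -- (assumed logical file name YourDB_log; target size in MB)
    DBCC SHRINKFILE (YourDB_log, 1024);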
    Hope it Helps!!

  • Will RMAN delete archive log files on a Standby server?

    Environment:
    Oracle 11.2.0.3 EE on Solaris 10.5
    I am currently NOT using an RMAN repository (coming soon).
    I have a Primary database sending log files to a Standby.
    My Retention Policy is set to 'RECOVERY WINDOW OF 8 DAYS'.
    Question: Will RMAN delete the archive log files on the Standby server after they become obsolete based on the Retention Policy or do I need to remove them manually via O/S command?
    Does the fact that I'm NOT using an RMAN Repository at the moment make a difference?
    Couldn't find the answer in the docs.
    Thanks very much!!
    -gary

    Hello again Gary;
    Sorry for the delay.
    Why is what you suggested better?
    No, it's not better, but I prefer to manage the archives myself. This method works, period.
    Does that fact (running a backup every 4 hours) make my archivelog deletion policy irrelevant?
    No. The policy is important.
    Having the Primary set to:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
    but set to NONE on the Standby
    means the worst thing that can happen is RMAN will bark when you try to delete something. (This is a good thing.)
    How do I prevent the archive backup process from backing up an archive log file before it gets shipped to the standby?
    Should be a non-issue: the archive does not move; the REDO is transported and applied. There's SQL to monitor both (transport and apply).
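    For reference, a hedged sketch of the setup described above (the 8-day window simply mirrors your retention policy; adjust as needed):
    # On the Primary, in RMAN:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
    # On the Standby, in RMAN:
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
    # Manual cleanup on the standby once logs are applied and aged out:
    DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-8';
    And the SQL to monitor apply progress on the standby:
    SELECT THREAD#, MAX(SEQUENCE#) AS LAST_APPLIED
    FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES' GROUP BY THREAD#;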
    For Data Guard I would consider getting a copy of
    "Oracle Data Guard 11g Handbook" - Larry Carpenter (AKA Dr. Paranoid ) ISBN 978-0-07-162111-2
    Best Oracle book I've read in 10 years. Covers a ton of ground clearly.
    Also Data Guard forum here :
    Data Guard
    Best Regards
    mseberg
    Edited by: mseberg on Apr 10, 2012 4:39 PM

  • Need to understand when redo log file content is written to datafiles

    Hi all
    I have a question about when the content of the redo log files is written to the datafiles.
    Supposing that the database is in NOARCHIVELOG mode and all redo log files are filled, the official Oracle database documentation says: *a filled redo log file is available after the changes recorded in it have been written to the datafiles*, which would seem to mean that we just need all the redo log files to be filled in order to "commit" changes to the database.
    Thanks for the help.
    Edited by: rachid on Sep 26, 2012 5:05 PM

    rachid wrote:
    the official oracle database documentation says that: a filled redo log file is available after the changes recorded in it have been written to the datafiles
    It helps if you include a URL to the page where you found this quote (if you were using the online HTML manuals).
    The wording is poor and should be modified to something like:
    "a filled online redo log file is available for re-use after all the data blocks that have been changed by change vectors recorded in the log file have been written to the data files"
    Remember if a data block that is NOT an undo block has been changed by a transaction, then an UNDO block has been changed at the same time, and both change vectors will be in the redo log file. The redo log file cannot, therefore, be re-used until the data block and the associated UNDO block have been written to disc. The change to the data block can thus be rolled back (uncommitted changes can be written to data files) because the UNDO is also available on disc if needed.
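    If you want to watch this happen, a quick sketch using the standard views (nothing here is specific to your system):
    -- ACTIVE   = still needed for instance recovery (changed blocks not all checkpointed yet)
    -- INACTIVE = all covered changes written to the datafiles; the group can be reused
    SELECT GROUP#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG;
    -- Forcing a checkpoint writes the dirty blocks out and lets ACTIVE groups go INACTIVE
    ALTER SYSTEM CHECKPOINT;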
    If you find the manuals too fragmented to follow you may find that my book, Oracle Core, offers a narrative description that is easier to comprehend.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core

  • Corrupt log file, but how does db keep working?

    We recently had a fairly devastating outage involving a hard drive failure, but are a little mystified about the mechanics of what went on with berkeleydb which I hope someone here can clear up.
    A hard drive running a production instance failed because of a disk error, and we had to do a hard reboot to get the system to come back up and right itself (we are running RedHat Enterprise). We actually had three production environments running on that machine, and two came back just fine, but in one, we would get this during recovery:
    BDBStorage> Running recovery.
    BerkeleyDB> : Log file corrupt at LSN: [4906][8294478]
    BerkeleyDB> : PANIC: Invalid argument
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__os_stack+0x20) [0x2c23af2380]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__os_abort+0x15) [0x2c23aee9c9]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_panic+0xef) [0x2c23a796f9]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_attach_regions+0x788) [0x2c23aae82c]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_open+0x130) [0x2c23aad1e7]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(__env_open_pp+0x2e7) [0x2c23aad0af]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so [0x2c23949dc7]
    BerkeleyDB> : /usr/local/BerkeleyDB.4.8/lib/libdb_java-4.8.so(Java_com_sleepycat_db_internal_db_1javaJNI_DbEnv_1open+0xbc) [0x2c239526ea]
    BerkeleyDB> : [0x2a99596e77]
    We thought, well, perhaps this is related to the disk error, it corrupted a log file and then died. Luckily (or so we thought) we diligently do backups twice a day, and keep a week's worth around. These are made using the standard backup procedure described in the developer's guide, and whenever we've had to restore them, they have been just fine (we've been using our basic setup for something like 9 years now). However, as we retrieved backup after backup, going back three or four days, they all had similar errors, always starting with [4096]. Then we noticed an odd log file, numbered with 4096, which sat around in our logs directory ever since it was created. Eventually we found a good backup, but the customer lost several days' worth of work.
    My question here is, how could a log file be corrupted for days and days but not be noticed, say during a checkpoint (which we run every minute or so)? Doesn't a checkpoint itself basically scan the logs, and shouldn't that have hit the corrupt part not long after it was written? The system was running without incident, getting fairly heavy use, so it really mystifies me as to how that issue could be sitting around for days and days like that.
    For now all we can promise the customer is that we will automatically restore every backup as soon as it's made, and if something like this happens, we immediately try a graceful shutdown, and if that doesn't come back up, we automatically go back to the 12-hour-old backup. And perhaps we should be doing that anyway, but still, I would like to understand what happened here. Any ideas?

    Please note, I don't want to make it sound like I'm somehow blaming berkeleydb for the outage-- we realize in hindsight there were better things to do than go back to an old backup, but the customer wanted an immediate answer, even if it was suboptimal. I just feel like I am missing something major about how the system works.

  • Why does the archive log file size increase during a MERGE statement?

    My database is running in archive log mode.
    Someone is running an Oracle MERGE statement; it is still running.
    He will issue a commit after the operation.
    During that period the redo log files are growing.
    My question is: why is the size of the archive log files increasing along with the redo log files?
    I thought archive log files should be generated only after the commit (maybe that is wrong).
    Please suggest.
    Edited by: 855516 on Mar 13, 2012 11:18 AM

    855516 wrote:
    my database is running in archive log mode. someone is running an oracle merge statement; it is still running. he will issue a commit after the operation. during that period the redo log files are growing. my question is why the size of the archive log files is increasing along with the redo log files. i thought archive log files should be generated only after the commit (maybe that is wrong).

    No, that is not correct: archive logs are not generated only after a commit. A MERGE statement causes inserts (if the data is not already present) or updates (if it is). These operations will generate lots of redo if the amount of data being processed is high, regardless of when the commit is issued.
    If you feel that this operation is causing excessive redo, then a root cause analysis should be done.
    For that, use LogMiner (an excellent tool to provide a segment-level breakdown of redo size). V$LOGMNR_CONTENTS has columns for the redo block and redo byte address associated with the current redo change.
    There are some guidelines for reducing redo (which may vary by environment); a sketch follows this list:
    1) Check whether there are unwanted indexes on the tables referenced in the MERGE. If yes, removing them could bring down the redo.
    2) Use global temporary tables to reduce redo (if there is a need to keep data only temporarily in a session).
    3) Use NOLOGGING if possible (but consider its implications).
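    A minimal LogMiner sketch along those lines (the archived log file name is an assumption; substitute one of yours):
    -- Register one archived log and start LogMiner using the online dictionary
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/1_123_987654321.dbf', OPTIONS => DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    -- Segment-level breakdown of where the redo is coming from
    SELECT SEG_OWNER, SEG_NAME, COUNT(*) AS CHANGES
    FROM V$LOGMNR_CONTENTS
    GROUP BY SEG_OWNER, SEG_NAME
    ORDER BY CHANGES DESC;
    EXECUTE DBMS_LOGMNR.END_LOGMNR;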
    Hope this helps

  • Mapping job numbers in the alert log file to job information

    Hello!
    I have a question about job numbers in the alert log file. Today one of our Oracle 10g R2 [10.2.0.4] RAC nodes crashed. After examining the alert log file for one of the nodes I saw a lot of messages like:
    Tue Jul 26 11:52:43 2011
    Errors in file /u01/app/oracle/admin/zeme/bdump/zeme2_j002_28952.trc:
    ORA-12012: error on auto execute of job 20627358
    ORA-12705: Cannot access NLS data files or invalid environment specified
    Tue Jul 26 11:52:43 2011
    Errors in file /u01/app/oracle/admin/zeme/bdump/zeme2_j001_11018.trc:
    ORA-12012: error on auto execute of job 20627357
    ORA-12705: Cannot access NLS data files or invalid environment specified
    Tue Jul 26 11:52:43 2011
    Errors in file /u01/app/oracle/admin/zeme/bdump/zeme2_j000_9684.trc:
    ORA-12012: error on auto execute of job 20627342
    ORA-12705: Cannot access NLS data files or invalid environment specified
    After examining the trc files I found no further information about the error except session IDs.
    My question is: how do I find which job caused these messages to appear in the alert log file?
    How do I map the number in the alert log file to some "real" information (owner, statement executed, schedule)?
    Marx.

    Sorry for the delay
    Try this to find the job :
    select job, what from dba_jobs ;
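    Since you already have the job numbers from the alert log (20627358 and friends), you can filter on them directly; a small sketch:
    SELECT job, log_user, schema_user, last_date, next_date, what
    FROM dba_jobs
    WHERE job IN (20627342, 20627357, 20627358);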
    How do I find NLS_LANG version?
    SQL> show parameter NLS_LANG
    Do you mean ALTER SESSION inside a job?
    I meant anywhere, but your question is better.
    ORA-12705 - Common Reasons and How to Resolve Them [ID 158654.1]
    If the OS is Windows, look out for NLS_LANG=NA in the registry.
    Is it possible you are doing this somewhere ?
    ALTER SESSION SET NLS_DATE_FORMAT = 'RRRR-MM-DD"T"HH24:MI:SS';
    NLS database settings are superseded by NLS instance settings.
    SELECT * from NLS_SESSION_PARAMETERS;
    These are the settings used for the current SQL session.
    NLS_LANG could be set in a profile for example.
    NLS_LANG=_AMERICA.WE8ISO8859P1     ( correct )
    NLS_LANG=AMERICA.WE8ISO8859P1 ( Incorrect )
    You need to include the "_" separator before the territory (the language part may be empty, but the underscore must be present).
    Windows
    set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
    Unix
    export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
    mseberg
    Edited by: mseberg on Jul 28, 2011 3:51 PM
    Edited by: mseberg on Jul 29, 2011 4:05 AM

  • Delete archive log files on physical standby

    I want to delete the archive log files on the physical standby database once they have been applied. The archive log files are in the flash recovery area, so I used
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
    in RMAN.
    My question:
    Does rman delete the archive log files immediately when they are applied or when the disk space becomes scarce?

    Hi,
    They're marked as reclaimable when applied, so they're still on disk; but when the flash recovery area becomes full they'll be deleted to give space to new ones, and so on.
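    You can watch the reclaimable space with a standard view; a quick sketch (this is the 10g/11g view name; 11.2 also exposes V$RECOVERY_AREA_USAGE):
    SELECT FILE_TYPE, PERCENT_SPACE_USED, PERCENT_SPACE_RECLAIMABLE
    FROM V$FLASH_RECOVERY_AREA_USAGE;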
    Loïc

  • Reading an operating system log file using PL/SQL

    Hi everybody,
    I am loading legacy data into Oracle Apps tables through SQL*Loader,
    and now I want to know how many records are in my legacy file.
    We can get that from the SQL*Loader log file;
    my question is how to read that log file from a PL/SQL script.
    Please help me solve this.

    You can define an external table on it, and read it with SQL commands.
    See the External Table example in the documentation; a minimal sketch follows.
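    A minimal sketch of that approach (the directory path, file name, and the '~' field terminator are assumptions; '~' is simply a character unlikely to appear in the log, so each whole line lands in the single column):
    CREATE OR REPLACE DIRECTORY loader_logs AS '/u01/app/loader/logs';
    CREATE TABLE sqlldr_log_ext (line VARCHAR2(4000))
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY loader_logs
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        NOBADFILE NOLOGFILE
        FIELDS TERMINATED BY '~'
      )
      LOCATION ('legacy_load.log')
    )
    REJECT LIMIT UNLIMITED;
    -- e.g. pull the record-count lines out of the SQL*Loader log:
    SELECT line FROM sqlldr_log_ext WHERE line LIKE '%logical records%';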

  • DB6 - Log files accidentally deleted - How to recreate them?

    Hello community,
    we found that all DB2 log files of a Solution Manager system had been manually deleted. The instance was down for some time, so we did not realize the problem immediately, but only now on the restart.
    We are able to start DB2, but SAP can't connect, and this error is reported in the log from R3trans -d:
    An I/O error occurred while accessing the  database  STATE=58030
    In the db2diag there are a lot of errors related to missing log file:
    RETCODE : ZRC=0x860F000A=-2045837302=SQLO_FNEX "File not found."
    DIA8411C A file "S0000016.LOG" could not be found.
    Is there a procedure to recreate the DB2 log files?
    Best Regards,
    Valerio

    Hi Valerio,
    Without the log files the DB does not know what transactions are open and what state it is in.
    The logs cannot simply be recreated.
    Do you have any possibility of recovering the logs? Could they have been moved to another location, or were they completely deleted? Do you have a log mirror path set?
    If they were deleted, then you could either restore from your last good backup and roll forward to the last point for which you have logs, or you will need to open a message to ask for assistance in solving this.
    Regards,
    Paul

  • Can we put log files into common storage?

    Hi all,
    In a RAC database we put our archive log files into common storage, be it SAN or NAS; but my question is: can we put the log files into common storage too? If not, why is that? Thanks a lot in advance.

    You can use common storage or local disks to store archive log files,
    but Oracle needs the archive log locations to be accessible from every node to support backup and recovery from the archive log files.
    In a RAC environment, each instance maintains its own set of archived redo logs. These may be located either on local file systems on each node or on a shared file system that can be accessed by all nodes. While space considerations may dictate that archived redo logs be stored on local file systems, I recommend that you attempt to locate these files on a shared file system. That is because it may be necessary to restore these files, and a remote node may be performing the restore.
    If your architecture or budget forces you to use local file systems, I recommend that you employ a completely symmetrical configuration and ensure that all nodes can access each archived redo log location over NFS or a similar protocol.
    Edited by: Surachart Opun (HunterX) on Jul 11, 2009 8:15 PM
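    By way of example, pointing every instance at one shared destination looks something like this (the path is an assumption):
    -- SID='*' applies the same archive destination to all RAC instances
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'LOCATION=/shared_arch/db' SCOPE=BOTH SID='*';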

  • Odd log files appearing in Bridge after editing in CS5

    During the past week, I've noticed that on two occasions three text files have appeared in the folder that I'm loading/saving files from whilst working in Photoshop. In both cases I've just deleted the files, but it's left me wondering what might have caused them to be generated in the first place.
    The files seem to be (network?) log files: I get RECV.txt, SENT.txt, and TEST.txt. I'm pretty sure that the first time they appeared, I was working in ACR, and after closing some files having carried out a few edits I spotted the files alongside the raw (DNG) files I'd been working on. The same three files appeared again tonight, this time whilst working on a couple of PSDs (although I did briefly go into ACR also).
    I wonder if perhaps they're being generated by CS5 trying to "call home" as part of the authentication system Adobe has in place? This is something of a shot in the dark though!
    I have all the latest updates installed for Photoshop and am running Windows 7 64-bit. The system is fully patched and well maintained; so far as I can tell nothing malicious is present. I've been happily using Photoshop for at least a couple of years now and don't recall having ever seen this before. Although I deleted the text files, I've not emptied out my recycle bin, so I guess I could copy & paste their contents if anyone feels this would be worthwhile.
    Has anyone else encountered this oddity before?
    M

    Hello Noel, thanks for replying.
    I've checked through the .txt files and I believe that I've identified the source of these logs: I recently purchased the PhotoKit Sharpener 2 plug-in, and this uses an activation system to authenticate that the user licence is genuine and complying with the licence terms (same as PS - you can have it on two separate machines if it's not being used on both at the same time, which is fair enough I think). The .txt files clearly seem to indicate that they have been generated as a result of this procedure. However, the fact that logs are being created leads me to conclude that perhaps the activation system has failed for some reason... I've checked on the PhotoKit support page and it does have a message about a known issue where some activations fail - they're apparently currently looking into this (it seems to be quite a rare problem, just my luck if I am affected!). In fact it seems to imply that the product could even go back to demo mode, which I rather hope won't happen to me, having spent around £80 on this plug-in, which I'm finding to be excellent so far.
    I've just sent a message to PK support, and hope that they'll get back to me soon. I can update this thread if you'd like to know what they have to say on the matter?
    Perhaps Geoff Schewe will also respond (as a key member of the PK team), but I don't think he frequents this particular forum on a regular basis.
    M

  • Cannot connect to Job Server after redirecting log files

    Platform: Windows
    Software: Data Services 3.0 (Data Integrator 12.0.0)
    Issue: When redirecting the Log files to go to another drive on the windows Server, we are unable to reconnect to the Job Server with Designer.
    We did the following to re-direct the Log files:
    In the DSConfig.txt file on the machine where the job server runs, you will find a section similar to the following:
    [AL_JobServer]
    AL_JobServerPath=C:\PROGRA1\BUSINE1\DATAIN~1.7\bin\al_jobserver.exe
    AL_JobServerLoadBalanceDebug=FALSE
    AL_JobServerLoadOSPolling=60
    AL_JobServerSendNotificationTimeout=60
    AL_JobServerLoadRBThreshold=10
    AL_JobServerLoadAlwaysRoundRobin=FALSE
    AL_JobServerAdditionalJobCommandLine=
    ServiceDisplayName=Data Integrator Service
    AL_JobServerName1=WestJS
    AL_JobServerPort1=3500
    AL_JobServerRepoPrefix1=
    AL_JobServerLogDirectory1=
    AL_JobServerBrokerPort1=
    AL_JobServerAdapterManager1=
    AL_JobServerEnableSNMP1=
    AL_JobServerName2=EastJS
    AL_JobServerPort2=3501
    AL_JobServerRepoPrefix2=
    AL_JobServerLogDirectory2=
    AL_JobServerBrokerPort2=
    AL_JobServerAdapterManager2=
    AL_JobServerEnableSNMP2=
    (As with any DSConfig.txt edits, I always tell customers to be very careful, to only follow instructions they receive from customer assurance, and to make backups of the prior file in case they need to revert.)
    Note that there is a "West" jobserver defined with the 1 suffix, and an "East" jobserver defined with the 2 suffix. Both jobservers will by default log into %LINK_DIR%\log\, into a directory named after their jobserver name. If the customer wants to redirect the logs for the "West" jobserver to a different path, add the following to this section:
    AL_JobServerLogReDir1=D:\dierrorlogs\Log
    Note the use of the suffix 1 - it means that it will be part of the settings for the "West" jobserver. The "East" jobserver will still log to %LINK_DIR%\log\EastJS while the "West" jobserver will log to D:\dierrorlogs\Log.
    All we did was add the one single line AL_JobServerLogReDir1="E:\Program Files\BusinessObjects\dierrorlogs\Log"
    Can anyone give us some sort of suggestion or idea as to why we lost our connection to our Job Server with this change?  We DID restart the service.
    Thanks!  Ken

    I am not sure if this parameter actually works; I will have to check.
    Can you see the Job Server process in Task Manager? Check the Windows event log for any errors for the Job Server.
    Since you modified the path, I don't think it will be generating any log files for the Job Server.
    What's the status of the job server in Designer and the Management Console?
