Cannot connect to Job Server after redirecting log files

Platform: Windows
Software: Data Services 3.0 (Data Integrator 12.0.0)
Issue: After redirecting the log files to another drive on the Windows server, we are unable to reconnect to the Job Server from Designer.
We did the following to redirect the log files:
In the DSConfig.txt file on the machine where the Job Server runs, you will find a section similar to the following:
[AL_JobServer]
AL_JobServerPath=C:\PROGRA1\BUSINE1\DATAIN~1.7\bin\al_jobserver.exe
AL_JobServerLoadBalanceDebug=FALSE
AL_JobServerLoadOSPolling=60
AL_JobServerSendNotificationTimeout=60
AL_JobServerLoadRBThreshold=10
AL_JobServerLoadAlwaysRoundRobin=FALSE
AL_JobServerAdditionalJobCommandLine=
ServiceDisplayName=Data Integrator Service
AL_JobServerName1=WestJS
AL_JobServerPort1=3500
AL_JobServerRepoPrefix1=
AL_JobServerLogDirectory1=
AL_JobServerBrokerPort1=
AL_JobServerAdapterManager1=
AL_JobServerEnableSNMP1=
AL_JobServerName2=EastJS
AL_JobServerPort2=3501
AL_JobServerRepoPrefix2=
AL_JobServerLogDirectory2=
AL_JobServerBrokerPort2=
AL_JobServerAdapterManager2=
AL_JobServerEnableSNMP2=
(As with any DSConfig.txt edits, I always tell customers to be very careful, to only follow instructions they receive from customer assurance, and to make backups of the prior file in case they need to revert.)
Note that there is a “West” jobserver defined with the 1 suffix, and an “East” jobserver defined with the 2 suffix.  By default, both jobservers log to %LINK_DIR%\log\ in a directory named after the jobserver name.  If the customer wants to redirect the logs for the “West” jobserver to a different path, add the following to this section:
AL_JobServerLogReDir1=D:\dierrorlogs\Log
Note the use of the suffix 1 – it means the setting belongs to the “West” jobserver.  The “East” jobserver will still log to %LINK_DIR%\log\EastJS, while the “West” jobserver will log to D:\dierrorlogs\Log.
All we did was add the single line AL_JobServerLogReDir1="E:\Program Files\BusinessObjects\dierrorlogs\Log"
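For reference, the documented example uses an unquoted path with no spaces in it:
AL_JobServerLogReDir1=D:\dierrorlogs\Log
I don't know whether the quotes, or the spaces in our path, make any difference, but I'm noting the discrepancy in case it matters.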
Can anyone give us a suggestion or idea as to why we lost our connection to the Job Server with this change?  We DID restart the service.
Thanks!  Ken

I am not sure whether this parameter actually works; I will have to check.
Can you see the Job Server process in Task Manager? Check the Windows event log for any errors from the Job Server.
Since you modified the path, I don't think it will be generating any log files for the Job Server.
What's the status of the Job Server in Designer and in the Management Console?

Similar Messages

  • Job number from alert log file to information

    Hello!
I have a question about job numbers in the alert log file. Today one of our Oracle 10g R2 [10.2.0.4] RAC nodes crashed. Examining the alert log file for one of the nodes, I saw a lot of messages like:
    Tue Jul 26 11:52:43 2011
    Errors in file /u01/app/oracle/admin/zeme/bdump/zeme2_j002_28952.trc:
    ORA-12012: error on auto execute of job *20627358*
    ORA-12705: Cannot access NLS data files or invalid environment specified
    Tue Jul 26 11:52:43 2011
    Errors in file /u01/app/oracle/admin/zeme/bdump/zeme2_j001_11018.trc:
    ORA-12012: error on auto execute of job *20627357*
    ORA-12705: Cannot access NLS data files or invalid environment specified
    Tue Jul 26 11:52:43 2011
    Errors in file /u01/app/oracle/admin/zeme/bdump/zeme2_j000_9684.trc:
    ORA-12012: error on auto execute of job *20627342*
    ORA-12705: Cannot access NLS data files or invalid environment specified
    After examining trc files I have found no further information about error except session ids.
My question is: how do I find which job caused these messages to appear in the alert log file?
How do I map the number in the alert log file to some "real" information (owner, statement executed, schedule)?
    Marx.

    Sorry for the delay
    Try this to find the job :
    select job, what from dba_jobs ;
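If you want to tie it back to the exact numbers in your alert log, you can filter on them directly (the job numbers below are the ones from your post):
select job, log_user, what, next_date, interval
from dba_jobs
where job in (20627342, 20627357, 20627358);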
How do I find NLS_LANG version?
SQL> show parameter NLS_LANG
Do you mean ALTER SESSION inside a job?
I meant anywhere, but your question is better.
    ORA-12705 - Common Reasons and How to Resolve Them [ID 158654.1]
    If OS is Windows lookout for NLS_LANG=NA in the registry
    Is it possible you are doing this somewhere ?
ALTER SESSION SET NLS_DATE_FORMAT = 'RRRR-MM-DD\"T\"HH24:MI:SS';
NLS database settings are superseded by NLS instance settings.
    SELECT * from NLS_SESSION_PARAMETERS;
    These are the settings used for the current SQL session.
    NLS_LANG could be set in a profile for example.
NLS_LANG=_AMERICA.WE8ISO8859P1     (correct)
NLS_LANG=AMERICA.WE8ISO8859P1 (incorrect)
You need the "_" separator because NLS_LANG is <language>_<territory>.<charset>; even when the language part is omitted, the underscore must remain.
    Windows
    set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
    Unix
    export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
    mseberg

  • BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED

    Hello,
To generate the log report, the /VIRSA/ZVFATBAK program is scheduled on an hourly basis, but sometimes the report doesn't get generated, even though the background job shows as successfully finished.
If we manually view the log report for the FFID, the following error message is displayed:
" BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED"
Can anyone guide me on how to solve this issue?
    Thanks in advance.
    Best Regards,
    Prashant Dubey

    Hi,
First, check the status of the job by selecting it and checking the job status (Ctrl+Shift+F12).
Since it is a periodically scheduled job, there will be a RELEASED job after every active job.
Try to copy that into another job using the copy option, and give it a new name that you will remember.
The moment you copy it, you will find the copied job in SCHEDULED status.
From there, try to run it again on an hourly basis.
After copying the job, unschedule the old released job; otherwise two will run at the same time.
    rgds,

  • Disappearing Tables in Terminal Server after reboot/log off

    After installing Crystal Report in Windows Server 2003 Terminal Server, I can stand in front of the server (as administrator) AND/OR Terminal into the machine as administrator and create reports.
    Once I log off my TS session (or reboot the machine), I can stand in front of the server (as administrator) and create reports BUT NOT Terminal into the machine as administrator and create reports.
When I Terminal in, I select my ODBC source, log into it, then I go to add tables via the Data Explorer.  I see them get added, but once I close the Data Explorer, POOF, the tables disappear.  I also get a "Dos error." when trying to open an already-written report that uses the same (now buggy after reboot/log off) ODBC source.
    I have absolute full permissions to every single file on the server (and the data source server for that matter).  I'm thinking that the ODBC source (a MAS server) likes me for a while, especially after I do the install, but after I log back into the server via a Terminal Session, it dislikes me for some reason.  On a side note the connection tests good either way I go into the server.
    Please help.

    Hello,
If you are using a MAS server then you may also be using a MAS ODBC Driver? If so, please contact Sage for support.
As a quick test, try copying over the Xtreme MDB sample database and use Microsoft's Access ODBC driver to see whether it is a CR issue or an ODBC driver issue.
    Sometimes a Repair install can fix these issues also.
And you never said what version of CR you are using and whether any patches are installed.
    Thank you
    Don

  • SQL Server 2012 Reorg Index Job Blew up the Log File

We have a maintenance plan that nightly (1) runs DBCC CHECKDB on all databases, (2) reorgs indexes on all databases, compacting large objects, and (3) updates statistics, etc. There are three user databases: one large, one medium, one small. Usually the plan uses a little more than 80% of the medium database's log, which is set to 6,700 MB. Last night the reorg index step caused the log to grow to almost 14,000 MB and then blew up because the maximum file size was set to 14,000 MB; one of the ALTER INDEX commands failed because it ran out of log space. (The DBCC CHECKDB step ran successfully.) Anyone have any idea what might cause this? There is one update process on this database; it runs at 3 AM. The maintenance plan runs at 9 PM and completes by 1 AM. The medium database has a 21,000 MB data file, with reserved space at about 10 GB. This is SQL 2012 Standard SP2 running on Windows Server 2012 Standard.
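(If it matters, the log usage figures above can be reproduced with a simple check run before and after each step; this is just a monitoring sketch, not part of the maintenance plan itself:
DBCC SQLPERF(LOGSPACE);
It reports log size and percent used for every database on the instance.)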

    I personally like to shrink the log files once the indexes have been rebuilt and before switching back to full recovery, because as I'm going to take a full backup afterwards, having a small log file reduces the size of the backup.
    Do you grow them afterwards, or do you let the application waste time on that during peak hours?
    I have not checked, but I see no reason why the backup size would depend on the size of the log file - it's the data in the data file you back up, not the log file.
    I would say this is highly dubious.
    Erland Sommarskog, SQL Server MVP, [email protected]
Yeah, I let the application allegedly "waste" a few milliseconds a day autogrowing the log file. Come on, how long do you think it takes for a log file to grow a few GB on most storage systems nowadays? As long as you set an appropriate autogrow increment so your log file doesn't get too fragmented (full of VLFs), you'll be perfectly fine in most situations.
Let's say you have a logical disk dedicated to log file storage, but it is shared across multiple databases within the instance. Keeping space allocated to the log files means there will not be much free space left on the disk in case ANY database needs more space than the others due to a peak in transactional workload, even though other databases have unused space that could have been used.
What if this same disk, for some reason, is also used to store the tempdb log file? Then all applications will become unstable.
These are the main reasons I don't recommend that people blindly crucify the practice of keeping log files small when possible. I know there are many people who disagree, and I'm aware of their reasons. Maybe we have just had different experiences on this subject. Maybe people just haven't been through the nightmare of having a corrupted system database or a crashed instance because of insufficient log space in the middle of the day.
And you are right about the size of the backup; I didn't put it correctly. It isn't the size of the backup that gets smaller (although the backup operation will run faster, having tested this myself), but the benefit of backing up a database with a small log file is that you won't need the extra space to restore it in a different environment such as a BI or DEV server, where recoverability doesn't matter and the database will be in simple recovery mode.
    Restoring the database will also be faster.
    Just because there are clouds in the sky it doesn't mean it isn't blue. But someone will come and argue that in addition to clouds, birds, airplanes, pollution, sunsets, daltonism and nuclear bombs, all adding different colours to the sky, this
    is an undocumented behavior and should not be relied upon.

  • Data Services 4.0 Designer. Job Execution but empty log file no matter what

    Hi all,
I am running DS 4.0. When I execute my batch job via Designer, the log window pops up but is blank, i.e. I cannot see any trace messages.
It doesn't matter if I select "Print all trace messages" in the execution properties.
The Job Server is running on a separate server. The only thing I have locally is just my Designer.
If I log into the Data Services Management Console and select the job server, I can see trace and error logs from the job. So I guess what I need is for this stuff to show up in my Designer?
Did I miss a step somewhere?
I can't find anything in the docs about this.
    thanks

Awesome, thanks Manoj.
I found the log file. The relevant lines in it for the last job I ran are:
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  Starting job with command line -PLocaleUTF8 -Utip_coo_ds_admin
                                                    -P+04000000001A030100100000328DE1B2EE700DEF1C33B1277BEAF1FCECF6A9E9B1DA41488E99DA88A384001AA3A9A82E94D2D9BCD2E48FE2068E59414B12E
                                                    48A70A91BCB  -ek********  -G"70dd304a_4918_4d50_bf06_f372fdbd9bb3" -r1000 -T1073745950  -ncollect_cache_stats
                                                    -nCollectCacheSize  -ClusterLevelJOB  -Cmxxx -CaDesigner -Cjxxx -Cp3500 -CtBatch  -LocaleGV
                                                    -BOESxxx.xxx.xxx.xxx -BOEAsecLDAP -BOEUi804716
                                                    -BOEP+04000000001A0301001000003F488EB2F5A1CAB2F098F72D7ED1B05E6B7C81A482A469790953383DD1CDA2C151790E451EF8DBC5241633C1CE01864D93
                                                    72DDA4D16B46E4C6AD -Sxxx.xxx.xxx -NMicrosoft_SQL_Server -Qlocal_repo  coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e" -l"C:\Program Files (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/trace_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -z"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/error_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -w"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/monitor_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -Dt05_11_2011_16_52_27_9
                                                    (BODI-850052)
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  StartJob : Job '05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3' with pid '148' is kicked off
                                                    (BODI-850048)
    (14.0) 05-11-11 16:52:28 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <4> (BODI-850170)
    (14.0) 05-11-11 16:52:28 (2272:2472) JobServer:  AddChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 17:02:32 (2272:2472) JobServer:  RemoveChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  GetRunningJobs() success. (BODI-850058)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  PutLastJobs Success.  (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <5> (BODI-850170)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
It does not look like I have any errors with respect to connectivity (or any errors at all).
Please advise on what, if anything, you notice from the log file and/or next steps I can take.
    thanks.

  • Microsoft sql server extended event log file

Dears,
Sorry if my questions below are very beginner level.
In my implementation I have clustered SQL 2012 on Windows 2012; I am using mount points since I have many clustered disks.
My mount point size is only 3 GB; my extended event logs are growing fast, and they are stored on the mount point drive directly (path: F:\MSSQL11.MSSQLSERVER\MSSQL\Log).
What is the best practice for working with them? (Is it to keep all extended events, or recirculate, or shrink, or store them in a DB?)
Is there any relation between SQL truncation and limiting the size of the extended event logs?
How can I recirculate these extended events?
How can I change the default path?
How can I stop it?
And if I stop it, does that mean SQL events will stop being stored in the Windows Event Viewer?
Thank you

After a lot of checking, I have found the following:
My case:
I have SQL Failover Cluster Instances ("FCIs") and I am using mount points to store my instances.
I have two passive copies of each FCI.
In my configuration, I chose to store the root instance, which includes the logs, on a mount point.
My mount point is only 2 GB, which became full after a few days of deployment.
Some technical background:
The extended event log files are generated because I have FCIs; in a single (non-clustered) SQL installation you will not find these files.
The maximum file size is 100 MB.
The files start circulating once there are 10 full files.
If you have the FCI installed as 1 active and 2 passive, and you fail over between the nodes, then you can expect to see around 14 to 30 copies of this file.
Based on the above, you will need around 100 MB * 10 files per instance copy * 3 (since in my case I have 1 active and 2 passive instances), which = 3,000 MB.
So in my case my 2 GB mount point became full because of these SQLDIAG logs.
Solution:
I extended my mount point by 3 GB, since I am storing these logs on it.
If you need to change the SQLDIAG extended log size to, for example, 50 MB, and move it to F:\Logs, you will need the commands below:
ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG OFF;
ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG MAX_SIZE = 50 MB;
ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG PATH = 'F:\logs';
ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG ON;
    After that you will need to restart the FCI from SQL Server Configuration Manager or Failover Cluster Manager.
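To verify the settings took effect, I believe you can query the diagnostics log DMV (available starting with SQL 2012):
SELECT is_enabled, [path], max_size, max_files
FROM sys.dm_os_server_diagnostics_log_configurations;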
I hope you find this information helpful if this is your case.
    Regards

  • VB6 source code cannot connect to Oracle database after compile to file.exe

    Hi All,
I have a problem with VB6 connecting to an Oracle database. It connects normally when run from within the VB program. After it is compiled to an .exe and executed, it cannot connect to the Oracle database. What's going on? Please advise. Thank you.
Here is a sample of my connection code.
Option Explicit
Private wsData As New ADODB.Connection

Private Sub Form_Load() ' assumed event handler; the original snippet omitted the Sub line
    wsData.ConnectionString = _
        "Provider=MSDAORA.1;User ID=lsp;Password=lsp2007;Data Source=prd01;Persist Security Info=False"
    wsData.Open
End Sub
Regards,
    Ats.

    Hi,
I believe you're in the wrong forum; this forum is for Oracle Application Express.

  • Job console messages in log file

    Hi All,
Is there a log file which logs the job console messages, and where is it located? We are on v11.1.2. We have an issue with job console validation: when an EPMA Planning application is validated, it has some errors, but when we try to open the attachment to get more details, a new window opens and closes immediately.
How can we check the error messages?

    In IE try going to Internet Options > Security > Custom Level > Downloads > File Downloads > enable
    This should allow the text file to open when you try to open the attachment.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • SQL Server 2012 DB log file doesn't shrink (simple recovery model)

I've found several similar questions in this forum, but none of the answers have resolved my problem: I have a SQL Server 2012 DB using the simple recovery model. The MDF file is 12 GB and the LDF file is 10 GB. I'm trying to shrink the size of the LDF file. I've read that for simple recovery model DBs there are reasons for delaying log file shrinking, but I still can't find a solution based on these reasons.
    When I try to shrink it using this command:
    DBCC SHRINKFILE(MyDB_log, 1000000)
    I get these results, and no change of file size:
DbId FileId CurrentSize MinimumSize UsedPages EstimatedPages
8    2      1241328     128         1241328   128
    The same results running this:
    DBCC SHRINKFILE(MyDB_log, 1000000, TRUNCATEONLY)
    There doesn't appear to be any open transactions:
DBCC OPENTRAN()
No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    And this returns NOTHING:
SELECT name, database_id, log_reuse_wait_desc FROM sys.databases WHERE database_id = DB_ID()
name database_id log_reuse_wait_desc
MyDB 8           NOTHING
    I've also tried running the following, but nothing useful is returned:
SELECT * FROM sys.dm_tran_database_transactions
SELECT * FROM sys.dm_exec_requests WHERE database_id = DB_ID()
SELECT * FROM sys.dm_tran_locks WHERE resource_database_id = DB_ID()
    Any other suggestions of what I can do to shrink this log file?  Or perhaps someone can justify its enormous size?
    David Collacott

    The answer is pretty simple.
    The following code is the problem:
    DBCC SHRINKFILE(MyDB_log, 1000000)
You are telling SQL Server that you want to "shrink" the MyDB_log file to a target size of 1 TB.  Well, according to you, the MyDB_log file is well below the 1 TB you are targeting; in fact it's only 10 GB, so SQL Server is doing precisely what you are telling it to do.
See, according to the SQL Server documentation here, target size "Is the size for the file in megabytes, expressed as an integer."
    Now if you'd like to actually shrink the log file down to, oh say 1GB, then you should try the following command:
    DBCC SHRINKFILE(MyDB_log, 1000)
The theory being that 1000 * 1,048,576 bytes (i.e. 1,000 MB) is approximately equal to 1 GB.
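If you want to confirm the result afterwards, you can check the file size from sys.database_files; size there is reported in 8 KB pages, hence the arithmetic:
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE name = 'MyDB_log';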

  • Why is there no error when checkpointing after db log files are removed?

    I would like to test a scenario when an application's embedded database is corrupted somehow. The simplest test I could think of was removing the database log files while the application is running. However, I can't seem to get any failure. To demonstrate, below is a code snippet that demonstrates what I am trying to do. (I am using JE 3.3.75 on Mac OS 10.5.6):
import java.io.File;
import java.io.FilenameFilter;
import com.sleepycat.je.CheckpointConfig;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class FileRemovalTest {

    public static void main(String[] args) throws Exception {
        // Set up the DB environment; the cleaner and checkpointer daemons are disabled
        EnvironmentConfig ec = new EnvironmentConfig();
        ec.setAllowCreate(true);
        ec.setTransactional(true);
        ec.setConfigParam(EnvironmentConfig.ENV_RUN_CLEANER, "false");
        ec.setConfigParam(EnvironmentConfig.ENV_RUN_CHECKPOINTER, "false");
        ec.setConfigParam(EnvironmentConfig.CLEANER_EXPUNGE, "true");
        ec.setConfigParam("java.util.logging.FileHandler.on", "true");
        ec.setConfigParam("java.util.logging.level", "FINEST");
        Environment env = new Environment(new File("."), ec);

        // Create a database
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setTransactional(true);
        Database db = env.openDatabase(null, "test", dbConfig);

        // Insert an entry and checkpoint the database
        db.put(
            null,
            new DatabaseEntry("key".getBytes()),
            new DatabaseEntry("value".getBytes()));
        CheckpointConfig checkpointConfig = new CheckpointConfig();
        checkpointConfig.setForce(true);
        env.checkpoint(checkpointConfig);

        // Delete the DB log files
        File[] dbFiles = new File(".").listFiles(new DbFilenameFilter());
        if (dbFiles != null) {
            for (File file : dbFiles) {
                file.delete();
            }
        }

        // Add another entry and checkpoint the database again.
        db.put(
            null,
            new DatabaseEntry("key2".getBytes()),
            new DatabaseEntry("value2".getBytes())); // Q: Why does this 'put' succeed?
        env.checkpoint(checkpointConfig);            // Q: Why does this checkpoint succeed?

        // Close the database and the environment
        db.close();
        env.close();
    }

    private static class DbFilenameFilter implements FilenameFilter {
        public boolean accept(File dir, String name) {
            return name.endsWith(".jdb");
        }
    }
}
    This is what I see in the logs:
    2009-03-05 12:53:30:631:CST CONFIG Recovery w/no files.
    2009-03-05 12:53:30:677:CST FINER Ins: bin=2 ln=1 lnLsn=0x0/0xe9 index=0
    2009-03-05 12:53:30:678:CST FINER Ins: bin=5 ln=4 lnLsn=0x0/0x193 index=0
    2009-03-05 12:53:30:688:CST FINE Commit:id = 1 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:690:CST FINEST size interval=0 lastCkpt=0x0/0x0 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:703:CST FINER Ins: bin=8 ln=7 lnLsn=0x0/0x48b index=0
    2009-03-05 12:53:30:704:CST CONFIG Checkpoint 1: source=recovery success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
2009-03-05 12:53:30:705:CST CONFIG Recovery finished: Recovery Info null> useMinReplicatedNodeId=0 useMaxNodeId=0 useMinReplicatedDbId=0 useMaxDbId=0 useMinReplicatedTxnId=0 useMaxTxnId=0 numMapINs=0 numOtherINs=0 numBinDeltas=0 numDuplicateINs=0 lnFound=0 lnNotFound=0 lnInserted=0 lnReplaced=0 nRepeatIteratorReads=0
    2009-03-05 12:53:30:709:CST FINEST Environment.open: name=test dbConfig=allowCreate=true
    exclusiveCreate=false
    transactional=true
    readOnly=false
    duplicatesAllowed=false
    deferredWrite=false
    temporary=false
    keyPrefixingEnabled=false
    2009-03-05 12:53:30:713:CST FINER Ins: bin=2 ln=10 lnLsn=0x0/0x7be index=1
    2009-03-05 12:53:30:714:CST FINER Ins: bin=5 ln=11 lnLsn=0x0/0x820 index=1
    2009-03-05 12:53:30:718:CST FINE Commit:id = 2 numWriteLocks=0 numReadLocks = 0
    2009-03-05 12:53:30:722:CST FINEST Database.put key=107 101 121 data=118 97 108 117 101
    2009-03-05 12:53:30:728:CST FINER Ins: bin=13 ln=12 lnLsn=0x0/0x973 index=0
    2009-03-05 12:53:30:729:CST FINE Commit:id = 3 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:729:CST FINEST size interval=0 lastCkpt=0x0/0x581 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:735:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x193 newLnLsn=0x0/0xb61
    2009-03-05 12:53:30:736:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x820 newLnLsn=0x0/0xc3a
    2009-03-05 12:53:30:737:CST FINER Ins: bin=8 ln=15 lnLsn=0x0/0xd38 index=0
    2009-03-05 12:53:30:738:CST CONFIG Checkpoint 2: source=api success=true nFullINFlushThisRun=6 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:741:CST FINEST Database.put key=107 101 121 50 data=118 97 108 117 101 50
    2009-03-05 12:53:30:742:CST FINER Ins: bin=13 ln=16 lnLsn=0x0/0xeaf index=1
    2009-03-05 12:53:30:743:CST FINE Commit:id = 4 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:744:CST FINEST size interval=0 lastCkpt=0x0/0xe32 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:746:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0xb61 newLnLsn=0x0/0x1166
    2009-03-05 12:53:30:747:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0xc3a newLnLsn=0x0/0x11e9
    2009-03-05 12:53:30:748:CST FINER Ins: bin=8 ln=17 lnLsn=0x0/0x126c index=0
    2009-03-05 12:53:30:748:CST CONFIG Checkpoint 3: source=api success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:750:CST FINEST Database.close: name=test
    2009-03-05 12:53:30:751:CST FINE Close of environment . started
    2009-03-05 12:53:30:751:CST FINEST size interval=0 lastCkpt=0x0/0x1363 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:754:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x1166 newLnLsn=0x0/0x14f8
    2009-03-05 12:53:30:755:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x11e9 newLnLsn=0x0/0x15a9
    2009-03-05 12:53:30:756:CST FINER Ins: bin=8 ln=18 lnLsn=0x0/0x16ab index=0
    2009-03-05 12:53:30:757:CST CONFIG Checkpoint 4: source=close success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:757:CST FINE About to shutdown daemons for Env .

    Hi,
    OS X, being Unix-like, probably isn't actually deleting file 00000000.jdb since JE still has it open -- the file deletion is deferred until it is closed. JE keeps N files open, where N is configurable.
    We do corruption testing ourselves, in the following test by overwriting a file and then attempting to read back the entire database:
    test/com/sleepycat/je/util/DbScavengerTest.java
--mark

  • Flash media server 4.5 log files filed x-duration value always 0

Hi guys,
When I read the Flash Media stats log file, I found that the x-duration value is 0; only for the x-event unpublish does this field have a value.
My question is: if the x-duration field value is the number of seconds the client has been connected, then why am I getting 0 for publish and publish-continue, and only getting the value for unpublish?
Can anyone explain how I can get the connection time for publish and unpublish?
Thanks
Brijesh

For a stream, x-duration is the number of seconds the stream has played. Hence at publish and record events this value is 0, and it is non-zero for the unpublish event.
For a session, this field contains the number of seconds the client has been connected. It will be 0 for the connect event and non-zero for the disconnect event.
The time an event occurred is reflected under the time field in the access log.
    Regards,
    Apurva

  • Cannot connect to iCloud server after 10.9.4 update.

Attaching EtreCheck information.  Connection Doctor states: Could not connect to iCloud IMAP server.  Restarting Mail did not work. Rebooting the computer did not work.
    Thank you for any help you can provide.
    EtreCheck version: 1.9.12 (48)
    Report generated July 19, 2014 at 10:06:48 AM MST
    Hardware Information:
      iMac (27-inch, Late 2013) (Verified)
      iMac - model: iMac14,2
      1 3.5 GHz Intel Core i7 CPU: 4 cores
      32 GB RAM
    Video Information:
      NVIDIA GeForce GTX 780M - VRAM: 4096 MB
      iMac 2560 x 1440
    System Software:
      OS X 10.9.4 (13E28) - Uptime: 0 days 0:16:41
    Disk Information:
      APPLE SSD SD0128F disk0 : (121.33 GB)
      EFI (disk0s1) <not mounted>: 209.7 MB
      disk0s2 (disk0s2) <not mounted>: 120.99 GB
      Boot OS X (disk0s3) <not mounted>: 134.2 MB
      APPLE HDD ST3000DM001 disk1 : (3 TB)
      EFI (disk1s1) <not mounted>: 209.7 MB
      disk1s2 (disk1s2) <not mounted>: 3 TB
      Recovery HD (disk1s3) <not mounted>: 650 MB
    USB Information:
      Apple Inc. BRCM20702 Hub
      Apple Inc. Bluetooth USB Host Controller
      Apple Inc. FaceTime HD Camera (Built-in)
      Apple, Inc. Keyboard Hub
      Apple Inc. Apple Keyboard
      HP Photosmart 6520 series
    Thunderbolt Information:
      Apple Inc. thunderbolt_bus
    Gatekeeper:
      Mac App Store and identified developers
    Kernel Extensions:
      [not loaded] com.philips.iokit.DLconnect (85 - SDK 10.4) Support
      [not loaded] com.roxio.TDIXController (2.0) Support
    Launch Daemons:
      [loaded] com.adobe.fpsaud.plist Support
      [loaded] com.adobe.SwitchBoard.plist Support
      [loaded] com.microsoft.office.licensing.helper.plist Support
      [loaded] com.oracle.java.Helper-Tool.plist Support
      [loaded] com.oracle.java.JavaUpdateHelper.plist Support
      [running] com.parallels.mobile.dispatcher.launchdaemon.plist Support
      [loaded] com.parallels.mobile.kextloader.launchdaemon.plist Support
    Launch Agents:
      [not loaded] com.adobe.AAM.Updater-1.0.plist Support
      [running] com.fujitsu.pfu.ScanSnap.AOUMonitor.plist Support
      [running] com.newwellnesssolutions.DLconnectMonitor.plist Support
      [loaded] com.oracle.java.Java-Updater.plist Support
      [loaded] com.parallels.mobile.prl_deskctl_agent.launchagent.plist Support
    User Launch Agents:
      [loaded] com.adobe.AAM.Updater-1.0.plist Support
      [loaded] com.adobe.ARM.[...].plist Support
      [loaded] com.parallels.mobile.startgui.launchagent.plist Support
      [running] jp.co.pfu.ScanSnap.SearchablePDFConverter.plist Support
    User Login Items:
      iTunesHelper
      Dropbox
      ScanSnap Manager
    Internet Plug-ins:
      Flip4Mac WMV Plugin: Version: 3.2.0.16   - SDK 10.8 Support
      FlashPlayer-10.6: Version: 14.0.0.145 - SDK 10.6 Support
      Default Browser: Version: 537 - SDK 10.9
      AdobePDFViewerNPAPI: Version: 10.1.10 Support
      AdobePDFViewer: Version: 10.1.10 Support
      Flash Player: Version: 14.0.0.145 - SDK 10.6 Support
      QuickTime Plugin: Version: 7.7.3
      SharePointBrowserPlugin: Version: 14.4.2 - SDK 10.6 Support
      JavaAppletPlugin: Version: Java 7 Update 60 Check version
    Safari Extensions:
      Save to Pocket: Version: 1.9.1
      Open in Internet Explorer: Version: 1.0
    Audio Plug-ins:
      BluetoothAudioPlugIn: Version: 1.0 - SDK 10.9
      AirPlay: Version: 2.0 - SDK 10.9
      AppleAVBAudio: Version: 203.2 - SDK 10.9
      iSightAudio: Version: 7.7.3 - SDK 10.9
    iTunes Plug-ins:
      Quartz Composer Visualizer: Version: 1.4 - SDK 10.9
    3rd Party Preference Panes:
      Flash Player  Support
      Flip4Mac WMV  Support
      Java  Support
    Time Machine:
      Auto backup: YES
      Volumes being backed up:
      Destinations:
      Data [Network] (Last used)
      Total size: 3 
      Total number of backups: 55
      Oldest backup: 2014-01-26 04:43:07 +0000
      Last backup: 2014-07-19 15:22:17 +0000
      Size of backup disk: Excellent
      Backup size 3  > (Disk size 0 B X 3)
      Time Machine details may not be accurate.
      All volumes being backed up may not be listed.
    Top Processes by CPU:
          1% WindowServer
          1% fontd
          1% SearchablePDFConverterOCR
          0% dpd
    Top Processes by Memory:
      295 MB Mail
      164 MB mds_stores
      164 MB CalendarAgent
      131 MB com.apple.IconServicesAgent
      131 MB ScanSnap Manager
    Virtual Memory Information:
      27.47 GB Free RAM
      2.37 GB Active RAM
      433 MB Inactive RAM
      1.74 GB Wired RAM
      419 MB Page-ins
      0 B Page-outs

    Restart the router and the broadband device, if they're separate. If there's no change, see below.
    Please read this whole message before doing anything.
    This procedure is a diagnostic test. It’s unlikely to solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it.
    The purpose of the test is to determine whether the problem is caused by third-party software that loads automatically at startup or login, by a peripheral device, by a font conflict, or by corruption of the file system or of certain system caches.
    Disconnect all wired peripherals except those needed for the test, and remove all aftermarket expansion cards, if applicable. Start up in safe mode and log in to the account with the problem. You must hold down the shift key twice: once when you turn on the computer, and again when you log in.
    Note: If FileVault is enabled, or if a firmware password is set, or if the startup volume is a Fusion Drive or a software RAID, you can’t do this. Ask for further instructions.
    Safe mode is much slower to start up and run than normal, with limited graphics performance, and some things won’t work at all, including sound output and Wi-Fi on certain models. The next normal startup may also be somewhat slow.
The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin.
    Test while in safe mode. Same problem?
    After testing, restart as usual (not in safe mode) and verify that you still have the problem. Post the results of the test.

  • Cannot start Mac mini server after changing permissions

    I have a two-day-old Mac mini (server version) and all was OK until I decided to change the permissions on both hard drives (and all enclosed folders and files) to give read/write access to administrators. I received quite a few error messages about invalid kernel file extensions (I think) which I elected to ignore or override. Then when trying to log off, the machine hung.
    Safe reboot does not work. I have rebooted using <Command> S and get the following amongst the many messages:
    Bug: launchctl.c:3557 (23930):17: ioctl(s6, SIOCAIFADDR_IN6, &ifra6) != -1
    then at the end of display, I get the following as the last 4 lines:
    launchctl: Dubious permissions on file (skipping): /Library/LaunchDaemons
    launchctl: Dubious permissions on file (skipping): /System/Library/LaunchDaemons
    launchctl: Dubious permissions on file (skipping): /etc/mach_init.d
    AppleIntelCPUPowerManagement: initialization complete
    There are no peripherals and I cannot try re-installing Snow Leopard for two reasons. First, I have no disk. More importantly, there is no CD drive - this is a Mac mini.
    Please help!

    HHMNY wrote:
    I have a two-day-old Mac mini (server version) and all was OK until I decided to change the permissions on both hard drives (and all enclosed folders and files) to give read/write access to administrators.
Doing this completely scrambled your system. You now have no choice but to reinstall Snow Leopard.
And in the future, NEVER use "apply to enclosed items" on ANY system-created folders, and certainly NEVER EVER use it on the whole system drive.

  • Cannot connect to imaging server after ZDML SP1 IR3 install

    OK, here's a bit of an odd problem... well, actually, I hope it's an insanely easy problem that I just have overlooked a simple solution for. I guess that's for you guys to help figure out. ;)
    We had been running Zenworks 7.0.1.0 on an OES1 Linux server with a few odd problems, mostly with remote management and inventory, but we never had a problem with connecting to it for imaging purposes (at least not since I took over management of the box not that long ago). Considering the remote management and inventory issues, I decided to update it to the latest version just to make sure it wasn't an issue that had already been fixed, but now that I have done that we can no longer connect to the imaging server for imaging. While I'd still like to get our original problems fixed, I'm more concerned about the imaging itself and can then revisit the first.
    How I went about the upgrade: I followed the instructions at section 4.3 at ZENworks 7 Desktop Management with Support Pack 1 Interim Release 3a. Running ZDMstart shows "done" for bringing everything up, but when I try "/etc/init.d/novell-zmgserv status" it tells me that the Daemon is stopped; when I try "/etc/init.d/novell-zmgserv start" it says "failed".
    When that failed, I downloaded the ZEN7_with_SP1_IR2_DesktopMgmtLinux.iso and ran that installation, then went through the steps on 4.2 to upgrade. (OK, this particular maneuver may not have been my smartest.) Same end results though.
I do recognize that I'm walking into a situation I didn't set up and that there were other problems to start with. I have a feeling that by the end of the year we'll be migrating to an OES2 server but, to be blunt, that project has been put off time and again for other reasons, and I'd like to have something to work with in the meantime. Any idea where exactly the failed attempts would be logged, to give me more to work with (I don't see anything in /var/log/messages)? Any other thoughts/suggestions are more than welcome; thanks in advance!
    topher

Apologies in advance for any confusion; normally I post under this account, but the other day I was logged in as tchristi2 for other reasons and forgot to log out... :/ In any case....
    I tried uninstalling and re-installing ZENworks from the 7.1HP2 CD, and tailing the /var/log/messages file this line jumped out at me:
    Sep 2 09:07:19 oes1 logger: We were unable to locate ndsmodules.conf at /usr/lib or at /usr/lib/nds-modules. ZENworks Imaging was not installed correctly.
    I cannot find ndsmodules.conf in either of those locations; could someone help me figure out where it would be or why I might not have it? I may just punt and setup an OES2 box and move ZENworks to it, but at the same time I'd like to have imaging running until we do get that box up and going. Thanks in advance!
    topher
