Discoverer 3i server trace yields empty log file
Has anyone had issues with setting up a Discoverer 3i trace on the server? Our viewer shows version 3.3.62.02. I have tried Registry entries under 'HKEY_CURRENT_USER\Software\Oracle\Discoverer 3.1' (per the documentation) and also in 'Discoverer 3.3', in case it thought that was the version. I have varied file names and parameters, stopping and restarting the Discoverer server service each time (and sometimes rebooting, just to be sure).
Each time I enter Viewer, it creates a new 'Discoverer.log' file in 'Winnt/System32' of zero bytes, whether or not I indicate that is supposed to be the file name, but never writes anything into it, whether the workbook works correctly or not. I have done this before on a workstation version with no problems.
Am I missing something on the server side?
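For comparison, the workstation-style setup can be captured in a .reg sketch like the one below. The value names here are placeholders (check the Discoverer documentation for the exact ones), and note that on a server the Discoverer service often runs under a different account, so it may never read the HKEY_CURRENT_USER hive of the user who imported the settings:

```reg
Windows Registry Editor Version 5.00

; Sketch only -- the value names below are placeholders, not verified
; against the Discoverer 3.1 documentation. A service running as
; LocalSystem reads its own HKCU hive, not the logged-in user's.
[HKEY_CURRENT_USER\Software\Oracle\Discoverer 3.1]
"TraceFileName"="C:\\Temp\\discoverer_trace.log"
"TraceLevel"=dword:00000001
```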
Thanks,
Ron
Similar Messages
-
Empty Log File - log settings will not save
Description of Problem or Question:
Cannot get logging to work in folder D:\Program Files\Business Objects\Dashboard and Analytics 12.0\server\log
(empty log file is created)
Product\Version\Service Pack\Fixpack (if applicable):
BO Enterprise 12.0
Relevant Environment Information (OS & version, java or .net & version, DB & version):
Server: Windows Server 2003 Enterprise SP2.
Database: Oracle 10g
Client: Vista
Sporadic or Consistent (if applicable):
Consistent
What has already been tried (where have you searched for a solution to your question/problem):
Searched forum, SMP
Steps to Reproduce (if applicable):
From InfoViewApp, logged in as Admin
Open -> Dashboard and Analytics Setup -> Parameters -> Trace
Check "Log to folder" and "SQL Queries", Click Apply.
Now, navigate away and return to this page - the "Log to folder" box is unchecked, and an empty log file is created.

Send Apple feedback. They won't answer, but at least they will know there is a problem. If enough people send feedback, it may get the problem solved sooner.
Feedback
Or you can use your Apple ID to register with this site and go to the Apple BugReporter. Supposedly you will get an answer if you submit feedback.
Feedback via Apple Developer
Do a backup.
Quit the application.
Go to Finder and select your user/home folder. With that Finder window as the front window, either select Finder/View/Show View Options or press Command-J. When the View options open, check 'Show Library Folder'. That should make your user library folder visible in your user/home folder. Select Library, then go to Preferences/com.apple.systempreferences.plist and move the .plist to your desktop.
Restart, open the application and test. If it works okay, delete the plist from the desktop.
If the application is the same, return the .plist to where you got it from, overwriting the newer one.
Thanks to leonie for some information contained in this. -
Getting empty log files with log4j and WebLogic 10.0
Hi!
I get empty log files with log4j 1.2.13 and WebLogic 10.0. If I don't run the application in the application server, then the logging works fine.
The properties file is located in a jar in the LIB folder of the deployed project. If I change the log file name in the properties file, it just creates a new empty file with the new name.
What could be wrong?
Thanks!

I assume that when you change the name of the expected log file in the properties file, the new empty file takes that name, correct?
That means you're at least getting that properties file loaded by log4j, which is a good sign.
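As a sanity check, a minimal log4j 1.2 properties file along these lines (the file path and pattern are illustrative, not taken from the original setup) makes it easy to confirm that the file appender itself works, independent of any WebLogic classloading quirks:

```properties
# Minimal sketch -- the log file path is illustrative.
log4j.rootLogger=DEBUG, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=/tmp/app.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d %-5p %c - %m%n
```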
As the file ends up empty, it appears that no logger statements are executing at a level high enough for the configured threshold. Can you throw in a logger.error() call at a point you're certain is executed? -
Empty Log files not deleted by Cleaner
Hi,
we have a NoSql database installed on 3 nodes with a replication factor of 3 (see exact topology below).
We ran a test which consisted of the following operations repeated in a loop: store a LOB, read it, delete it.
store.putLOB(key, new ByteArrayInputStream(source),Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
store.getLOB(key,Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
During the test the space occupied by the database continued to grow!
Cleaner threads are running but log these warnings:
2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
Log files are not deleted even when empty, as seen using the DbSpace utility:
java -jar /mam2g/kv-3.2.5/lib/kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env
File Size (KB) % Used
00000000 12743 0
00000001 12785 0
00000002 12725 0
00000003 12719 0
00000004 12703 0
00000005 12751 0
00000006 12795 0
00000007 12725 0
00000008 12752 0
00000009 12720 0
0000000a 12723 0
0000000b 12764 0
0000000c 12715 0
0000000d 12799 0
0000000e 12724 1
0000000f 5717 0
TOTALS 196867 0
Here is the configured topology:
kv-> show topology
store=MMS-KVstore numPartitions=90 sequence=106
zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
[rg1-rn1] RUNNING
single-op avg latency=4.414467 ms multi-op avg latency=0.0 ms
[rg2-rn1] RUNNING
single-op avg latency=1.5962526 ms multi-op avg latency=0.0 ms
[rg3-rn1] RUNNING
single-op avg latency=1.3068943 ms multi-op avg latency=0.0 ms
sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
[rg1-rn2] RUNNING
single-op avg latency=1.5670061 ms multi-op avg latency=0.0 ms
[rg2-rn2] RUNNING
single-op avg latency=8.637241 ms multi-op avg latency=0.0 ms
[rg3-rn2] RUNNING
single-op avg latency=1.370075 ms multi-op avg latency=0.0 ms
sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
[rg1-rn3] RUNNING
single-op avg latency=1.4707285 ms multi-op avg latency=0.0 ms
[rg2-rn3] RUNNING
single-op avg latency=1.5334034 ms multi-op avg latency=0.0 ms
[rg3-rn3] RUNNING
single-op avg latency=9.05199 ms multi-op avg latency=0.0 ms
shard=[rg1] num partitions=30
[rg1-rn1] sn=sn1
[rg1-rn2] sn=sn2
[rg1-rn3] sn=sn3
shard=[rg2] num partitions=30
[rg2-rn1] sn=sn1
[rg2-rn2] sn=sn2
[rg2-rn3] sn=sn3
shard=[rg3] num partitions=30
[rg3-rn1] sn=sn1
[rg3-rn2] sn=sn2
[rg3-rn3] sn=sn3
Why are the empty files not deleted by the cleaner? Why are empty log files protected by replication if all the replicas seem to be aligned with the master?
java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
Pinging components of store MMS-KVstore based upon topology sequence #106
Time: 2015-02-03 13:44:57 UTC
MMS-KVstore comprises 90 partitions and 3 Storage Nodes
Storage Node [sn1] on 192.168.144.11:5000 Zone: [name=MAMHA id=zn1 type=PRIMARY] Status: RUNNING Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC Build id: 7ab4544136f5
Rep Node [rg1-rn1] Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
Rep Node [rg2-rn1] Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
Rep Node [rg3-rn1] Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
Storage Node [sn2] on 192.168.144.12:6000 Zone: [name=MAMHA id=zn1 type=PRIMARY] Status: RUNNING Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC Build id: 7ab4544136f5
Rep Node [rg3-rn2] Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
Rep Node [rg2-rn2] Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
Rep Node [rg1-rn2] Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
Storage Node [sn3] on 192.168.144.35:7000 Zone: [name=MAMHA id=zn1 type=PRIMARY] Status: RUNNING Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC Build id: 7ab4544136f5
Rep Node [rg1-rn3] Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
Rep Node [rg2-rn3] Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
Rep Node [rg3-rn3] Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013

Solved by setting the undocumented parameter "je.rep.minRetainedVLSNs".
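How the parameter is supplied depends on the deployment; as a sketch, a JE environment can pick it up from a je.properties file in the environment home. The value below is a placeholder, not a recommendation, and since the parameter is undocumented it should only be changed on advice from Oracle support:

```properties
# Placeholder value for an undocumented JE parameter -- not a recommendation.
je.rep.minRetainedVLSNs=1000
```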
The solution is described in NoSql forum: Store cleaning policy -
WLS 7.0.4 - JMS Connection Factory - Server Affinity - issues in log file
We are using WLS 7.0.4. For one of the JMS connection factory settings in the admin console we selected the "Server Affinity" option.
We see these messages appear in the WebLogic log file:
####<Apr 24, 2006 1:56:53 AM EDT> <Error> <Cluster> <liberatenode4.dc2.adelphia.com> <node4_svr> <ExecuteThread: '4' for queue: '__weblogic_admin_rmi_queue'> <kernel identity> <> <000123> <Conflict start: You tried to bind an object under the name sbetrmi2 in the JNDI tree. The object you have bound from liberatenode2.dc2.adelphia.com is non clusterable and you have tried to bind more than once from two or more servers. Such objects can only deployed from one server.>
and then,
####<Apr 24, 2006 1:58:12 AM EDT> <Error> <Cluster> <liberatenode5.dc2.adelphia.com> <node5_svr> <ExecuteThread: '7' for queue: '__weblogic_admin_rmi_queue'> <kernel identity> <> <000125> <Conflict Resolved: sbetrmi2 for the object from liberatenode5.dc2.adelphia.com under the bind name sbetrmi2 in the JNDI tree.>
Should we use the 'load balancing' option instead of 'server affinity'?
Any thoughts?
Thanks in adv.
Vijay

Test Reply
<Vijay Kumar> wrote in message news:[email protected]... -
Data Services 4.0 Designer. Job Execution but empty log file no matter what
Hi all,
I am running DS 4.0. When I execute my batch_job via Designer, the log window pops up but is blank, i.e. I cannot see any trace messages.
It doesn't matter if I select "Print all trace messages" in the execution properties.
The Job Server is running on a separate server. The only thing I have locally is just my Designer.
If I log into the Data Services Management Console and select the job server, I can see trace and error logs from the job. So I guess what I need is for this stuff to show up in my Designer?
Did I miss a step somewhere?
I can't find anything in the docs about this.
thanks
Edited by: Andrew Wangsanata on May 10, 2011 11:35 AM
Added additional detail

Awesome, thanks Manoj.
I found the log file. In it, the relevant lines for the last job I ran are:
(14.0) 05-11-11 16:52:27 (2272:2472) JobServer: Starting job with command line -PLocaleUTF8 -Utip_coo_ds_admin
-P+04000000001A030100100000328DE1B2EE700DEF1C33B1277BEAF1FCECF6A9E9B1DA41488E99DA88A384001AA3A9A82E94D2D9BCD2E48FE2068E59414B12E
48A70A91BCB -ek******** -G"70dd304a_4918_4d50_bf06_f372fdbd9bb3" -r1000 -T1073745950 -ncollect_cache_stats
-nCollectCacheSize -ClusterLevelJOB -Cmxxx -CaDesigner -Cjxxx -Cp3500 -CtBatch -LocaleGV
-BOESxxx.xxx.xxx.xxx -BOEAsecLDAP -BOEUi804716
-BOEP+04000000001A0301001000003F488EB2F5A1CAB2F098F72D7ED1B05E6B7C81A482A469790953383DD1CDA2C151790E451EF8DBC5241633C1CE01864D93
72DDA4D16B46E4C6AD -Sxxx.xxx.xxx -NMicrosoft_SQL_Server -Qlocal_repo coo ds local
repo_azdzgq4dnuxbm4xeriey1_e" -l"C:\Program Files (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
repo_azdzgq4dnuxbm4xeriey1_e/trace_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -z"C:\Program Files
(x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
repo_azdzgq4dnuxbm4xeriey1_e/error_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -w"C:\Program Files
(x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
repo_azdzgq4dnuxbm4xeriey1_e/monitor_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -Dt05_11_2011_16_52_27_9
(BODI-850052)
(14.0) 05-11-11 16:52:27 (2272:2472) JobServer: StartJob : Job '05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3' with pid '148' is kicked off
(BODI-850048)
(14.0) 05-11-11 16:52:28 (2272:2072) JobServer: Sending notification to <inet:10.165.218.xxx:56511> with message type <4> (BODI-850170)
(14.0) 05-11-11 16:52:28 (2272:2472) JobServer: AddChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
<inet:10.165.218.xxx:56511>. (BODI-850003)
(14.0) 05-11-11 17:02:32 (2272:2472) JobServer: RemoveChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
<inet:10.165.218.xxx:56511>. (BODI-850003)
(14.0) 05-11-11 19:57:45 (2272:2468) JobServer: GetRunningJobs() success. (BODI-850058)
(14.0) 05-11-11 19:57:45 (2272:2468) JobServer: PutLastJobs Success. (BODI-850001)
(14.0) 05-11-11 19:57:45 (2272:2072) JobServer: Sending notification to <inet:10.165.218.xxx:56511> with message type <5> (BODI-850170)
(14.0) 05-11-11 19:57:45 (2272:2472) JobServer: GetHistoricalLogStatus() Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
(14.0) 05-11-11 19:57:45 (2272:2472) JobServer: GetHistoricalLogStatus() Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
It does not look like I have any errors with respect to connectivity (or any errors at all).
Please advise on what, if anything, you notice from the log file and/or next steps I can take.
thanks. -
Inconsistencies between Analyzer Server Console and stdout.log file
Hi,

In the stdout.log file of the application server there is a record: "Setting Current User Count To: 2 Users. Maximum Concurrent Licensed User Count Is: 10 Users." So 2 licences are used, but checking the Analyzer Server Console there is only one user connected.

After restarting the computer, the Analyzer Server Console and stdout.log user counts are synchronized. But after some time the two parameters are not synchronized anymore.

My problem: I have to report the number of user licences used and I am reading the info from stdout.log. But something is not correct - it looks like stdout.log doesn't show correct values?

Do I need to specify some setting, or is there a bug in the code?

My system:
Hyperion Analytic Server version 7.1.0
Hyperion Analyzer Server version 7.0.1.8.01830
IBM DB2 Workgroup Edition version 8 fixpack 9
Tomcat version 4
Windows 2003 Server

Hi grofaty,

We use 7.0.0.0.01472 and I have experienced the same behaviour: Analyzer Server Console shows one more session than stdout.log. If this difference of 1 is a static value, you can assume it is a systematic bug and do your license counting around it.

But again, the Analyzer Server Console is not as good as it should be for productive usage, because all the information is only logged online until the next application restart; e.g. it is not helpful for user tracking purposes. Do you use stdout.log in such a way, or have an idea how to gather measures for session logging analysis:
- Session ID
- User ID
- Client ID
- Total Number of Requests
- Average Response (sec)
- Login Time
- Number of concurrent sessions
?
-
Centralized logging producing empty log files searches
Not sure what I am doing wrong here. Experimenting with Lync 2013 Centralized logging. I started the AlwaysOn scenario which was off by default. I checked the directories on all 3 of my FE servers and a bunch of ETL files are present so it's doing something.
Thing is, no matter how I search, the output (or the log file, if I pipe the output) is always empty. I am following the Microsoft document on Centralized Logging; they make it look so easy. Has anyone had success with this tool? It seems like a nice feature and more convenient than OCSLogger, but it's not producing the correct search results.

I am quickly finding out that this utility is nothing but a headache. I am getting errors in the Lync Server log telling me the threshold for logging has been reached. I changed CacheFileMaxDiskUsage from 80 to 20. 80%! Seriously, who wants a utility to take 80% of the disk space? Even at 20%, with a 125 GB drive, I should be able to go up to 25 GB. The ETL file was 14 MB and I started getting errors saying the threshold was reached!

Then I could not stop the scenario. I tried 3 times; either it would keep running or I got some weird error. I finally spelled AlwaysOn with the capitals, as if it were case sensitive, and it worked. This utility is whacked. Maybe I am doing something wrong.

According to the MS article, CacheFileMaxDiskUsage is defined as the percentage of disk space that can be used by the cache files. So 20 for this value means 20% of 125 GB, or, if it's talking about free disk space, 18 GB in my case. Below is the error I am getting; 90,479,939,584 is the amount of free space on the disk. I did do the search again and it did work this time, after I restarted the agent on all FE servers. If I can get around this threshold error I think I am in business.
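As a quick check of the numbers quoted above (treating the "125GB" drive as 125 GiB, which is an assumption), the cache cap and the free-space percentage work out as follows:

```python
# Figures taken from the post; treating "125GB" as 125 GiB is an assumption.
disk_size_bytes = 125 * 1024**3          # drive size
free_bytes = 90_479_939_584              # free space from the error text

cap_bytes = disk_size_bytes * 20 // 100  # CacheFileLocalMaxDiskUsage = 20%
print(cap_bytes / 1024**3)               # 25.0 (GiB the cache may use)

pct_free = 100 * free_bytes / disk_size_bytes
print(round(pct_free, 2))                # close to the 67.47% in the error text
```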
Lync Server Centralized Logging Service Agent Service reached the local disk usage threshold and no network share is configured
EtlFileFolder: c:\temp\tracing - 90,479,939,584 (67.47 %)
CacheFileLocalMaxDiskUsage: 20 %
CacheFileLocalFolders:
c:\temp\tracing - 90,479,939,584 (67.47 %)
CacheFileNetworkFolder: Not set
Cause: Lync Server Centralized Logging Service Agent Service will stop tracing when the local disk usage threshold is reached and no network share is configured. Verify CLS configuration using Get-CsCentralizedLoggingConfiguration. Necessary scenarios will
need to be re-enabled once the more space is made available locally or a network share is configured
Resolution:
Free up local disk space, or increase disk usage threshold for CLS, or configure network share with write permissions for Network Service account -
Hello,
For some transport orders only but not all, when I want to see the logs of the import (via SE01 or STMS) I only get a line of "##########". The physical file under UNIX is empty. I don't find anything in the SAP notes.
Has anybody ever experienced that ???
We have 1 central instance and 4 AS. The directory /usr/sap/trans is mounted via NFS.
Rgds,
Y.
Message was edited by: Youssef ANEGAY

If the directory is not mounted properly - it happened once for us. We reported the problem to the DB admin and he said the mount was not done properly; after a re-mount it was all OK.
Hope it helps..
Br,
Sri
Award points for helpful answers
Printing Word documents to PDF from Office Web Apps Server 2013 yields inaccessible PDF files
I have Office Web Apps Server 2013 SP1 with March 2015 CU applied. The version shows up as "15.0.4569.1506" using Todd Klindt's
method. When anonymous users view Word documents online through Office Web Apps, they have an option to print them to PDF. The PDF file that gets generated as a result isn't accessible (i.e. it isn't WCAG-compliant); using Adobe Acrobat Pro to check it, I can see that it is missing tags, for example. The source Word document was made fully accessible from the point of view of the Microsoft Office accessibility tools.
When viewing the same document in Word Online in Office 365, I get an extra option, "Download as PDF". The PDF document produced is accessible. If I save a Word 2013 document as a PDF from within the Word editor, it also generates an accessible PDF.
I was wondering whether the reason for the accessibility problem is that I have misconfigured my on-premises Office Web Apps, or whether the accessibility compliance isn't built in yet. If the latter is true, I was wondering if someone could indicate whether or not this is on the list of upcoming CUs.
Thanks in advance.

I just checked my WAC install; no, this is likely an Office Online-only feature, at least at this point in time.
Trevor Seward
Follow or contact me at...
  
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
Steps to empty SAPDB (MaxDB) log file
Hello All,
I am on a Red Hat OS with NW 7.1 CE and SAP DB as the back end. I am trying to log in but my log file is full. I want to empty the log file but I haven't done any data backup yet. Can anybody guide me on how to proceed with this problem?
I have some idea of what to do, as in the steps below:
1. Take a data backup (but I want to skip this step if possible, since this is a QA system, not production).
2. Take a log backup using the same method as the data backup but with type Log (am I right, or is there something else?).
3. The log will automatically be overwritten after log backups.
Or should I use this alternative, which I found in Note 869267 - FAQ: SAP MaxDB LOG area:
Can the log area be overwritten cyclically without having to make a log backup?
Yes, the log area can be automatically overwritten without log backups. Use the DBM command
util_execute SET LOG AUTO OVERWRITE ON
to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
Log backups are not possible after switching on automatic overwrite. Backup history is broken down and flagged by the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
util_execute SET LOG AUTO OVERWRITE OFF
and by creating a complete data backup in the ADMIN or ONLINE status.
Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
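For reference, the DBM commands from the note can be issued through dbmcli; the database name and DBM operator credentials below are placeholders, and the session prompt is shown schematically:

```
$ dbmcli -d QA1 -u control,secret
> util_execute SET LOG AUTO OVERWRITE ON
> util_execute SET LOG AUTO OVERWRITE OFF
```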
any reply will be highly appreciated.
Thanks
Mani

Hello Mani,
1. Please review the document "Using SAP MaxDB X Server Behind a Firewall" in the MaxDB library:
http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
"To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database."
Is the database server behind a firewall? If yes, then the X Server port needs to be open. You could restrict access to this port to the computers of your database administrators, for example.
Is "nq2host" the name of the database server? Could you ping the server "nq2host" from your machine?
2. If the database server and your PC are on the local area network, you could start the X Server on the database server and connect to the database using DB Studio on your PC, as Lars already told you.
See the document "Network Communication" at
http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
Thank you and best regards, Natalia Khlopina -
Resisting the creation of new log files when SQL SERVER is restarted
Hi,
I know that when SQL Server is restarted, new log files are created. But is it possible to avoid creating new log files and instead append log data to the existing log files that were in use before restarting SQL Server?

Hello,
I guess Raghvendra answered your question. As per your previous post, it's not clear what you want to ask, and you did not revert. Again, if your issue is solved, I'd appreciate it if you could mark the answer and vote the posts as helpful.
"Can I continue to log to the same file?"
What does this line mean, exactly? Yes, SQL Server will continue to use the same transaction log file (LDF file) for writing information as it was using before shutdown. If you are talking about the errorlog file, a new errorlog file is created, which you can read using
sp_readerrorlog
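For example, with the documented parameters (log file number first, then log type):

```sql
-- 0 = current log; log type 1 = SQL Server error log (2 would be SQL Agent)
EXEC sp_readerrorlog 0, 1;
```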
Even if you stopped the SQL Server service by mistake, it's not that the server is gone. Yes, when you stopped the server, all in-flight transactions were rolled back. When SQL Server comes back online it undergoes crash recovery and brings all the databases online by reading the transaction log file and performing redo and undo of information: all committed transactions are rolled forward and uncommitted ones are rolled back.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Hardening & Keeping Log files in 10.9
I'm not in IT but I'm trying to harden our Macs to please a client. I found several hardening tips and guides written for older versions of OS X, but none for 10.9. Does anyone know of a hardening guide written for 10.9?
Right now I've found a guide written for 10.8 and have been mostly successful implementing it, except for a couple of sticking points.
They suggested keeping security.log files for 30 days, but I found out that they got rid of security.log and most of its functionality is now in authd.log. I can't figure out how to keep authd logs for 30 days, though. Does anyone know how I can set this?
I also need to keep install.log for 30 days as well, but not seeing a way to control this in /etc/newsyslog.conf. Anyone know how to set this as well.
Does anyone know if the following audit flags should still work: lo,ad,fd,fm,-all?
I'm trying to keep system.log & appfirewall.log for 30 days as well. I've figured out these have moved from /etc/newsyslog.conf to /etc/asl.conf, but I'm not sure if I've set this correctly. Right now I have added "store_ttl=30" to these 2 lines in asl.conf. Should this work? Is there a better way to do this?
> system.log mode=0640 format=bsd rotate=seq compress file_max=5M all_max=100M store_ttl=30
? [= Facility com.apple.alf.logging] file appfirewall.log file_max=5M all_max=100M store_ttl=30

Hi Alex...
Jim,
who came up with this solution????
I got these solutions for creating log files and reconstructing the database from this forum a while back....probably last year sometime.
Up until recently after doing this, there has been no problem - the server runs as it should.
I dare to say pure luck.
The reason I do this is because if I don't, the server does NOT automatically create new empty .log files, and when it fills the current log file, it "crashes" with "unkown mailbox path" displayed for all mailboxes.
I would think you have some fundamental underlying issue there.
I assume by "unkown mailbox path" problem you mean a
corrupt cyrus database?
Yes, I believe that db corruption is the case...
You should never ever manually modify anything inside cyrus' configuration database. This is just a disaster waiting to happen.
If your database gets regularly corrupted, we need to
investigate why. Many possible reasons: related
processes crashing, disk failure, power
failure/surges and so on.
Aha!...about a month ago - thinking back to when this problem started - there was a power outage here, over a weekend! The hard drive was "kicked out" of the server box when I returned to work on that Monday....and that's when this problem started!
I suggest you increase the logging level for a few
days and keep an eye on things. Then post log
extracts and /etc/imapd.conf and we'll take it from
there.
Alex
Ok, thanks, will do!
P.S. Download mailbfr from here:
http://osx.topicdesk.com/downloads/
This will allow you to easily rebuild if needed and
most important to do proper backups of your mail
services.
Thanks for that, too. I will check it out and return to this forum with an update in the near future.
Jim
Mac OS X (10.3.9) -
Where are the OC4J server logs? I have found several empty log files in the j2ee_home\logs directory. How do I configure more log detail? Also, do servlet/JSP logs go somewhere else, i.e. to the Apache logs directory?
I am using Oracle9iAS Release 2 (9.0.3)
Thanks,
Allan

In an Oracle9iAS environment,
also see
/ora9ias/opmn/logs
You could use log4J as well.
-Prasad -
Node.js loss of permission to write/create log files
We have been operating Node.js as a worker role cloud service. To track server activity, we write log files (via log4js) to C:\logs
Originally the logging was configured with size-based roll-over. e.g. new file every 20MB. I noticed on some servers the sequencing was uneven
socket.log <-- current active file
socket.log.1
socket.log.3
socket.log.5
socket.log.7
it should be
socket.log.1
socket.log.2
socket.log.3
socket.log.4
Whenever there was an uneven sequence, I realised the beginning of each file revealed the Node process had been restarted. The Windows Azure event log further indicated the worker role hosting mechanism had found node.exe to have terminated abruptly.
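For context, the size-based roll-over described above is driven by the log4js file appender's maxLogSize/backups settings. A configuration sketch, assuming the current log4js API (the appender name, path, and backup count here are illustrative, not taken from the original setup):

```javascript
var log4js = require('log4js');

log4js.configure({
  appenders: {
    // 'socket' is an illustrative appender name
    socket: {
      type: 'file',
      filename: 'c:/logs/socket.log',
      maxLogSize: 20 * 1024 * 1024, // roll over when the file reaches ~20MB
      backups: 7                    // keep socket.log.1 ... socket.log.7
    }
  },
  categories: { default: { appenders: ['socket'], level: 'info' } }
});

var logger = log4js.getLogger();
logger.info('worker role started');
```

With this setup a healthy process produces a contiguous socket.log.1, socket.log.2, ... sequence; gaps like the ones listed above point to the process restarting mid-rotation rather than to the configuration itself.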
With no other information to clue in on what exactly was happening, I thought there was some fault with log4js' roll-over implementation (updating to the latest versions did not help). I subsequently switched to date-based roll-over mode, saw that roll-over happened every midnight, and was happy with it.
However, some weeks later I realised the roll-over was (not always, but pretty predictably) only happening every alternate midnight.
socket.log-2014-06-05
socket.log-2014-06-07
socket.log-2014-06-09
And each file again revealed that at midnight the roll-over did not happen; node.exe was crashing again. Additional logging on uncaughtException and exit events showed nothing, which seems to suggest node.exe was killed by external influence (e.g. a process kill), but it was unfathomable that anything in the OS would want to kill node.exe.
Additionally, having two instances in the cloud service, we observe the crashing of both node.exe within minutes of each other. Always. However, if we had two server instances brought up on different days, then the "schedule" for crashing would be offset by the difference of the instance launch dates.
Unable to trap more details of what was going on, we tried a different logging library - winston. winston has the additional feature of logging uncaughtExceptions, so it was not necessary to log that manually. Since winston does not have date-based roll-over, we went back to size-based roll-over; which obviously meant no more midnight crashes.
Eventually, I spotted some random midday crash today. It did not coincide with size-based rollover event, but winston was able to log an interesting uncaughtException.
"date": "Wed Jun 18 2014 06:26:12 GMT+0000 (Coordinated Universal Time)",
"process": {
"pid": 476,
"uid": null,
"gid": null,
"cwd": "E:
approot",
"execPath": "E:\\approot
node.exe",
"version": "v0.8.26",
"argv": ["E:\\approot\\node.exe", "E:\\approot\\server.js"],
"memoryUsage":
{ "rss": 80433152, "heapTotal": 37682920, "heapUsed": 31468888 }
"os":
{ "loadavg": [0, 0, 0], "uptime": 163780.9854492 }
"trace": [],
"stack": ["Error: EPERM, open 'c:\\logs\\socket1.log'"],
"level": "error",
"message": "uncaughtException: EPERM, open 'c:\\logs\\socket1.log'",
"timestamp": "2014-06-18T06:26:12.572Z"
Interesting question: the Node process _was_ writing to socket1.log all along; why would there be a sudden EPERM error?
On restart it could resume writing to the same log file. Or in previous cases it would seem like the lack of permission to create a new log file.
Any clues on what could possibly cause this, on a "scheduled" basis per server? Given that it happens so frequently and in sync with sister instances in the cloud service, something is happening behind the scenes which I cannot put a finger on.
thanks
The melody of logic will always play out the truth. ~ Narumi Ayumu, Spiral
Hi,
It is strange. From your description, how many instances does your worker role have? Do you store the log files on the VM's local disk? To avoid this issue, the best choice would be to store your log files in Azure Blob storage; that way, all log files will be stored in Blob storage. About how to use Azure Blob storage, please see this doc:
http://azure.microsoft.com/en-us/documentation/articles/storage-introduction/
Please try it.
If I misunderstood, please let me know.
Regards,
Will