GRC AC 10 SPM Empty Logs
Hello Experts,
I am observing something strange: when retrieving FF logs in the UI, some logs in the session details are populated while others are blank. What could be the reason for this? The logs under "Log Summary Reports" are empty for some sessions. Is there a way I can cross-check, or any Notes regarding this? The SM20 logs in the backend systems are fine and show the transaction logs, yet they are empty in GRC.
Regards,
Anthony
Hello Simon,
We applied the Note for the time-zone settings, but still not all transactions recorded in SM20 are appearing in the SPM logs. Are there any other Notes, or should we raise a message with SAP?
Best Regards,
Anthony Hjelm
Similar Messages
-
Empty Log File - log settings will not save
Description of Problem or Question:
Cannot get logging to work in folder D:\Program Files\Business Objects\Dashboard and Analytics 12.0\server\log
(empty log file is created)
Product\Version\Service Pack\Fixpack (if applicable):
BO Enterprise 12.0
Relevant Environment Information (OS & version, java or .net & version, DB & version):
Server: Windows Server 2003 Enterprise SP2.
Database: Oracle 10g
Client: Vista
Sporadic or Consistent (if applicable):
Consistent
What has already been tried (where have you searched for a solution to your question/problem):
Searched forum, SMP
Steps to Reproduce (if applicable):
From InfoViewApp, logged in as Admin
Open -> Dashboard and Analytics Setup -> Parameters -> Trace
Check "Log to folder" and "SQL Queries", Click Apply.
Now, navigate away and return to this page - the "Log to folder" box is unchecked again, and an empty log file is created. -
Send Apple feedback. They won't answer, but at least they will know there is a problem. If enough people send feedback, it may get the problem solved sooner.
Feedback
Or you can use your Apple ID to register with this site and go the Apple BugReporter. Supposedly you will get an answer if you submit feedback.
Feedback via Apple Developer
Do a backup.
Quit the application.
Go to Finder and select your user/home folder. With that Finder window as the front window, either select Finder > View > Show View Options or press Command-J. When the View Options window opens, check 'Show Library Folder'. That should make your user Library folder visible in your user/home folder. Select Library, then go to Preferences/com.apple.systempreferences.plist. Move the .plist to your desktop.
Restart, open the application and test. If it works okay, delete the plist from the desktop.
If the application behaves the same, return the .plist to where you got it from, overwriting the newer one.
Thanks to leonie for some information contained in this. -
Empty Log files not deleted by Cleaner
Hi,
We have a NoSQL database installed on 3 nodes with a replication factor of 3 (see the exact topology below).
We ran a test which consisted of the following operations repeated in a loop: store a LOB, read it, delete it.
store.putLOB(key, new ByteArrayInputStream(source), Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
store.getLOB(key, Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
During the test, the space occupied by the database keeps growing!
The cleaner threads are running, but they log these warnings:
2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
Log files are not deleted even when empty, as shown by the DbSpace utility:
java -cp /mam2g/kv-3.2.5/lib/kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env
File Size (KB) % Used
00000000 12743 0
00000001 12785 0
00000002 12725 0
00000003 12719 0
00000004 12703 0
00000005 12751 0
00000006 12795 0
00000007 12725 0
00000008 12752 0
00000009 12720 0
0000000a 12723 0
0000000b 12764 0
0000000c 12715 0
0000000d 12799 0
0000000e 12724 1
0000000f 5717 0
TOTALS 196867 0
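As a quick way to spot which files the cleaner considers fully obsolete, the DbSpace listing can be filtered for 0% utilization. A small awk sketch over a few of the rows quoted above (the sample rows are pasted inline purely for illustration):

```shell
# Filter a DbSpace listing for log files at 0% utilization -- the files
# the cleaner would delete if replication were not protecting them.
# Three sample rows are copied from the listing above.
dbspace_rows='00000000 12743 0
0000000e 12724 1
0000000f 5717 0'
printf '%s\n' "$dbspace_rows" | awk '$3 == 0 { print $1 }'
# prints 00000000 and 0000000f; 0000000e (1% used) is kept
```

In a real environment you would pipe the actual DbSpace output through the same awk filter instead of the inline sample.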
Here is the configured topology:
kv-> show topology
store=MMS-KVstore numPartitions=90 sequence=106
zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
[rg1-rn1] RUNNING
single-op avg latency=4.414467 ms multi-op avg latency=0.0 ms
[rg2-rn1] RUNNING
single-op avg latency=1.5962526 ms multi-op avg latency=0.0 ms
[rg3-rn1] RUNNING
single-op avg latency=1.3068943 ms multi-op avg latency=0.0 ms
sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
[rg1-rn2] RUNNING
single-op avg latency=1.5670061 ms multi-op avg latency=0.0 ms
[rg2-rn2] RUNNING
single-op avg latency=8.637241 ms multi-op avg latency=0.0 ms
[rg3-rn2] RUNNING
single-op avg latency=1.370075 ms multi-op avg latency=0.0 ms
sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
[rg1-rn3] RUNNING
single-op avg latency=1.4707285 ms multi-op avg latency=0.0 ms
[rg2-rn3] RUNNING
single-op avg latency=1.5334034 ms multi-op avg latency=0.0 ms
[rg3-rn3] RUNNING
single-op avg latency=9.05199 ms multi-op avg latency=0.0 ms
shard=[rg1] num partitions=30
[rg1-rn1] sn=sn1
[rg1-rn2] sn=sn2
[rg1-rn3] sn=sn3
shard=[rg2] num partitions=30
[rg2-rn1] sn=sn1
[rg2-rn2] sn=sn2
[rg2-rn3] sn=sn3
shard=[rg3] num partitions=30
[rg3-rn1] sn=sn1
[rg3-rn2] sn=sn2
[rg3-rn3] sn=sn3
Why are the empty files not deleted by the cleaner? And why are empty log files protected by replication if all the replicas seem to be aligned with the master?
java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
Pinging components of store MMS-KVstore based upon topology sequence #106
Time: 2015-02-03 13:44:57 UTC
MMS-KVstore comprises 90 partitions and 3 Storage Nodes
Storage Node [sn1] on 192.168.144.11:5000 Zone: [name=MAMHA id=zn1 type=PRIMARY] Status: RUNNING Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC Build id: 7ab4544136f5
Rep Node [rg1-rn1] Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
Rep Node [rg2-rn1] Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
Rep Node [rg3-rn1] Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
Storage Node [sn2] on 192.168.144.12:6000 Zone: [name=MAMHA id=zn1 type=PRIMARY] Status: RUNNING Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC Build id: 7ab4544136f5
Rep Node [rg3-rn2] Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
Rep Node [rg2-rn2] Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
Rep Node [rg1-rn2] Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
Storage Node [sn3] on 192.168.144.35:7000 Zone: [name=MAMHA id=zn1 type=PRIMARY] Status: RUNNING Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC Build id: 7ab4544136f5
Rep Node [rg1-rn3] Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
Rep Node [rg2-rn3] Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
Rep Node [rg3-rn3] Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013
Solved by setting the undocumented parameter "je.rep.minRetainedVLSNs".
The solution is described in the NoSQL forum thread "Store cleaning policy". -
Getting empty log files with log4j and WebLogic 10.0
Hi!
I get empty log files with log4j 1.2.13 and WebLogic 10.0. If I don't run the application in the application server, the logging works fine.
The properties file is located in a jar in the LIB folder of the deployed project. If I change the log file name in the properties file, it just creates a new empty file under the new name.
What could be wrong?
Thanks!
I assume that when you change the name of the expected log file in the properties file, the new empty file gets that name, correct?
That means you're at least getting that properties file loaded by log4j, which is a good sign.
As the file ends up empty, it appears that no logger statements are being executed at a level that passes the configured threshold. Can you throw in a logger.error() call at a point you're certain is executed? -
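For reference, a minimal log4j 1.2 properties file that sends everything at DEBUG and above to a single file can help rule out threshold problems. The appender name and file path below are illustrative placeholders, not taken from the original post:

```properties
# Minimal troubleshooting configuration: log everything to one file.
# "FILE" and the path are placeholders -- adapt to your deployment.
log4j.rootLogger=DEBUG, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=/tmp/app-debug.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n
```

If a logger.error() call still produces nothing with a configuration like this, the container is likely loading a different log4j configuration than the one packaged in your jar.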
SPM Reporting - Log reports display on front end
Hi,
We have implemented GRC 5.3 and have an issue on the SPM reporting through the front end.
We have done config in dev and have the following jobs running in the background every 1 hour:
1. /VIRSA/ZVFATBAK
2. /VIRSA/ZVFAT_LOG_REPORT
3. /VIRSA/ZVFAT_V01
4. /VIRSA/ZVFAT_V03
The log on the back end displays all activity of the FF user, but on the front end I cannot get any reports to display.
We have created the connectors via the config page, but when trying to drill through the selection criteria we found that the FF user does not come up as a selection variable. We do see the system defined, though.
Any suggestions on how to get the reporting to display on the front end?
Kind Regards, Melvin
Santosh,
You should only need the /virsa/zvfatbak job running in the background. However, if the emails are not triggered for the log reports, you may wish to schedule the /virsa/zvfat_log_report program to run after completion of /virsa/zvfatbak, as that is the program which actually sends the notifications.
Regarding the RFC user, check the authorisations held in the Firefighter Administrator role as those will not be too far away!
Simon -
Friends,
I am still getting the error from the thread ( ** ) for SEFAZ GO.
I followed the steps as instructed, but when running /XNFE/UPDATE_ERP_STATUS_DIAL I get the message:
Log number missing for NF-e 52521102xxxxxxxxxxxxxx550010000127131337932349 (message no. /XNFE/APP059)
GRC STATUS:
NF-e status: OK (green)
Process status: 05 (Result received)
GRC status: 102 ("Inutilização de número homologado" - voiding of an approved number)
ERP STATUS:
Step: In processing; no manual action required
Doc. status: Rejected
SCS: 3 (Request rejected & authorized to be disregarded)
_LOW: G
Can you help me by saying whether any status is incorrect in GRC or in SAP, and whether that is why the function issues the message "Log number missing for NF-e 52..."?
Thank you very much
Rodrigo Alves
Good morning Rodrigo,
Sorry, I had not read your message in the original thread carefully enough.
It is exactly the same issue, and I ask you to open a message under SLL-NFE for a manual reversal.
Best regards, Fernando Da Ró -
Hello,
For some transport requests, but not all, when I want to see the import logs (via SE01 or STMS) I only get a line of "##########". The physical file under UNIX is empty. I cannot find anything in the SAP Notes.
Has anybody ever experienced that?
We have 1 central instance and 4 AS. The directory /usr/sap/trans is mounted via NFS.
Rgds,
Y.
Message was edited by:
Youssef ANEGAY
If the directory is not mounted properly... it happened to us once. We reported the problem to the DB admin, who said the mount was not done properly; after a re-mount everything was OK again.
Hope it helps..
Br,
Sri
Award points for helpful answers -
After installing GRC 5.3-SPM-Not able to see all features in configuration
Hello Experts,
We have installed GRC 5.3 Superuser Privilege Management on a standalone Java system, and when we try to configure it we cannot see all the features on the configuration tab.
We have created a user and assigned all roles starting with FF then tried login to
http://<hostname>:<port>/webdynpro/dispatcher/sap.com/grc~ffappcomp/Firefighter
I can log in successfully, but when I go into the configuration tab I cannot see all the features. Could you please help me out?
Thanks & Regards,
Nagaraju
Dear Nagaraju,
You may be seeing only a single "Connectors" entry in the configuration tab in SPM. If so, you are not missing anything here: it is the only configuration we need to do in SPM on the Java side.
If it is not showing, that means you have missed a step when assigning the Firefighter role in SPM. Check it out.
Regards,
Sabita Das -
When I type in my login and password and press Enter, the site either returns to the blank screen or does nothing. This is not limited to one or two sites: all of my financial institutions, the electric company, insurance companies, even trying to register here so that I could ask this question. When I hit the register button, nothing happened; it's as if the button were inactive. I had to go to IE to register and post this question. I just tried to enter my new login and password on the screen in Firefox and nothing happens. Some sites behave differently when I hit the submit or login button: the information seems to be transmitted, but the screen comes right back to the original login screen with blank fields. There are no error messages saying that something does not match or is missing. Then I go to IE and it works.
This started a few weeks ago. I can't identify anything that was added around that time, but I tried a system restore and that did nothing. I deleted Firefox and downloaded the latest version, 3.6.13, and it is still happening.
This issue can be caused by corrupted cookies.
Clear the cache and the cookies from sites that cause problems.
"Clear the Cache":
* Tools > Options > Advanced > Network > Offline Storage (Cache): "Clear Now"
"Remove Cookies" from sites causing problems:
* Tools > Options > Privacy > Cookies: "Show Cookies"
*http://kb.mozillazine.org/Cookies -
Data Services 4.0 Designer. Job Execution but empty log file no matter what
Hi all,
I am running DS 4.0. When I execute my batch job via Designer, the log window pops up but is blank, i.e. I cannot see any trace messages.
It doesn't matter whether I select "Print all trace messages" in the execution properties.
The Job Server is running on a separate server; the only thing I have locally is my Designer.
If I log into the Data Services Management Console and select the job server, I can see trace and error logs from the job. So I guess what I need is for this to show up in my Designer?
Did I miss a step somewhere?
I can't find anything in the docs about this.
thanks
Edited by: Andrew Wangsanata on May 10, 2011 11:35 AM
Added additional detail
Awesome, thanks Manoj.
I found the log file. In it, the relevant lines for the last job I ran are:
(14.0) 05-11-11 16:52:27 (2272:2472) JobServer: Starting job with command line -PLocaleUTF8 -Utip_coo_ds_admin
-P+04000000001A030100100000328DE1B2EE700DEF1C33B1277BEAF1FCECF6A9E9B1DA41488E99DA88A384001AA3A9A82E94D2D9BCD2E48FE2068E59414B12E
48A70A91BCB -ek******** -G"70dd304a_4918_4d50_bf06_f372fdbd9bb3" -r1000 -T1073745950 -ncollect_cache_stats
-nCollectCacheSize -ClusterLevelJOB -Cmxxx -CaDesigner -Cjxxx -Cp3500 -CtBatch -LocaleGV
-BOESxxx.xxx.xxx.xxx -BOEAsecLDAP -BOEUi804716
-BOEP+04000000001A0301001000003F488EB2F5A1CAB2F098F72D7ED1B05E6B7C81A482A469790953383DD1CDA2C151790E451EF8DBC5241633C1CE01864D93
72DDA4D16B46E4C6AD -Sxxx.xxx.xxx -NMicrosoft_SQL_Server -Qlocal_repo coo ds local
repo_azdzgq4dnuxbm4xeriey1_e" -l"C:\Program Files (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
repo_azdzgq4dnuxbm4xeriey1_e/trace_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -z"C:\Program Files
(x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
repo_azdzgq4dnuxbm4xeriey1_e/error_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -w"C:\Program Files
(x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
repo_azdzgq4dnuxbm4xeriey1_e/monitor_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -Dt05_11_2011_16_52_27_9
(BODI-850052)
(14.0) 05-11-11 16:52:27 (2272:2472) JobServer: StartJob : Job '05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3' with pid '148' is kicked off
(BODI-850048)
(14.0) 05-11-11 16:52:28 (2272:2072) JobServer: Sending notification to <inet:10.165.218.xxx:56511> with message type <4> (BODI-850170)
(14.0) 05-11-11 16:52:28 (2272:2472) JobServer: AddChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
<inet:10.165.218.xxx:56511>. (BODI-850003)
(14.0) 05-11-11 17:02:32 (2272:2472) JobServer: RemoveChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
<inet:10.165.218.xxx:56511>. (BODI-850003)
(14.0) 05-11-11 19:57:45 (2272:2468) JobServer: GetRunningJobs() success. (BODI-850058)
(14.0) 05-11-11 19:57:45 (2272:2468) JobServer: PutLastJobs Success. (BODI-850001)
(14.0) 05-11-11 19:57:45 (2272:2072) JobServer: Sending notification to <inet:10.165.218.xxx:56511> with message type <5> (BODI-850170)
(14.0) 05-11-11 19:57:45 (2272:2472) JobServer: GetHistoricalLogStatus() Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
(14.0) 05-11-11 19:57:45 (2272:2472) JobServer: GetHistoricalLogStatus() Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
It does not look like I have any errors with respect to connectivity (or any errors at all...).
Please advise on what, if anything, you notice from the log file, and/or next steps I can take.
thanks. -
GRC AC v10 SPM WF - Workflow Item not showing up in WF Inbox
GRC AC v10 - SP12
The Outlook email notification for the workflow item goes out, but there is nothing in the NWBC inbox for the WF item. Substitution is set up correctly.
Any ideas?
-john
Hi John
This is probably a silly question, but what substitution did you set up for ZFF_CTL_01? I assume the item is in that user's inbox. Which user is meant to be receiving it?
I also noticed KB article 1589130, which mentions that the delegated person needs GRAC_REQ authorisation. Have you checked whether it is a security access issue?
There was also mention that the delegated approver does not appear in the MSMP instance runtime (your screenshot suggests the same situation, unless you have not set up the delegation). SP14 delivers the fix; also refer to Note 1915928 - UAM: Delegated Approver is not visible in the Instance status.
Possibly have a look at both of them to see if they resolve your issue.
Regards
Colleen -
Centralized logging producing empty log files searches
Not sure what I am doing wrong here. I am experimenting with Lync 2013 centralized logging. I started the AlwaysOn scenario, which was off by default. I checked the directories on all 3 of my FE servers and a bunch of ETL files are present, so it's doing something.
Thing is, no matter how I search, the output (or the log file, if I pipe the output) is always empty. I am following the Microsoft documentation on centralized logging; they make it look so easy. Has anyone had success with this tool? It seems like a nice feature, and more convenient
than OCSLogger, but it's not producing the correct search results.
I am quickly finding out that this utility is nothing but a headache. I am getting errors in the Lync Server log telling me the threshold for logging has been reached. I changed
CacheFileMaxDiskUsage from 80 to 20. 80%! Seriously, who wants a utility to take 80% of the disk space? Even at 20%, with a 125 GB drive, I should be able to go up to 25 GB. The ETL file was 14 MB and I started getting errors saying the threshold
was reached!
Then I could not stop the scenario. I tried 3 times: either it would keep running or I got some weird error. I finally spelled AlwaysOn with the capitals, as though it were case sensitive, and it worked. This utility is whacked; maybe I am doing something wrong.
According to the MS article, CacheFileMaxDiskUsage is defined as the percentage of disk space that can be used by the cache files. So 20 for this value means 20% of 125 GB, or, if it is talking about free disk space, 18 GB in my case. Below is the error I am
getting; 90,479,939,584 is the amount of free space on the disk. I did the search again and it did work this time. I restarted the agent on all FE servers. If I can get around this threshold error, I think I am in business.
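The percentage arithmetic in question can be sanity-checked with a quick shell sketch (the 125 GB figure is from this post; CacheFileLocalMaxDiskUsage is documented as a percentage of disk space):

```shell
# How much space does CacheFileLocalMaxDiskUsage=20 allow on a 125 GB
# volume? 20% of 125 GB is 25 GB -- far above the 14 MB ETL file that
# triggered the threshold warning in this post.
disk_gb=125
threshold_pct=20
max_cache_gb=$(( disk_gb * threshold_pct / 100 ))
echo "$max_cache_gb"
# -> 25
```

If the numbers work out like this and the warning still fires, the threshold may be evaluated against free space rather than total size, which is worth verifying with Get-CsCentralizedLoggingConfiguration.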
Lync Server Centralized Logging Service Agent Service reached the local disk usage threshold and no network share is configured
EtlFileFolder: c:\temp\tracing - 90,479,939,584 (67.47 %)
CacheFileLocalMaxDiskUsage: 20 %
CacheFileLocalFolders:
c:\temp\tracing - 90,479,939,584 (67.47 %)
CacheFileNetworkFolder: Not set
Cause: Lync Server Centralized Logging Service Agent Service will stop tracing when the local disk usage threshold is reached and no network share is configured. Verify CLS configuration using Get-CsCentralizedLoggingConfiguration. Necessary scenarios will
need to be re-enabled once the more space is made available locally or a network share is configured
Resolution:
Free up local disk space, or increase disk usage threshold for CLS, or configure network share with write permissions for Network Service account -
Hi,
I would like to know whether the SPM tables are transportable, or do we need to open the client every time we update the tables?
Thanks
Sam
Hi Ahmed,
You can download and upload all configurations within the backend /n/virsa/zvfat tcode itself.
Go to Utilities > Download and take the desired dump; you will have to do them one by one. Then upload it on the QA and Prod servers in the same place: Utilities > Upload.
Just two things might be different in each system - the RFC destination and the connector; make sure to change them manually after the upload, and it will work fine.
FFIDs are to be created manually and Roles are to be transported and attached.
Regards,
Sabita -
Discoverer 3i server trace yields empty log file
Has anyone had issues with setting up a Discoverer 3i trace on the server? Our viewer shows version 3.3.62.02. I have tried Registry entries under 'HKEY_CURRENT_USER\Software\Oracle\Discoverer 3.1' (per the documentation) and also in 'Discoverer 3.3', in case it thought that was the version. I have varied file names and parameters, stopping and restarting the Discoverer server service each time (and sometimes rebooting, just to be sure).
Each time I enter Viewer, it creates a new 'Discoverer.log' file in 'Winnt/System32' of zero bytes, whether or not I indicate that is supposed to be the file name, but never writes anything into it, whether the workbook works correctly or not. I have done this before on a workstation version with no problems.
Am I missing something on the server side?
Thanks,
Ron -
Steps to empty SAPDB (MaxDB) log file
Hello All,
I am on Red Hat Linux with NW 7.1 CE and SAP MaxDB as the back end. I am trying to log in, but the log area is full. I want to empty the log, but I haven't done any data backup yet. Can anybody guide me on how to proceed with this problem?
I do have some idea of what to do, like the steps below:
1. Take a data backup (but I want to skip this step if possible, since this is a QA system and we are not a production company).
2. Take a log backup using the same method as the data backup, but with the Log type (am I right, or is there something else?).
3. It will automatically overwrite the log after the log backups.
Or, as an alternative, should I use this, which I found in Note 869267 - FAQ: SAP MaxDB LOG area?
Can the log area be overwritten cyclically without having to make a log backup?
Yes, the log area can be automatically overwritten without log backups. Use the DBM command
util_execute SET LOG AUTO OVERWRITE ON
to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
Log backups are not possible after switching on automatic overwrite. Backup history is broken down and flagged by the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
util_execute SET LOG AUTO OVERWRITE OFF
and by creating a complete data backup in the ADMIN or ONLINE status.
Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
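The DBM commands quoted from the Note are usually issued through the MaxDB dbmcli tool. A hedged sketch: the database name and DBM operator credentials below are placeholders, and some MaxDB versions require a utility session (util_connect) first, so check the documentation for your release:

```shell
# Placeholders: <DBNAME>, <dbm_user>, <password>.
# Switch cyclic log overwrite on (QA/demo use only, per Note 869267):
dbmcli -d <DBNAME> -u <dbm_user>,<password> util_execute SET LOG AUTO OVERWRITE ON
# ...and off again, followed by a complete data backup in ADMIN or
# ONLINE state to restart the backup history:
dbmcli -d <DBNAME> -u <dbm_user>,<password> util_execute SET LOG AUTO OVERWRITE OFF
```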
any reply will be highly appreciated.
Thanks
Mani
Hello Mani,
1. Please review the document "Using SAP MaxDB X Server Behind a Firewall" in the MaxDB library:
http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
"To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database."
Is the database server behind a Firewall? If yes, then the Xserver port need to be open. You could restrict access to this port to the computers of your database administrators, for example.
Is "nq2host" the name of the database server? Could you ping to the server "nq2host" from your machine?
2. If the database server and your PC are on the same local area network, you can start the x_server on the database server and connect to the database using DB Studio on your PC, as Lars already told you.
See the document "Network Communication" at
http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
Thank you and best regards, Natalia Khlopina