CVP log file showing port utilization
Hello,
I need to know where in the CVP VXML Server I can find the log file that shows VXML port utilization.
Does anyone know which file that is?
Thank you,
Sahar Hanna
There is a folder called GlobalCallLogger on the VXML server that contains call_logYYYY-MM-DD.txt files. If you convert one of these files to CSV, you can see VXML port utilization for all VXML applications in use.
Log file path:
C:\Cisco\CVP\VXMLServer\logs\GlobalCallLogger
If this helps, please rate it.
Bhushan.
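As a rough illustration of the conversion step Bhushan describes, here is a minimal Python sketch; note the field delimiter and column layout of the call_log files vary by CVP version, so both are assumptions to adjust:

```python
import csv

def call_log_to_csv(src_path, dst_path, delimiter=";"):
    """Convert a delimiter-separated call_log txt file to CSV.

    The delimiter and column layout of call_logYYYY-MM-DD.txt are
    version-dependent assumptions here; check a sample line first.
    """
    with open(src_path) as src, open(dst_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for line in src:
            line = line.rstrip("\n")
            if line:  # skip blank lines
                writer.writerow(line.split(delimiter))
```

Once converted, the file opens directly in Excel, where the port-utilization columns can be filtered per application.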
Similar Messages
-
New Log Files Not 100% Utilization
I just recently deployed my web app with new configurations to require my BDB to have at least 65% disk utilization (up from the default 50%) and 25% minimum file utilization (up from the default 5%). On start of my app, I temporarily coded an env.cleanLog() to force a clean of the logs to bring the DB up to these utilization parameters.
I've deployed the app, and it's been chugging away at the clean process now for about 2.5 hours. It is generating a new 100 MB log file approximately every minute as it attempts to compact the DB. However, at 2.5 hours, I just ran DbSpace and the DB utilization is only up to 51%. Perhaps most surprising are the following two facts:
1) The newly created log files from the cleaner are nowhere near 100% utilization
2) Some of the newly created log files have already now been cleaned (deleted) themselves.
There are 0 updates happening to the BDB while this process is running. My understanding was that when the cleaner removed old files and created new files, only good/valid data is written to the new files. However, this doesn't seem to be the case. Can someone enlighten me? I'm happy to read docs if someone can point me in the right direction.
Thanks.

Sorry, somehow I missed that you weren't doing any updates, even though you said so. I apologize.
However, your cache size is much too small for your data set. Running DbCacheSize with a rough approximation of your data set size gives:
~/je.cvs$ java -jar build/lib/je.jar DbCacheSize -records 50000000 -key 30 -data 300
Inputs: records=50000000 keySize=30 dataSize=300 nodeMax=128 binMax=128 density=80% overhead=10%
=== Cache Sizing Summary ===
Cache Size Btree Size Description
3,543,168,702 3,188,851,832 Minimum, internal nodes only
3,963,943,955 3,567,549,560 Maximum, internal nodes only
22,209,835,368 19,988,851,832 Minimum, internal nodes and leaf nodes
22,630,610,622 20,367,549,560 Maximum, internal nodes and leaf nodes
=== Memory Usage by Btree Level ===
Minimum Bytes Maximum Bytes Nodes Level
3,157,713,227 3,532,713,035 488,281 1
30,834,656 34,496,480 4,768 2
297,482 332,810 46 3
6,467 7,235 1 4

Apparently when you started, the cleaner was "behind" -- utilization was low. Without enough cache to hold the internal nodes -- 3.5GB according to DbCacheSize -- the cleaner will either take a very long time or never be able to clean up to 50% utilization.
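For what it's worth, the node counts in the DbCacheSize summary above follow from simple fanout arithmetic (binMax × density entries per node); a rough sketch, whose results differ from DbCacheSize's own numbers only by rounding:

```python
import math

def btree_levels(records, bin_max=128, density=0.8):
    """Estimate the number of Btree nodes per level, bottom up.

    Each bottom internal node (BIN) holds roughly bin_max * density
    entries, and every higher level fans out by the same factor.
    This mirrors the arithmetic behind DbCacheSize's level table.
    """
    fanout = bin_max * density  # ~102.4 entries per node
    levels = []
    nodes = records
    while nodes > 1:
        nodes = math.ceil(nodes / fanout)
        levels.append(nodes)
    return levels

print(btree_levels(50_000_000))  # → [488282, 4769, 47, 1]
```

The first entry (~488K level-1 nodes) is what drives the ~3.5GB "internal nodes only" minimum in the summary above.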
Do you really only have that much memory available, or is this just a dev environment issue?
--mark -
Trans.log file showing 'ERROR = CONNECT failed with sql error 12541'
Dear All
We have upgraded our database from Oracle 9.2.0.8 to Oracle 10.2.0.4 on our ECC 5.0 IS-AFS SAP system. When we check the version from SQL, it shows 10.2.0.4, which means the database was successfully upgraded. Now, when we log in as the <SID>adm user and run R3trans -d, it shows
'This is R3trans version 6.13 (release 640 - 07.01.08 - 14:25:00).
unicode enabled version
2EETW169 no connect possible: "DBMS = ORACLE --- dbs_ora_tnsname = 'VAP'"
R3trans finished (0012).'
In the trans.log file it is showing,
4 ETW000 R3trans version 6.13 (release 640 - 07.01.08 - 14:25:00).
4 ETW000 unicode enabled version
4 ETW000 ===============================================
4 ETW000
4 ETW000 date&time : 22.06.2009 - 11:56:40
4 ETW000 control file: <no ctrlfile>
4 ETW000 R3trans was called as follows: R3trans -d
4 ETW000 trace at level 2 opened for a given file pointer
4 ETW000 [dev trc ,00000] Mon Jun 22 11:56:40 2009 110 0.000
110
4 ETW000 [dev trc ,00000] db_con_init called 27 0.000
137
4 ETW000 [dev trc ,00000] create_con (con_name=R/3) 62 0.000
199
4 ETW000 [dev trc ,00000] Loading DB library '/usr/sap/VAP/SYS/exe/run/dboraslib.o' ...
4 ETW000 77 0.000
276
4 ETW000 [dev trc ,00000] load shared library (/usr/sap/VAP/SYS/exe/run/dboraslib.o), hdl
0
4 ETW000 8142 0.008
418
4 ETW000 [dev trc ,00000] Library '/usr/sap/VAP/SYS/exe/run/dboraslib.o' loaded
4 ETW000 43 0.008
461
4 ETW000 [dev trc ,00000] function DbSlExpFuns loaded from library /usr/sap/VAP/SYS/exe/r
un/dboraslib.o
4 ETW000 61 0.008
522
4 ETW000 [dev trc ,00000] Version of '/usr/sap/VAP/SYS/exe/run/dboraslib.o' is "640.00",
patchlevel (0.276)
4 ETW000 421 0.008
943
4 ETW000 [dev trc ,00000] function dsql_db_init loaded from library /usr/sap/VAP/SYS/exe/
run/dboraslib.o
4 ETW000 46 0.008
989
4 ETW000 [dev trc ,00000] function dbdd_exp_funs loaded from library /usr/sap/VAP/SYS/exe
/run/dboraslib.o
4 ETW000 75 0.009
064
4 ETW000 [dev trc ,00000] New connection 0 created 48 0.009
112
4 ETW000 [dev trc ,00000] 0: name = R/3, con_id = -000000001 state = DISCONNECTED, perm =
YES, reco = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO
4 ETW000 53 0.009
165
4 ETW000 [dev trc ,00000] db_con_connect (con_name=R/3) 55 0.009
220
4 ETW000 [dev trc ,00000] find_con_by_name found the following connection for reuse:
4 ETW000 53 0.009
273
4 ETW000 [dev trc ,00000] 0: name = R/3, con_id = 000000000 state = DISCONNECTED, perm =
YES, reco = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO
4 ETW000 49 0.009
322
4 ETW000 [dev trc ,00000] Got ORACLE_HOME=/oracle/VAP/102_64 from environment
4 ETW000 521 0.009
843
4 ETW000 [dev trc ,00000] -->oci_initialize (con_hdl=0) 84 0.009
927
4 ETW000 [dev trc ,00000] got NLS_LANG='AMERICAN_AMERICA.UTF8' from environment
4 ETW000 65 0.009
992
4 ETW000 [dev trc ,00000] Client NLS settings: AMERICAN_AMERICA.UTF8 2033 0.012
025
4 ETW000 [dev trc ,00000] Logon as OPS$-user to get SAPVAP's password 49 0.012
074
4 ETW000 [dev trc ,00000] Connecting as /@VAP on connection 0 (nls_hdl 0) ... (dbsl 640 180309)
4 ETW000 57 0.012
131
4 ETW000 [dev trc ,00000] Nls CharacterSet NationalCharSet C
EnvHp ErrHp ErrHpBatch
4 ETW000 49 0.012
180
4 ETW000 [dev trc ,00000] 0 UTF8 1
0x1113a4400 0x1113b6b00 0x1113b63b8
4 ETW000 58 0.012
238
4 ETW000 [dev trc ,00000] Allocating service context handle for con_hdl=0 48 0.012
286
4 ETW000 [dev trc ,00000] Allocating server context handle 42 0.012
328
4 ETW000 [dev trc ,00000] Attaching to DB Server VAP (con_hdl=0,svchp=0x1113b97d8,srvhp=0
x1113b99f8)
4 ETW000 95 0.012
423
4 ETW000 [dboci.c ,00000] *** ERROR => OCI-call 'OCIServerAttach' failed: rc = 12541
4 ETW000 4556 0.016
979
4 ETW000 [dbsloci. ,00000] *** ERROR => CONNECT failed with sql error 12541.
4 ETW000 59 0.017
038
4 ETW000 [dev trc ,00000] set_ocica() -> OCI or SQL return code 12541 39 0.017
077
4 ETW000 [dev trc ,00000] Try to connect with default password 184 0.017
261
4 ETW000 [dev trc ,00000] Connecting as SAPVAP/<pwd>@VAP on connection 0 (nls_hdl 0) ... (dbsl 640 180309)
4 ETW000 46 0.017
307
4 ETW000 [dev trc ,00000] Nls CharacterSet NationalCharSet C
EnvHp ErrHp ErrHpBatch
4 ETW000 48 0.017
355
4 ETW000 [dev trc ,00000] 0 UTF8 1
0x1113a4400 0x1113b6b00 0x1113b63b8
4 ETW000 48 0.017
403
4 ETW000 [dev trc ,00000] server_detach(con_hdl=0,stale=0,svrhp=0x1113b99f8)
4 ETW000 54 0.017
457
4 ETW000 [dev trc ,00000] Detaching from DB Server (con_hdl=0,svchp=0x1113b97d8,srvhp=0x1
113b99f8)
4 ETW000 54 0.017
511
4 ETW000 [dev trc ,00000] Deallocating server context handle 0x1113b99f8 42 0.017
553
4 ETW000 [dev trc ,00000] Allocating server context handle 33 0.017
586
4 ETW000 [dev trc ,00000] Attaching to DB Server VAP (con_hdl=0,svchp=0x1113b97d8,srvhp=0
x1113b99f8)
4 ETW000 54 0.017
640
4 ETW000 [dboci.c ,00000] *** ERROR => OCI-call 'OCIServerAttach' failed: rc = 12541
4 ETW000 2547 0.020
187
4 ETW000 [dbsloci. ,00000] *** ERROR => CONNECT failed with sql error 12541.
4 ETW000 45 0.020
232
4 ETW000 [dev trc ,00000] set_ocica() -> OCI or SQL return code 12541 28 0.020
260
4 ETW000 [dblink ,00428] ***LOG BY2=>sql error 12541 performing CON [dblink#3 @ 428]
4 ETW000 220 0.020
480
4 ETW000 [dblink ,00428] ***LOG BY0=>ORA-12541: TNS:no listener [dblink#3 @ 428]
4 ETW000 42 0.020
522
2EETW169 no connect possible: "DBMS = ORACLE --- dbs_ora_tnsname = 'VAP'"
I checked the listener service. It is already started and its status is OK. Please help.
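Since ORA-12541 simply means no listener answered, it can help to confirm that the listener port is actually reachable from the application host before digging further into the trace. A minimal sketch, assuming the default listener port 1521 (the real port is whatever listener.ora / tnsnames.ora specify):

```python
import socket

def listener_reachable(host, port=1521, timeout=3):
    """Return True if a TCP connection to host:port succeeds.

    1521 is only the default Oracle listener port; substitute the
    port configured in listener.ora for this system. A refused
    connection here corresponds to ORA-12541 (TNS: no listener).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False while `lsnrctl status` looks fine, check whether the listener is bound to a different hostname/IP than the one the client resolves.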
Regards
Shashank Shekhar

Hi
Yes, I am trying it with orasid user also. There in trans.log file it is showing:
'4 ETW000 R3trans version 6.13 (release 640 - 07.01.08 - 14:25:00).
4 ETW000 unicode enabled version
4 ETW000 ===============================================
4 ETW000
4 ETW000 date&time : 22.06.2009 - 15:54:42
4 ETW000 control file: <no ctrlfile>
4 ETW000 R3trans was called as follows: R3trans -d
4 ETW000 trace at level 2 opened for a given file pointer
4 ETW000 [dev trc ,00000] Mon Jun 22 15:54:42 2009 109 0.000
109
4 ETW000 [dev trc ,00000] db_con_init called 27 0.000
136
4 ETW000 [dev trc ,00000] create_con (con_name=R/3) 61 0.000
197
4 ETW000 [dev trc ,00000] Loading DB library '/usr/sap/VAP/SYS/exe/run/dboraslib.o' ...
4 ETW000 79 0.000
276
4 ETW000 [dev trc ,00000] load shared library (/usr/sap/VAP/SYS/exe/run/dboraslib.o), hdl
0
4 ETW000 8174 0.008
450
4 ETW000 [dev trc ,00000] Library '/usr/sap/VAP/SYS/exe/run/dboraslib.o' loaded
4 ETW000 43 0.008
493
4 ETW000 [dev trc ,00000] function DbSlExpFuns loaded from library /usr/sap/VAP/SYS/exe/r
un/dboraslib.o
4 ETW000 61 0.008
554
4 ETW000 [dev trc ,00000] Version of '/usr/sap/VAP/SYS/exe/run/dboraslib.o' is "640.00",
patchlevel (0.276)
4 ETW000 430 0.008
984
4 ETW000 [dev trc ,00000] function dsql_db_init loaded from library /usr/sap/VAP/SYS/exe/
run/dboraslib.o
4 ETW000 46 0.009
030
4 ETW000 [dev trc ,00000] function dbdd_exp_funs loaded from library /usr/sap/VAP/SYS/exe
/run/dboraslib.o
4 ETW000 76 0.009
106
4 ETW000 [dev trc ,00000] New connection 0 created 48 0.009
154
4 ETW000 [dev trc ,00000] 0: name = R/3, con_id = -000000001 state = DISCONNECTED, perm =
YES, reco = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO
4 ETW000 53 0.009
207
4 ETW000 [dev trc ,00000] db_con_connect (con_name=R/3) 57 0.009
264
4 ETW000 [dev trc ,00000] find_con_by_name found the following connection for reuse:
4 ETW000 54 0.009
318
4 ETW000 [dev trc ,00000] 0: name = R/3, con_id = 000000000 state = DISCONNECTED, perm =
YES, reco = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO
4 ETW000 49 0.009
367
4 ETW000 [dev trc ,00000] Got ORACLE_HOME=/oracle/VAP/102_64 from environment
4 ETW000 526 0.009
893
4 ETW000 [dev trc ,00000] -->oci_initialize (con_hdl=0) 84 0.009
977
4 ETW000 [dev trc ,00000] got NLS_LANG='AMERICAN_AMERICA.UTF8' from environment
4 ETW000 64 0.010
041
4 ETW000 [dboci.c ,00000] *** ERROR => OCI-call 'OCIEnvCreate(mode=16384)' failed: rc = -1
4 ETW000 1464 0.011
505
4 ETW000 [dev trc ,00000] set_ocica() -> OCI or SQL return code -1 40 0.011
545
4 ETW000 [dboci.c ,00000] *** ERROR => OCI-call 'OCIErrorGet' failed: rc = -2
4 ETW000 58 0.011
603
4 ETW000 [dblink ,00428] ***LOG BY2=>sql error -1 performing CON [dblink#3 @ 428]
4 ETW000 74 0.011
677
4 ETW000 [dblink ,00428] ***LOG BY0=>Cannot get Oracle error text. [dblink#3 @ 428]
4 ETW000 55 0.011
732
2EETW169 no connect possible: "DBMS = ORACLE --- dbs_ora_tnsname = 'VAP'"
Someone suggested: 'The solution is to download the new Oracle client DLL into the kernel.' Can anyone tell me where I can get this client library for the kernel?
Regards
Shashank Shekhar -
Concurrent manger log files shows
Hi ,
APPS-11.5.10.2
DB----10g(RAC 2node)
0S----HPUNIX
The concurrent manager log file sometimes shows entries like the ones below. Please help me resolve this. Here crmapp1 is the application server and CRMPRD is the database.
Adding Node:(CRMAPP1),Instance:(CRMPRD) to Unavailable list
Adding Node:(CRMAPP1),Instance:(CRMPRD) to Unavailable list
Rgs,
ram

Hi Hussain,
I have checked note 271090.1. My ICM is on CRMAPP1 and my Standard Manager is on ALCCCRMAPP.ashokleyland.com. Please let me know if I have to apply any patch for this.
C:\Users\dbadmin.ale>ping crmapp1.ashokleyland.com
Pinging crmapp1.ashokleyland.com [10.1.225.31] with 32 bytes of data:
Reply from 10.1.225.31: bytes=32 time=84ms TTL=64
Reply from 10.1.225.31: bytes=32 time<1ms TTL=64
Reply from 10.1.225.31: bytes=32 time<1ms TTL=64
Reply from 10.1.225.31: bytes=32 time<1ms TTL=64
Ping statistics for 10.1.225.31:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 84ms, Average = 21ms
C:\Users\dbadmin.ale>ping alcccrmapp.ashokleyland.com
Pinging crmapp.ashokleyland.com [10.1.225.30] with 32 bytes of data:
Reply from 10.1.225.30: bytes=32 time=7ms TTL=64
Reply from 10.1.225.30: bytes=32 time<1ms TTL=64
Reply from 10.1.225.30: bytes=32 time<1ms TTL=64
Reply from 10.1.225.30: bytes=32 time<1ms TTL=64
Ping statistics for 10.1.225.30:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 7ms, Average = 1ms
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PAR THREAD# ARCHIVE LOG_SWITCH_ LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST
2 CRMPRD2 crmdb002 10.1.0.5.0 16-FEB-13 OPEN YES 2 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL
SQL> select * from v$thread;
THREAD# STATUS ENABLED GROUPS INSTANCE OPEN_TIME CURRENT_GROUP# SEQUENCE# CHECKPOINT_CHANGE# CHECKPOINT_TIME ENABLE_CHANGE# ENABLE_TIME DISABLE_CHANGE# DISABLE_TIME
1 OPEN PUBLIC 4 CRMPRD1 17-FEB-13 2 49444 1.0738E+11 19-FEB-13 3.9187E+10 19-OCT-08 0
2 OPEN PRIVATE 4 CRMPRD2 17-FEB-13 7 63249 1.0738E+11 19-FEB-13 3.9187E+10 19-OCT-08 0
Rgs,
Ram -
Alert log file show me this error
Alert<DB Name>.log
=================
Tue Aug 31 12:13:00 2010
Errors in file /u01/app/oracle/admin/<DB Name>/udump/<Instance Name>ora5520.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-12663: Services required by client not available on the server
ORA-36961: Oracle OLAP is not available.
ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
ORA-06512: at line 15
Tue Aug 31 12:13:00 2010
Completed: ALTER DATABASE OPEN
Tue Aug 31 12:13:00 2010
db_recovery_file_dest_size of 3072 MB is 0.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
==================
<DB Name>ora5520.trc file
=================
/u01/app/oracle/admin/<DB Name>/udump/<DB Name>ora5520.trc
Oracle Database 10g Release 10.2.0.3.0 - Production
ORACLE_HOME = /u01/app/oracle/product/10.2.0
System name: Linux
Node name: <DB Name>-invdev-ora-1
Release: 2.6.18-128.el5
Version: #1 SMP Wed Dec 17 11:42:39 EST 2008
Machine: i686
Instance name: <Instance Name>
Redo thread mounted by this instance: 1
Oracle process number: 15
Unix process pid: 5520, image: oracle@<DB Name>-invdev-ora-1 (TNS V1-V3)
*** SERVICE NAME:() 2010-08-31 12:12:59.130
*** SESSION ID:(159.3) 2010-08-31 12:12:59.130
Thread 1 checkpoint: logseq 2033, block 78039, scn 67047227
cache-low rba: logseq 2033, block 80480
on-disk rba: logseq 2033, block 80921, scn 67048897
start recovery at logseq 2033, block 80480, scn 0
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00313: open failed for members of log group 1 of thread 1
----- Redo read statistics for thread 1 -----
Read rate (ASYNC): 220Kb in 0.38s => 0.57 Mb/sec
Total physical reads: 4096Kb
Longest record: 2Kb, moves: 0/748 (0%)
Longest LWN: 48Kb, moves: 0/17 (0%), moved: 0Mb
Last redo scn: 0x0000.03ff15c0 (67048896)
----- Recovery Hash Table Statistics ---------
Hash table buckets = 32768
Longest hash chain = 1
Average hash chain = 52/52 = 1.0
Max compares per lookup = 1
Avg compares per lookup = 1442/1494 = 1.0
*** 2010-08-31 12:12:59.517
KCRA: start recovery claims for 52 data blocks
*** 2010-08-31 12:12:59.640
KCRA: blocks processed = 52/52, claimed = 52, eliminated = 0
ORA-00313: open failed for members of log group 1 of thread 1
*** 2010-08-31 12:12:59.641
Recovery of Online Redo Log: Thread 1 Group 1 Seq 2033 Reading mem 1
----- Recovery Hash Table Statistics ---------
Hash table buckets = 32768
Longest hash chain = 1
Average hash chain = 52/52 = 1.0
Max compares per lookup = 1
Avg compares per lookup = 1494/1494 = 1.0
ORA-00313: open failed for members of log group 1 of thread 1
Error in executing triggers on database startup
*** 2010-08-31 12:13:00.526
ksedmp: internal or fatal error
ORA-00604: error occurred at recursive SQL level 1
ORA-12663: Services required by client not available on the server
ORA-36961: Oracle OLAP is not available.
ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
ORA-06512: at line 15
Guys, can anyone please tell me what the problem is?
It's an Oracle 10g R2 Linux machine, and it's a development environment.

Hi,
Look at the thread
Database startup error message
Regards, -
Error State Id 11756 and Client side Scan Agent.log file shows with Error=0x8024400d
We have 1500 clients managed through SCCM 2007. Out of the 1500 workstations, 300 are failing to install updates with the following error code from ScanAgent.log. The error state message ID is 11756.
-Scan Failed for ToolUniqueID={CACC0F54-E6B6-40AA-8BCD-81A1C7BE2918}, with Error=0x8024400d
Error from WUAHandler.log:
OnSearchComplete - Failed to end search job. Error = 0x8024400d.
I searched Google for this issue and found something related to Group Policy, but there is no exact cause and solution for it. Could someone please help me with this?

1. On the affected machine, disable the SCCM agent. To do this, you can run the following commands:
Disable the service --> sc config CcmExec start= disabled
Stop the service --> net stop CcmExec
2. Ensure that the following policy is not enforced on the system:
User Configuration\Administrative Templates\Windows Components\Windows Update\Remove access to use all Windows Update Features
Check this first in the local system policy (you can pull this up using gpedit.msc – Local Group Policy Editor). After that, please run RSOP.msc and ensure that the policy is not configured either. This will give you information from domain policies too.
If the policy is enabled please either remove the policy or disable it.
3. Restart the Automatic Updates service.
4. Now, from the command line, run the following command:
Configure proxy --> proxycfg.exe -p "WSUS SERVER FQDN"
By doing this, we are configuring WinHTTP so that server access in upper case is also bypassed.
At this point, we need to test an update scan. Since the SMS Host Agent service is disabled and stopped, we won’t be able to use the agent to run the scan. In this case, we would need to run a scan using the command below:
wuauclt /resetauthorization /detectnow
Check Windowsupdate.log for the outcome of the testing
To bypass the proxy server for testing purposes, use the proxycfg utility (more details: http://msdn.microsoft.com/en-us/library/windows/desktop/ms761351(v=vs.85).aspx). You can also check the bypass list in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\.
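To get a quick count of how many scans are failing with which code across collected logs, something like the following sketch can help; the line layout is assumed from the snippets above, so adjust the pattern to your actual ScanAgent.log format:

```python
import re
from collections import Counter

# Matches WUA-style error codes such as 0x8024400d.
# The surrounding "Error=" text is an assumption from the log
# snippets quoted above; tune it to your log format.
ERROR_RE = re.compile(r"Error\s*=\s*(0x[0-9A-Fa-f]{8})")

def count_scan_errors(lines):
    """Tally each distinct error code found in the given log lines."""
    return Counter(m.group(1).lower()
                   for line in lines
                   for m in ERROR_RE.finditer(line))
```

Running this over the ScanAgent.log files gathered from the 300 failing machines would confirm whether 0x8024400d is the only code involved or one of several.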
RanzHat -
Interface Trip Stop Run - Log file showing an Error
Hi
When I run Interface Trip Stop Run - SRS, Request completed successfully but it has the following error. " Failed GMI_Shipping_Util.GMI.UPDATE_SHIPMENT_TXN for Delivery Id 38237"
In Sales Order the Line Status is Closed.
In the Shipping Transactions form the status is showing Shipped, and the next step is Run Interfaces.
When I checked the Lot Qty in Lot Genealogy the on hand qty of that lot is Zero.
And the transaction is showing Pending in Item Transaction Inquiry.
Please tell me how to proceed with this sales order.
Thanks in Advance
Prem.

It's a concurrent program.
Submit a request with name: Interface Trip Stop - SRS.
I have tested this, and it will resolve your query too. -
Log file shows all devices with same MAC on EA6500
Just thought it was rather odd. I'm running firmware version 1.1.28.14856 on EA6500 series.
Thanks

The 6500 has numerous issues, and that is one of them. Cisco knows of the issues but has really done nothing to fix them. I would stay far, far away from the 6500; it really is broken right out of the box. If you search the forums here and other reviews of the 6500 on the web, it's not pretty. The router was rushed to market without proper testing, and Cisco decided to make you, the customer, test it for them. Good luck with the 6500. And if the moderators here don't like me telling the truth, then simply delete my account; it doesn't matter to me. Cisco knows I'm right, and so do many others.
-
I have written an application that has a Primary and Secondary database. The application creates tens-of-thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the secondary as that element does not change) or it will create new records.
The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
The input data I am testing with is originally 2MB as a CSV file, and with a fresh database it creates almost 20MB of data. This is about right for the way it splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, the old ones in the earlier logs will become redundant, and the cleaner thread will clean them up. I am explicitly cleaning as per the examples. The issue is that on each run, the data just doubles in size! Logically it is fine; physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%) where I would expect most to be empty, or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
Generally the processing I am doing on the primary is looking up the key, if it is there updating the entry, if not creating one. I have been using a cursor to do this, and using the putCurrent() method for existing updates, and put() for new records. I have even tried using Database.delete() and the full put() in place of putCurrent() - but no difference (except it is slower).
Please help - it is driving me nuts!

Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
a. is the application doing anything that prohibits log cleaning? (in your case, no)
b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
1) Ran DbDump with and without -r. I am expecting the data to stay consistent. So, after the first run it creates the data and leaves 20MB in place, 3 log files near 100% used. After the second run it should update the records (which it does, from the application's point of view), but I now have 40MB across 5 log files, all near 100% usage.

I think that it's accurate to say that both of us are not surprised that the second run (which updates data but does not change the number of records) creates a second 20MB of log, for a total of 40MB. What we do expect, though, is that the utilization reported by DbSpace should fall closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
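The utilization arithmetic behind these runs is straightforward, ignoring JE metadata (which pushes the real numbers somewhat higher, e.g. ~60% rather than exactly 50% after run 2):

```python
def expected_utilization(runs, live_mb=20):
    """Fraction of the log that is live data after `runs` identical
    full-update runs: each run appends another full copy of the
    data, while only the newest copy stays live."""
    total_mb = live_mb * runs
    return live_mb / total_mb
```

So run 1 sits at 100%, run 2 at the 50% threshold, and run 3 falls below it, which is the point where the cleaner should start doing real work.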
So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (with -r the dump is not in key order; without -r it is slower but dumps in key order). Keys and data show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of the data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
I think asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
java -jar je.jar DbPrintLog -h <envhome> -S
and you'll see some output that talks about different types of log entries, and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large that is depends on whether your data is transactional or not. Non-transactional data records have a header of about 35 bytes, whereas transactional data records have 60 bytes added to them. If your data is small, that can be quite a large percentage. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record level locking, and partly because we store a number of internal fields as 64 bit rather than 16 or 32 bit values.
The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem is really around the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduced the size of your logs, it reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
So in summary, let's try these steps
- use DbDump and DbPrintLog to double check the amount and size of your application data
- make a table of runs, that shows the log size in bytes, number of log files, and the utilization level reported by DbSpace
- run a je.cleaner.forceCleanFiles cleanLog loop on one of the logs that seems to have a high utilization level, and see how much it reduces to, and what the resulting utilization level is
If it all points to JE, we'll probably take it offline, and ask for your test case.
Regards,
Linda -
Unable to debug the Data Template Error in the Log file
Hi,
I am unable to debug the log file error message. Can anybody please explain to me in detail where the error lies and how to solve it? The log file shows the following message.
XDO Data Engine ver 1.0
Resp: 50554
Org ID : 204
Request ID: 2865643
All Parameters: USER_ID=1318:REPORT_TYPE=Report Only:P_SET_OF_BOOKS_ID=1:TRNS_STATUS=Posted:P_APPROVED=Not Approved:PERIOD=Sep-05
Data Template Code: ILDVAPDN
Data Template Application Short Name: CLE
Debug Flag: Y
{TRNS_STATUS=Posted, REPORT_TYPE=Report Only, PERIOD=Sep-05, USER_ID=1318, P_SET_OF_BOOKS_ID=1, P_APPROVED=Not Approved}
Calling XDO Data Engine...
java.lang.NullPointerException
at oracle.apps.xdo.dataengine.DataTemplateParser.getObjectVlaue(DataTemplateParser.java:1424)
at oracle.apps.xdo.dataengine.DataTemplateParser.replaceSubstituteVariables(DataTemplateParser.java:1226)
at oracle.apps.xdo.dataengine.XMLPGEN.writeData(XMLPGEN.java:398)
at oracle.apps.xdo.dataengine.XMLPGEN.writeGroupStructure(XMLPGEN.java:281)
at oracle.apps.xdo.dataengine.XMLPGEN.processData(XMLPGEN.java:251)
at oracle.apps.xdo.dataengine.XMLPGEN.processXML(XMLPGEN.java:192)
at oracle.apps.xdo.dataengine.XMLPGEN.writeXML(XMLPGEN.java:222)
at oracle.apps.xdo.dataengine.DataProcessor.processData(DataProcessor.java:334)
at oracle.apps.xdo.oa.util.DataTemplate.processData(DataTemplate.java:236)
at oracle.apps.xdo.oa.cp.JCP4XDODataEngine.runProgram(JCP4XDODataEngine.java:272)
at oracle.apps.fnd.cp.request.Run.main(Run.java:148)
Start of log messages from FND_FILE
Start of After parameter Report Trigger Execution..
Gl Set of Books.....P
Organization NameVision Operations
Entering TRNS STATUS POSTED****** 648Posted
end of the trns status..687 Posted
currency_code 20USD
P_PRECISION 272
precision 332
GL NAME 40Vision Operations (USA)
Executing the procedure get format ..
ExecutED the procedure get format and the Result..
End of Before Report Execution..
End of log messages from FND_FILE
Executing request completion options...
------------- 1) PUBLISH -------------
Beginning post-processing of request 2865643 on node AP615CMR at 28-SEP-2006 07:58:26.
Post-processing of request 2865643 failed at 28-SEP-2006 07:58:38 with the error message:
One or more post-processing actions failed. Consult the OPP service log for details.
Finished executing request completion options.
Concurrent request completed
Current system time is 28-SEP-2006 07:58:38
Thanks & Regards
Suresh Singh
Generally, the DBAs are aware of the OPP service log; they can tell you the cause of the problem.
Anyway, how did you resolve the issue?
-
How do I Create a Log File?
Hi All
Been up for a while trying to figure this out with no luck.
I created an app that will uninstall a program and all of its files.
example
try
do shell script "rm -rf /Applications/TestFakeApp"
end try
try
do shell script "rm -rf /Applications/TestFakeApp2"
end try
try
do shell script "rm -rf ~/Library/Preferences/com.FakeTestApp.plist"
end try
try
do shell script "rm -rf ~/Library/Preferences/com.FakeTestApp2.plist"
end try
try
do shell script "rm -rf ~/Library/Logs/FakeTestApp*"
end try
try
do shell script "rm -rf ~/Library/Application\\ Support/FakeTestApp"
end try
there are alot more paths to remove but this is just a few for example
I want to be able to create a log.txt file on the desktop showing what has been removed and/or what could not be removed.
I then tried by creating a text document by using
do shell script "touch ~/Desktop/test.txt"
tell application "Finder"
open file ((path to desktop folder as text) & "test.txt") using ((path to applications folder as text) & "TextEdit.app")
end tell
but I don't know what to do next.
1. Have it check for each file to see if it was deleted or not
2. add it into the test.txt file
3. save the file once done
Any help would be appreciated.
Your version of it was simpler for me to understand.
Thank you
But having 1 issue with it.
I added some lines for non-existent files, to test whether it would report that it did not delete them.
the log file showed they were deleted, but the files were never there to begin with.
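A likely explanation for the false "deleted" entries (assuming the script logs success whenever the do shell script line doesn't raise an error): rm -rf is designed to ignore missing paths, so it exits 0 even when the target never existed, and the try block never sees an error. A minimal plain-shell sketch (hypothetical paths) that tests for existence before claiming a deletion:

```shell
log="$(mktemp)"                  # stand-in for ~/Desktop/my.txt in this sketch
target="/tmp/no_such_file_$$"    # a path that does not exist

# rm -rf exits 0 even for a missing path, so "no error" is not proof of deletion:
rm -rf "$target"
echo "rm exit status: $?"        # prints: rm exit status: 0

# Test for existence first, and only then write a "Deleted" entry to the log:
if [ -e "$target" ]; then
    rm -rf "$target" && echo "Deleted $target" >> "$log"
else
    echo "Not found (nothing to delete): $target" >> "$log"
fi
cat "$log"
```

In the AppleScript version, the equivalent fix would be a `test -e` shell call (or Finder's `exists`) before each rm, writing "Not found" instead of "Deleted" when the path was never there.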
I also added a return after each write section, so the entries in the log file would not all end up on the same line.
try
    do shell script "sudo -v" password "" with administrator privileges
end try
do shell script "touch ~/Desktop/my.txt"
delay 2
set myLog to open for access file ((path to desktop as text) & "my.txt") with write permission
set eof myLog to 0 -- reset the file contents... eliminate this line if you want to append to an existing log
delay 2
write "this is a test" & return to myLog
delay 1
write "Trying to delete Test" & return to myLog
delay 1
try
    do shell script "rm -rf /Test" with administrator privileges
    write "Deleted Test" & return to myLog -- this will only execute if the above line doesn't trigger an error
on error
    write "Error deleting Test" & return to myLog
end try
delay 1
write "Trying to delete test2" & return to myLog
try
    do shell script "rm -rf /test2" with administrator privileges
    write "Deleted test2" & return to myLog -- this will only execute if the above line doesn't trigger an error
on error
    write "Error deleting test2" & return to myLog
end try
delay 1
write "Trying to delete test3" & return to myLog
try
    do shell script "rm -rf /test3*" with administrator privileges
    write "Deleted test3" & return to myLog -- the original wrote "Deleted Test2" here by mistake
on error
    write "Error deleting test3" & return to myLog
end try
delay 1
write "Trying to delete test4" & return to myLog
try
    do shell script "rm -rf /test4" with administrator privileges
    write "Deleted test4" & return to myLog -- this will only execute if the above line doesn't trigger an error
on error
    write "Error deleting test4" & return to myLog
end try
close access myLog
-
Change in Oracle Parameters and Log file size
Hello All,
We have scheduled DB Check job and the log file showed few errors and warnings in the oracle parameter that needs to be corrected. We have also gone through the SAP Note #830576 Oracle Parameter Configuration to change these parameters accordingly. However we need few clarifications on the same.
1.Can we change these parameters directly in init<SID>.ora file or only in SP file. If yes can we edit the same and change it or do we need to change it using BR tools.
2.We have tried to change few parameters using DB26 tcode. But it prompts for maintaining the connection variables in DBCO tcode. We try to make change only in default database but it prompts for connection variables.
Also we get check point error. As per note 309526 can we create the new log file with 100MB size and drop the existing one. Or are there any other considerations that we need to follow for the size of log file and creating new log file. Kindly advise on this. Our Environment is as follows.
OS: Windows 2003 Server
DB: Oracle 10g
regards,
Madhu
Hi Madhu,
We can change Oracle parameters at both levels, that is, at the init<SID>.ora level as well as at the SPFILE level.
1. If you make the changes at the init<SID>.ora level, then you have to generate the SPFILE again, and the database has to be restarted for the parameters to take effect.
If you make the changes in the SPFILE, then the parameters take effect depending on whether each parameter is dynamic or static. You also need to regenerate the PFILE, i.e. init<SID>.ora.
2. If possible, do not change the Oracle parameters using the tcode. It would be better, and much easier, to do it directly in the database.
3. It is always good to have a larger redo log size. The one thing to keep in mind is that once you change the size of the redo log, the size of each archive log also changes, although the number of files will decrease.
Apart from that, there won't be any issues.
Regards,
Suhas
-
Attached is the output from Reports> System> Status> Log File.
As you can see I have a number of log files showing their size in RED.
How do I automate getting these smaller?
I have looked at Log Rotation, but none of the files listed in the attached appear in that section.
Is it a case of manually stopping the daemon, renaming the log files and then restarting the daemon, or is there a way of automating this housekeeping task?
Thanks
Steve
Hi Afroj,
Attached are all the files shown in Log Rotation; none of the ones in RED in my original post appear here.
Can I just add them, then? And does the following indicate that I already have the job scheduled, in which case at 6 PM tonight it should pick up the new ones I add?
C:\Program Files (x86)\CSCOpx\bin>C:\Windows\System32\at.exe
Status ID Day Time Command Line
1 Each M T W Th F S Su 6:00 PM C:\PROGRA~2\CSCOpx\objects\logrot\logrotsch.bat
2 Each M T W Th F S Su 4:30 PM C:\PROGRA~2\CSCOpx\conf\backupsch.bat
C:\Program Files (x86)\CSCOpx\bin>
-
Hello,
We are using SUN One Web Server 6.0 SP4. We have setup a web instance with a secure and non-secure virtual server.
We want each of them to have its own log files. We have this configured, and the log files show up for each (access, access-secure, errors and errors-secure). We also went into the two locations and shut off LogVsld and vsid. The documentation says to turn these ON if you want to use one log file for multiple virtual servers.
All entries however are only going into the access and errors files. Nothing is being logged in the *-secure log files.
Anyone else doing this?
Thanks
Have the exact same problem. Separate logs worked fine - until I added a secure certificate. I have not been able to separate the logs since.
Reported here for 6.0 SP2, but never saw a solution.
Interested if you have found solution?
Thanks
-
I've encountered a problem with log file utilization during a somewhat long transaction in which some data is inserted into a StoredMap.
I've set the minUtilization property to 75%. During insertion, things seem to go smoothly, but at one point log files are created WAY more rapidly than the amount of data would call for. The test involves inserting 750K entries for a total of 9 MB, yet the total size of the log files is 359 MB. Using DbSpace shows that the first few log files use approx. 65% of their total space, but most use only 2%.
I understand that during a transaction the Cleaner may not clean the log files involved. What I don't understand is why most of the log files are only using 2%:
File Size (KB) % Used
00000000 9763 56
00000001 9764 68
00000002 9765 68
00000003 9765 69
00000004 9765 69
00000005 9765 69
00000006 9765 68
00000007 9765 70
00000008 9764 68
00000009 9765 61
0000000a 9763 61
0000000b 9764 25
0000000c 9763 2
0000000d 9763 1
0000000e 9763 2
0000000f 9763 1
00000010 9764 2
00000011 9764 1
00000012 9764 2
00000013 9764 1
00000014 9764 2
00000015 9763 1
00000016 9763 2
00000017 9763 1
00000018 9763 2
00000019 9763 1
0000001a 9765 2
0000001b 9765 1
0000001c 9765 2
0000001d 9763 1
0000001e 9765 2
0000001f 9765 1
00000020 9764 2
00000021 9765 1
00000022 9765 2
00000023 9765 1
00000024 9763 2
00000025 7028 2
TOTALS 368319 21
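As a sanity check on those numbers (a rough estimate, using the ~9 MB live-data figure from the post): with je.cleaner.minUtilization=75, the cleaner should eventually be able to shrink the log to roughly live data divided by the utilization target, which is nowhere near the 359 MB observed:

```shell
live_mb=9       # approximate live data in MB (from the post)
min_util=75     # je.cleaner.minUtilization, in percent

# Expected steady-state log size = live data / utilization target
echo $(( live_mb * 100 / min_util ))   # prints 12 (MB), vs. ~359 MB observed
```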
I've created a test class that reproduces the problem. It might be possible to make it even simpler, but I haven't had time to work on it too much.
Executing this test with 500K values does not reproduce the problem. Can someone please help me shed some light on this issue?
I'm using 3.2.13 and the following properties file:
je.env.isTransactional=true
je.env.isLocking=true
je.env.isReadOnly=false
je.env.recovery=true
je.log.fileMax=10000000
je.cleaner.minUtilization=75
je.cleaner.lookAheadCacheSize=262144
je.cleaner.readSize=1048576
je.maxMemory=104857600
Test Class
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Properties;
import com.sleepycat.bind.EntityBinding;
import com.sleepycat.bind.EntryBinding;
import com.sleepycat.bind.tuple.StringBinding;
import com.sleepycat.bind.tuple.TupleBinding;
import com.sleepycat.collections.CurrentTransaction;
import com.sleepycat.collections.StoredMap;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DatabaseException;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;
public class LogFileTest3 {
    private long totalSize = 0;
    private Environment env;
    private Database myDb;
    private StoredMap storedMap_ = null;

    public LogFileTest3() throws DatabaseException, FileNotFoundException, IOException {
        Properties props = new Properties();
        props.load(new FileInputStream("test3.properties"));
        EnvironmentConfig envConfig = new EnvironmentConfig(props);
        envConfig.setAllowCreate(true);
        File envDir = new File("test3");
        if (envDir.exists() == false) {
            envDir.mkdir();
        }
        env = new Environment(envDir, envConfig);
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setTransactional(true);
        dbConfig.setSortedDuplicates(false);
        myDb = env.openDatabase(null, "testing", dbConfig);
        EntryBinding keyBinding = TupleBinding.getPrimitiveBinding(String.class);
        EntityBinding valueBinding = new TestValueBinding();
        storedMap_ = new StoredMap(myDb, keyBinding, valueBinding, true);
    }

    public void cleanup() throws Exception {
        myDb.close();
        env.close();
    }

    private void insertValues(int count) throws DatabaseException {
        CurrentTransaction ct = CurrentTransaction.getInstance(this.env);
        try {
            ct.beginTransaction(null);
            int i = 0;
            while (i < count) {
                TestValue tv = createTestValue(i++);
                storedMap_.put(tv.key, tv);
            }
            System.out.println("Written " + i + " values for a total of " + totalSize + " bytes");
            ct.commitTransaction();
        } catch (Throwable t) {
            System.out.println("Exception " + t);
            t.printStackTrace();
            ct.abortTransaction();
        }
    }

    private TestValue createTestValue(int i) {
        TestValue t = new TestValue();
        t.key = "key_" + i;
        t.value = "value_" + i;
        return t;
    }

    public static void main(String[] args) throws Exception {
        LogFileTest3 test = new LogFileTest3();
        if (args[0].equalsIgnoreCase("clean")) {
            while (test.env.cleanLog() != 0);
        } else {
            test.insertValues(Integer.parseInt(args[0]));
        }
        test.cleanup();
    }

    static private class TestValue {
        String key = null;
        String value = null;
    }

    private class TestValueBinding implements EntityBinding {
        public Object entryToObject(DatabaseEntry key, DatabaseEntry entry) {
            TestValue t = new TestValue();
            t.key = StringBinding.entryToString(key);
            t.value = StringBinding.entryToString(entry); // the original read the key here by mistake
            return t;
        }
        public void objectToData(Object o, DatabaseEntry entry) {
            TestValue t = (TestValue) o;
            StringBinding.stringToEntry(t.value, entry);
            totalSize += entry.getSize();
        }
        public void objectToKey(Object o, DatabaseEntry entry) {
            TestValue t = (TestValue) o;
            StringBinding.stringToEntry(t.key, entry);
        }
    }
}
Yup, that solves the issue. By doubling the je.maxMemory property, I've made the problem disappear.
Good!
How large is the lock on a 64-bit architecture?
Here's the complete picture for read and write locks. Read locks are taken on get() calls without LockMode.RMW, and write locks are taken on get() calls with RMW and on all put() and delete() calls.
Arch Read Lock Write Lock
32b 96B 128B
64b 176B 216B
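Given those per-lock sizes, the original test's single 750K-record transaction can be sanity-checked against the configured je.maxMemory of 104857600 bytes (a back-of-the-envelope sketch; it assumes one write lock held per inserted record and ignores other lock and btree overhead):

```shell
records=750000
write_lock_32=128    # bytes per write lock, 32-bit JVM (from the table above)
write_lock_64=216    # bytes per write lock, 64-bit JVM

echo $(( records * write_lock_32 ))   # 96000000  -- nearly all of the ~100 MB cache
echo $(( records * write_lock_64 ))   # 162000000 -- more than the whole cache
```

Either way, the locks for one giant transaction crowd out most or all of the cache, which is consistent with doubling je.maxMemory making the problem disappear.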
I'm setting the je.maxMemory property because I'm dealing with many small JE environments in a single VM. I don't want each opened environment to use 90% of the JVM RAM...
OK, I understand.
I've noticed that the je.maxMemory property is mutable at runtime. Would setting a large value before long transactions (and resetting it after) be a feasible solution to my problem? Do you see any potential issue with doing this?
We made the cache size mutable for just this sort of use case, so this is probably worth trying. Of course, to avoid OutOfMemoryError you'll have to reduce the cache size of other environments if you don't have enough unused space in the heap.
Is there a way for me to have JE lock multiple records at the same time? I mean, have it create one lock for an insert batch instead of a lock for every item in the batch...
Not currently. But speaking of possible future changes, there are two things that may be of interest to you:
1) For large transaction support we have discussed the idea of providing a new API that locks an entire Database. While a Database is locked by a single transaction, no individual record locks would be needed. However, all other transactions would be blocked from using the Database. More specifically, a Database read lock would block other transactions from writing and a Database write lock would block all access by other transactions. This is the equivalent of "table locking" in relational DBs. This is not currently high on our priority list, but we are gathering input on this issue. We are interested in whether or not a whole Database lock would work for you -- would it?
2) We see more and more users like yourself that open multiple environments in a single JVM. Although the cache size is mutable, this puts the burden of efficient memory management onto the application. To solve this problem, we intend to add the option of a shared JE cache for all environments in a JVM process. The entire cache would be managed by an LRU algorithm, so if one environment needs more memory than another, the cache dynamically adjusts. This is high on our priority list, although per Oracle policy I can't say anything about when it will be available.
Besides increasing je.maxMemory, do you see any other solution to my problem?
Use smaller transactions. ;-) Seriously, if you have not already ruled this out, you may want to consider whether you really need an atomic transaction. We also support non-transactional access and even a non-locking mode for off-line bulk loads.
Thanks a bunch for your help!
You're welcome!
Mark