Mac-address-table takes too long to update on 4507R
Hi,
I'm trying to use SpectraLink phones. I have autonomous APs on separate 2960s (and one on the 4507). When the phones roam between the APs it takes a very long time for the MAC address table to update on the 4507, even though there's uplink traffic.
I have a Supervisor Engine II+ (I read there were problems with previous versions).
can anyone help?
thanks
Upgrading the switch could help avoid this problem.
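One workaround sometimes suggested for slow MAC-table convergence after roams (a hedged sketch only, not a confirmed fix for this platform; verify the exact syntax and the default aging time for your IOS release before applying) is to lower the MAC aging time on the 4507 so stale entries for roamed phones expire sooner:

```
! Sketch only - command form varies by IOS release.
! Lower the MAC aging time (default is typically 300 seconds):
mac address-table aging-time 60
! Older IOS trains use the hyphenated form:
mac-address-table aging-time 60
```

Note that a very low aging time increases unknown-unicast flooding, so test with a moderate value first.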
Similar Messages
-
Accessing BKPF table takes too long
Hi,
Is there another way to have a faster and more optimized sql query that will access the table BKPF? Or other smaller tables that contain the same data?
I'm using this:
select bukrs gjahr belnr budat blart
into corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and monat in so_monat.
The report is taking too long and is eating up a lot of resources.
Any helpful advice is highly appreciated. Thanks!
Hi max,
I also tried using BUDAT in the where clause of my sql statement, but even that takes too long.
select bukrs gjahr belnr budat blart monat
appending corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and budat in so_budat.
I also tried accessing the table per day, but it didn't work either...
while so_budat-low le so_budat-high.
select bukrs gjahr belnr budat blart monat
appending corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and budat eq so_budat-low.
so_budat-low = so_budat-low + 1.
endwhile.
I think our BKPF table contains a very large set of data. Is there any other table besides BKPF where we could get all accounting document numbers in a given period? -
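One variant worth testing for the BKPF selects above (a sketch only, assuming the field list shown is all the report needs; nothing here is guaranteed faster, so compare runtimes in ST05 before and after): select into a slim typed table instead of `INTO CORRESPONDING FIELDS`, which skips the per-row name matching, and check with your Basis team whether a secondary index on BKPF covering BUKRS/GJAHR/BLART (or BUDAT) exists or can be created.

```abap
* Sketch: a typed structure holding only the needed fields, in the
* same order as the SELECT list, so no CORRESPONDING mapping is needed.
TYPES: BEGIN OF ty_bkpf,
         bukrs TYPE bkpf-bukrs,
         gjahr TYPE bkpf-gjahr,
         belnr TYPE bkpf-belnr,
         budat TYPE bkpf-budat,
         blart TYPE bkpf-blart,
       END OF ty_bkpf.
DATA: i_bkpf TYPE STANDARD TABLE OF ty_bkpf.

SELECT bukrs gjahr belnr budat blart
  INTO TABLE i_bkpf
  FROM bkpf
  WHERE bukrs EQ pa_bukrs
    AND gjahr EQ pa_gjahr
    AND blart IN so_doctypes
    AND monat IN so_monat.
```

Without an index on BLART/MONAT the database still filters those fields after the index range scan on the primary key, so the index question usually matters more than the ABAP-side change.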
Deleting 1 row from a table takes too long...why?
We are running the following query...
delete gemdev.lu_messagecode where mess_code ='SSY'
and it takes way too long as there is only 1 record in this table with SSY as the mess_code.
SQL> set timing on;
SQL> delete gemdev.lu_messagecode where mess_code ='SSY';
1 row deleted
Executed in 293.469 seconds
The table structure is very simple as you can see below.
CREATE TABLE GEMDEV.LU_MESSAGECODE
MESS_CODE VARCHAR2(3) NOT NULL,
ROUTE_CODE VARCHAR2(4) NULL,
REPORT_CES_MNEMONIC VARCHAR2(3) NULL,
CONSTRAINT SYS_IOT_TOP_52662
PRIMARY KEY (MESS_CODE)
VALIDATE
ORGANIZATION INDEX
NOCOMPRESS
TABLESPACE IWORKS_IOT
LOGGING
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE(BUFFER_POOL DEFAULT)
PCTTHRESHOLD 50
NOPARALLEL
ALTER TABLE GEMDEV.LU_MESSAGECODE
ADD CONSTRAINT LU_ROUTECODE_FK3
FOREIGN KEY (ROUTE_CODE)
REFERENCES GEMDEV.LU_ROUTECODE (ROUTE_CODE)
ENABLE
ALTER TABLE GEMDEV.LU_MESSAGECODE
ADD CONSTRAINT MSGCODE_FK_CESMNEMONIC
FOREIGN KEY (REPORT_CES_MNEMONIC)
REFERENCES GEMDEV.SYS_CESMNEMONIC (CES_MNEMONIC)
ENABLE
My explain reads as follows.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | DELETE STATEMENT | | | | 1 (100)| |
| 1 | DELETE | LU_MESSAGECODE | | | | |
| 2 | INDEX UNIQUE SCAN| SYS_IOT_TOP_52662 | 1 | 133 | 1 (0)| 00:00:01 |
Also in my AWR Sql Report I see this as well
Plan Statistics DB/Inst: IWORKSDB/iworksdb Snaps: 778-780
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 521,102 N/A 12.0
CPU Time (ms) 73,922 N/A 5.1
Executions 0 N/A N/A
Buffer Gets 2,892,144 N/A 3.4
Disk Reads 2,847,609 N/A 8.6
Parse Calls 1 N/A 0.0
Rows 0 N/A N/A
User I/O Wait Time (ms) 475,882 N/A N/A
Cluster Wait Time (ms) 0 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 2 N/A N/A
Invalidations 1 N/A N/A
Version Count 1 N/A N/A
Sharable Mem(KB) 45 N/A N/A
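As a sanity check on the AWR figures above (a toy calculation, not tool output): 521,102 ms of elapsed time at 12.0% of total DB time implies roughly 4.34 million ms of DB time in the snapshot window, which is consistent with a heavily loaded period.

```python
# Toy arithmetic: infer total DB time from the AWR "% Snap" column,
# using the definition "% Total DB Time = elapsed / total DB time * 100".
elapsed_ms = 521_102      # Elapsed Time (ms) from the report
pct_of_db_time = 12.0     # % Snap column
total_db_time_ms = elapsed_ms / (pct_of_db_time / 100)
print(round(total_db_time_ms))  # ~4342517 ms of DB time in the snapshot
```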
Now, since the table only has 150 rows, and I am only trying to delete 1 row, why is there so much disk read and why does it take 5 minutes to delete? This is just weird. Does this have something to do with the child tables?
Any triggers on the table?
If you trace the session, what statement(s) seem to
be taking all that time?
Justin
Well, I traced my session and I noticed that my query does take a while, but I also noticed several other queries that I was not running. Not too sure where they came from. Have a look below. It is a sample from my TKPROF utility report.
delete gemdev.lu_messagecode
where
mess_code ='SSY'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.04 0 2 23 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.04 0 2 23 1
Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 57
Rows Row Source Operation
1 DELETE LU_MESSAGECODE (cr=3446672 pr=3442028 pw=0 time=309363335 us)
1 INDEX UNIQUE SCAN SYS_IOT_TOP_52662 (cr=2 pr=0 pw=0 time=35 us)(object id 52663)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 35.87 35.87
select /*+ all_rows */ count(1)
from
"GEMDEV"."TBLCLAIMCHARGE" where "CONTRACT_FEE_MESS_CODE" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 10.53 44.95 381779 382893 0 1
total 3 10.53 44.95 381779 382893 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 SORT AGGREGATE (cr=382893 pr=381779 pw=0 time=44953436 us)
0 TABLE ACCESS FULL TBLCLAIMCHARGE (cr=382893 pr=381779 pw=0 time=44953403 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 47795 0.03 37.87
db file sequential read 101 0.00 0.02
select /*+ all_rows */ count(1)
from
"GEMDEV"."TBLCLAIMCHARGE" where "FEE_INEL_MESS_CODE" = :1 -
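For the slow delete traced above: the recursive `count(1)` statements are Oracle checking child rows in TBLCLAIMCHARGE for the foreign keys that reference LU_MESSAGECODE, and with those FK columns unindexed each check is a full table scan. The usual fix is to index the child-table FK columns (a hedged sketch; the index names are made up, and this should be run in a maintenance window):

```sql
-- Sketch: index the child-table foreign-key columns so the parent-key
-- delete probes an index instead of full-scanning TBLCLAIMCHARGE.
CREATE INDEX gemdev.idx_claimcharge_contract_fee
  ON gemdev.tblclaimcharge (contract_fee_mess_code);

CREATE INDEX gemdev.idx_claimcharge_fee_inel
  ON gemdev.tblclaimcharge (fee_inel_mess_code);
```

Indexing FK columns also avoids the table-level locking that unindexed foreign keys can cause during parent-key updates and deletes.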
Mac Book booting takes too long
My MacBook Pro has a booting problem. It takes forever at the Apple logo, and after some time the Apple logo vanishes but the loading icon still continues for a while. Please advise.
I tried it but it didn't work. The hard disk had totally failed, so I had to purchase a new one (a Toshiba). When I try to format it using Disk Utility during setup, it gives me the following error: DISK ERASE FAILED WITH THE ERROR
WIPING VOLUME DATA TO PREVENT FUTURE ACCIDENTAL PROBING FAILED.
It's a new hard disk which has not yet been formatted at all. Could you please advise on the way forward ? -
Hi, please help me here. I updated my MacBook Air running OS X 10.8.5 to Yosemite, and now it takes too long to shut down, and when it starts it looks like it's going to update?
More information would be helpful.
-
Update statement takes too long to run
Hello,
I am running this simple update statement, but it takes too long to run. It was running for 16 hours and then I cancelled it. It was not even finished. The destination table that I am updating has 2.6 million records, but I am only updating 206K records. If I add ROWNUM < 20 to the update statement, it works just fine and updates the right column with the right information. Do you have any ideas what could be wrong in my update statement? I am also using a DB link since the CAP.ESS_LOOKUP table resides in a different db from the destination table. We are running 11g Oracle Db.
UPDATE DEV_OCS.DOCMETA IPM
SET IPM.XIPM_APP_2_17 = (SELECT DISTINCT LKP.DOC_STATUS
                           FROM [email protected] LKP
                          WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1
                            AND IPM.XIPMSYS_APP_ID = 2)
WHERE IPM.XIPMSYS_APP_ID = 2;
Thanks,
Ilya
matthew_morris wrote:
In the first SQL, the SELECT against the remote table was a correlated subquery. The 'WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND IPM.XIPMSYS_APP_ID = 2' means that the subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated. This might have meant thousands of iterations, meaning a great deal of network traffic (not to mention each performing a DISTINCT operation). Queries where the data is split between two or more databases are much more expensive than queries using only tables in a single database.
Sorry to disappoint you again, but a WITH clause by itself doesn't prevent the "subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated". For example:
{code}
SQL> set linesize 132
SQL> explain plan for
2 update emp e
3 set deptno = (select t.deptno from dept@sol10 t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "T" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "DEPT" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL>
{code}
As you can see, the WITH clause by itself guarantees nothing. We must force the optimizer to materialize it:
{code}
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select /*+ materialize */ * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3568118945
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 87 (17)| 00:00:02 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL | EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
| 4 | LOAD AS SELECT | SYS_TEMP_0FD9D6603_1CEEEBC | | | | | | |
| 5 | REMOTE | DEPT | 4 | 80 | 3 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
|* 6 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 | | |
| 7 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6603_1CEEEBC | 4 | 80 | 2 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
6 - filter("T"."DEPTNO"=:B1)
Remote SQL Information (identified by operation id):
PLAN_TABLE_OUTPUT
5 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'SOL10' )
25 rows selected.
SQL>
{code}
I do know the materialize hint is not documented, but I don't know any other way to materialize it besides splitting the statement in two.
SY. -
MAC Address-Table Move Update Feature
Hi guys
Does 6500 SUP720/2T support MAC Address-Table Move Update Feature?
I cannot find it anywhere.
Thanks very much!
QXZ
Hi,
Please refer following link :
http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a00807347ab.shtml
CAM
—All Catalyst switch models use a CAM table for Layer 2 switching. As frames arrive on switch ports, the source MAC addresses are learned and recorded in the CAM table. The port of arrival and the VLAN are both recorded in the table, along with a timestamp. If a MAC address learned on one switch port has moved to a different port, the MAC address and timestamp are recorded for the most recent arrival port. Then, the previous entry is deleted. If a MAC address is found already present in the table for the correct arrival port, only its timestamp is updated.
regards,
Ajay Kumar -
After updating my iPhone 5s to 7.1 ... the keyboard is crashing and it takes too long to respond. What is the solution? Is there any link from which we can download the original 7.1 firmware for the A1533 version?
Restoring the iPhone will reload its firmware.
Plug the iPhone into (the current version of) iTunes on your computer.
Let it sync. Then choose "Restore" from the iPhone's summary page in iTunes. -
Mac Pro takes too long to boot up
I am running OS X Lion Version 10.7.3 and it takes too long to boot up.
Any suggestions? How can I fix it to boot faster?
Thanx
How long is too long?
Yes buy an SSD. Yes reinstall from scratch. Add more RAM. Clean out your present system. -
Mac book takes too long to sleep
My macbook takes too long to sleep
How long is too long?
Yes buy an SSD. Yes reinstall from scratch. Add more RAM. Clean out your present system. -
OPM process execution: PROCESS_PARAMETERS takes too long to complete
PROCESS_PARAMETERS are inserted every 15 min. using gme_api_pub packages. Sometimes it takes too long to complete the batch, i.e. completion of the request: about 5-6 hrs, while at other times it takes only 15-20 mins. This happens at regular intervals... if anybody can guide me I will be thankful to him/her.
thanks in advance.
regds,
Shailesh
Generally the slowest part of the process is in the extraction itself...
Check in your source system and see how long the processes are taking, if there are delays, locks or dumps in the database... If your source is R/3 or ECC transactions like SM37, SM21, ST22 can help monitor this activity...
Consider running fewer processes in parallel if you have too many and you see some delays in jobs... Also consider indexing some of the tables in the source system to expedite the extraction; make sure there are no heavy processes or interfaces running in the source system at the same time you're trying to load... Check with your Basis guys for activity peaks and plan accordingly...
In BW also check in your SM21 for database errors or delays...
Just some ideas... -
Web application deployment takes too long?
Hi All,
We have a WLS 10.3.5 clustering environment with one admin server and two managed servers. When we try to deploy a sizable web application, it takes about 1 hour to finish. It seems that it takes too long to finish the deployment. Here is the output from one of the two managed servers' system log. Could anyone tell me whether this is normal or not? If not, how can I improve this?
Thanks in advance,
John
+####<Feb 29, 2012 12:11:03 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535463373> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:05 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <9baa7a67b5727417:26f76f6c:135ca05cff2:-8000-00000000000000b0> <1330535465664> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>+
+####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442300> <BEA-320143> <Scheduled 1 data retirement tasks as per configuration.>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive HarvestedDataArchive>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive HarvestedDataArchive. Retired 0 records in 0 ms.>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive EventsDataArchive>+
+####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive EventsDataArchive. Retired 0 records in 0 ms.>+
+####<Feb 29, 2012 1:10:23 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <weblogic.cluster.MessageReceiver> <<WLS Kernel>> <> <> <1330539023098> <BEA-003107> <Lost 2 unicast message(s).>+
+####<Feb 29, 2012 1:10:36 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539036105> <BEA-000111> <Adding Pinellas1tMS2 with ID -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 to cluster: Pinellas1tCluster1 view.>+
+####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084375> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>+
+####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084507> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>+
Edited by: john wang on Feb 29, 2012 10:36 AM
Edited by: john wang on Feb 29, 2012 10:37 AM
Edited by: john wang on Feb 29, 2012 10:38 AM
Hi John,
There may be some circumstances like this when there are many files in the WEB-INF folder and JSPs don't use TLDs.
I don't think a 1-hour deployment is normal; it should be much faster.
Since you are using 10.3.5, I suggest you install the corresponding patch:
1. Download patch 10118941 (p10118941_1035_Generic.zip)
2. Uncompress the file p10118941_1035_Generic.zip
3. Copy the required files (patch-catalog_XXXXX.xml, CIRF.jar ) to the Patch Download Directory (typically, this folder is <WEBLOGIC_HOME>/utils/bsu/cache_dir).
4. Rename the file patch-catalog_XXXXX.xml into patch-catalog.xml .
5. Start Smart Update from <WEBLOGIC_HOME>/utils/bsu/bsu.sh .
6. Select "Work Offline" mode.
7. Go to File->Preferences, and select "Patch Download Directory".
8. Click "Manage Patches" on the right panel.
9. You will see the patch in the panel below (Downloaded Patches)
10. Click "Apply button" of the downloaded patch to apply it to the target installation and follow the instructions on the screen.
11. Add "-Dweblogic.jsp.ignoreTLDsProcessingInWebApp=true" to the Java options to ignore additional findTLDs cost.
12. Restart servers.
Hope this helps.
Thanks,
Cris -
RPURMP00 program takes too long
Hi Guys,
Need some help on this one, guys. Not getting anywhere with this issue.
I am running RPURMP00 (Program to Create Third-Party Remittance Posting Run) and while running it in test mode for 1 employee it takes too long.
I ran this in background during off hours, but it takes 19,000+ sec to run and then cancels.
The long text message is No entry in table T51R6_FUNDINFO (Remittance detail table for all entities) for key 0002485844 and Job cancelled after system exception ERROR_MESSAGE
I checked the program and I found a nested loop within the program (include RPURMP02) and decided to debug it with a breakpoint.
It short-dumped; here is the ST22 message and source code extract.
----Message -
" Time limit exceeded ".
"The program "RPURMP00" has exceeded the maximum permitted runtime without
Interruption and has therefore been terminated."
----Source code extract -
Include RPURMP02
172 *&---------------------------------------------------------------------*
173 *&      Form  get_advice_info
174 *&---------------------------------------------------------------------*
175 *       text
176 *----------------------------------------------------------------------*
177 *      -->  p1        text
178 *      <--  p2        text
179 *----------------------------------------------------------------------*
180 FORM get_advice_info .
181
182 * get information for advice form only if vendor sub-group and
183 * employee detail is maintained
184 IF ( NOT t51rh-lifsg IS INITIAL ) AND
185 ( NOT t51rh-hrper IS INITIAL ).
186
187 * get remittance items employee number
188 SELECT * FROM t51r4 WHERE remky = t51r5-remky. "#EC CI_GENBUFF "SAM0632658
189 * get payroll seqno determined by PERNR and RDATN
>>>>> SELECT * FROM t51r8 WHERE pernr = t51r4-pernr
191 AND rdatn = t51r5-rdatn
192 ORDER BY PRIMARY KEY. "#EC CI_GENBUFF
193 EXIT.
194 ENDSELECT.
Has anyone ever come across this situation? Any input from anyone on this?
Regards.
CJ
Hi,
What is your SAP version?
Have you checked if there are some OSS notes on performance?
Regards,
Atish -
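Regarding the nested SELECT ... ENDSELECT shown in the RPURMP00 extract: that pattern fires one T51R8 query per T51R4 row, which is the classic cause of this kind of runtime. A common rewrite reads both tables in single array fetches (only a sketch; the table and field names are taken from the extract, everything else is assumed, and since RPURMP00 is standard SAP code the real fix should come via an OSS note rather than a modification):

```abap
* Sketch: replace the nested SELECT...ENDSELECT with two array fetches
* and FOR ALL ENTRIES, so t51r8 is read in one round trip.
DATA: lt_t51r4 TYPE STANDARD TABLE OF t51r4,
      lt_t51r8 TYPE STANDARD TABLE OF t51r8.

SELECT * FROM t51r4 INTO TABLE lt_t51r4
  WHERE remky = t51r5-remky.

* FOR ALL ENTRIES must never run with an empty driver table,
* or the WHERE clause is dropped and everything is selected.
IF lt_t51r4 IS NOT INITIAL.
  SELECT * FROM t51r8 INTO TABLE lt_t51r8
    FOR ALL ENTRIES IN lt_t51r4
    WHERE pernr = lt_t51r4-pernr
      AND rdatn = t51r5-rdatn.
ENDIF.
```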
Query designer takes too long to save a query
Hi dear SDN friends.
I'm working with Query Designer in BI 7 and sometimes it takes too long to save a query, about ten minutes. Sometimes it never finishes saving, and at other times it saves the same query in 1 minute.
Can anybody please give an advice about this behavior of the query designer?
We have recently updated BI to SP18. In Query Designer I have SP5 revision 529.
Best regards,
Karim Reyes
Hello Karim,
I would suggest testing this again in the latest Frontend Patch available (FEP 602). In FEPs 600, 601, & 602 there were some performance and stability improvements made which may correct your issue. If the issue persists, I would suggest then opening a Customer Message via Service Marketplace.
It can be downloaded from:
http://service.sap.com/swdc
→ Download
→ Support Packages and Patches
→ Entry by Application Group
→ SAP Frontend Components
→ BI ADDON FOR SAPGUI
→ BI 7.0 ADDON FOR SAPGUI 7.10
→ Win32
See SAP Note 1085218 for planned FEP releases.
I hope that helps.
Regards,
Tanner Spaulding
SAP NetWeaver RIG Americas, BI -
Quantity conversion takes too long
Dear Gurus,
I'm having a problem with the query execution time when I convert the quantities of the materials into KGs.
I have done all the steps to set up material conversion with reference InfoObject 0MATERIAL, using dynamic determination of the conversion factor via central units of measurement (T006), otherwise the reference InfoObject.
With these settings the query takes too long to execute because of the large number of material codes. If I remove this conversion, the query executes very fast.
Any ideas? Do I have to create an index in UOM0MATE ODS? What am I missing here?
Regards,
Panos
Hi Panos,
I had the same issue, but it's solved for me now. I tried the same approach you did, by creating a secondary index on the active table of the DSO. The only difference is that I included all the SID fields in the index.
Did you mark your index as unique? Also make sure that the index really is created on the DB.
If the performance still is not getting better, check the statistics in RSRT to see whether the unit conversion really is the problem.
Kindly regards,
Matthias