SQL statements in Orchestration take too long
Hello all,
I have an orchestration that contains an Expression shape.
In this Expression shape I call several methods from an assembly that I wrote in Visual Studio C#.
This assembly gets some values from the files being processed, does some checking (empty values, accepted values, etc.), and runs some select/insert queries against a separate database that I use for tracking/analysis.
For some time now these SQL statements have been taking far too long: each file spends around 5 minutes inside the Expression shape, and some of the SQL statements take as long as 1 minute.
I've cleaned up the tables in my separate database, as they had more than 2 million records.
The queries are simple, without joins, etc.
Any ideas on how to make this work faster?
Why are you doing SQL operations in an Expression shape?
The standard BizTalk pattern would be to use the WCF-SQL adapter.
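Before changing the pattern, it is also worth checking whether the tracking tables are indexed on the columns those simple queries filter by; on a table that has grown past 2 million rows, a missing index turns each lookup into a full scan. A minimal SQLite sketch of the effect (the table and column names are invented, not from the original orchestration):

```python
import sqlite3

# Hypothetical tracking table standing in for the separate analysis database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracking (file_id TEXT, status TEXT, created_at TEXT)")
conn.executemany(
    "INSERT INTO tracking VALUES (?, ?, ?)",
    [(f"file{i}", "OK", "2013-01-01") for i in range(10_000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); join the detail text.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Without an index, the lookup scans the whole table.
before = plan("SELECT status FROM tracking WHERE file_id = 'file42'")

# With an index on the filter column, the same query becomes a seek.
conn.execute("CREATE INDEX ix_tracking_file_id ON tracking (file_id)")
after = plan("SELECT status FROM tracking WHERE file_id = 'file42'")

print("SCAN" in before, "USING INDEX" in after)  # → True True
```

The same check in SQL Server would be an execution plan showing a Table Scan versus an Index Seek.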
Similar Messages
-
Update statement takes too long to run
Hello,
I am running this simple update statement, but it takes too long to run. It was running for 16 hours and then I cancelled it; it was not even finished. The destination table that I am updating has 2.6 million records, but I am only updating 206K records. If I add ROWNUM < 20, the update statement works just fine and updates the right column with the right information. Do you have any ideas what could be wrong in my update statement? I am also using a DB link, since the CAP.ESS_LOOKUP table resides in a different DB from the destination table. We are running Oracle 11g.
UPDATE DEV_OCS.DOCMETA IPM
SET IPM.XIPM_APP_2_17 = (SELECT DISTINCT LKP.DOC_STATUS
FROM [email protected] LKP
WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND
IPM.XIPMSYS_APP_ID = 2)
WHERE
IPM.XIPMSYS_APP_ID = 2;
Thanks,
Ilya
matthew_morris wrote:
In the first SQL, the SELECT against the remote table was a correlated subquery. The "WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND IPM.XIPMSYS_APP_ID = 2" means that the subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated. This might have meant thousands of iterations, meaning a great deal of network traffic (not to mention each performing a DISTINCT operation). Queries where the data is split between two or more databases are much more expensive than queries using only tables in a single database.
Sorry to disappoint you again, but a WITH clause by itself doesn't prevent the "subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated" behaviour. For example:
{code}
SQL> set linesize 132
SQL> explain plan for
2 update emp e
3 set deptno = (select t.deptno from dept@sol10 t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "T" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "DEPT" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL>
{code}
As you can see, a WITH clause by itself guarantees nothing. We must force the optimizer to materialize it:
{code}
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select /*+ materialize */ * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3568118945
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 87 (17)| 00:00:02 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL | EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
| 4 | LOAD AS SELECT | SYS_TEMP_0FD9D6603_1CEEEBC | | | | | | |
| 5 | REMOTE | DEPT | 4 | 80 | 3 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
|* 6 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 | | |
| 7 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6603_1CEEEBC | 4 | 80 | 2 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
6 - filter("T"."DEPTNO"=:B1)
Remote SQL Information (identified by operation id):
PLAN_TABLE_OUTPUT
5 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'SOL10' )
25 rows selected.
SQL>
{code}
I do know the materialize hint is not documented, but I don't know any other way, besides splitting the statement in two, to materialize it.
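For what it's worth, the "split the statement in two" approach can be sketched outside Oracle as well: pull the remote rows into a local temp table once, then update from the local copy, so no per-row remote access happens. Below is a SQLite stand-in (an attached in-memory database plays the role of the DB link; all table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH ':memory:' AS remote")  # plays the role of the DB link

conn.execute("CREATE TABLE docmeta (doc_num INTEGER, doc_status TEXT)")
conn.execute("CREATE TABLE remote.ess_lookup (doc_num INTEGER, doc_status TEXT)")
conn.executemany("INSERT INTO docmeta VALUES (?, NULL)", [(i,) for i in range(5)])
conn.executemany("INSERT INTO remote.ess_lookup VALUES (?, ?)",
                 [(i, f"STATUS_{i}") for i in range(5)])

# Step 1: materialize the remote data locally, in one round trip.
conn.execute("CREATE TEMP TABLE lkp AS SELECT doc_num, doc_status FROM remote.ess_lookup")

# Step 2: update from the local copy; the correlated lookup now hits a local table.
conn.execute("""
    UPDATE docmeta
    SET doc_status = (SELECT l.doc_status FROM lkp l WHERE l.doc_num = docmeta.doc_num)
""")

print(conn.execute("SELECT doc_status FROM docmeta WHERE doc_num = 3").fetchone()[0])
# → STATUS_3
```

In Oracle the equivalent of step 1 would be an INSERT into a global temporary table selected over the DB link, which is exactly the two-statement split discussed above.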
SY. -
SQL query takes too long to enter the first line
Hi Friends,
I am using SQL Server 2008. I am running a query to fetch data from the database. The first time I run it after executing "DBCC FREEPROCCACHE" to clear the cache, it takes too long (7 to 9 seconds) to enter the first line of the stored procedure. After it enters the first statement of the SP, it fetches the data within a second. I think there is no problem with the SQL query itself. Kindly let me know if you know the reason behind this.
Sample Example:
CREATE PROCEDURE Sp_Name
AS
BEGIN
    PRINT GETDATE()
    -- SQL statements for fetching data
    PRINT GETDATE()
END
In the above example, there is no difference between the first date and the second date.
Please help me troubleshoot this problem.
Thanks & Regards,
Rajkumar.R
i am running first time after executed the "DBCC FREEPROCCACHE" query for clear cache memory, it takes too long (7 to 9 second)
In addition to Manoj:
DBCC FREEPROCCACHE clears the procedure cache, so all stored procedures must be newly compiled on the first call.
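The behaviour described above can be mimicked with a small, purely illustrative Python sketch: a memoized "compile" step pays its cost once, and clearing the cache (the analogue of DBCC FREEPROCCACHE) would make the next call pay it again. This is only an analogy, not SQL Server internals:

```python
from functools import lru_cache
import time

# Rough analogy to the plan cache: the first call pays the "compilation"
# cost, later calls reuse the cached result, and cache_clear() plays the
# role of DBCC FREEPROCCACHE. Timings are illustrative only.
@lru_cache(maxsize=None)
def compile_plan(proc_name: str) -> str:
    time.sleep(0.2)  # stands in for query-optimization work
    return f"plan for {proc_name}"

t0 = time.perf_counter(); compile_plan("Sp_Name"); first = time.perf_counter() - t0
t0 = time.perf_counter(); compile_plan("Sp_Name"); second = time.perf_counter() - t0

print(first > second)  # → True: the second call skips the compile step
```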
Olaf Helper
[ Blog] [ Xing] [ MVP] -
Accessing BKPF table takes too long
Hi,
Is there another way to write a faster, more optimized SQL query that accesses the table BKPF? Or are there smaller tables that contain the same data?
I'm using this:
select bukrs gjahr belnr budat blart
into corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and monat in so_monat.
The report is taking too long and is eating up a lot of resources.
Any helpful advice is highly appreciated. Thanks!
Hi max,
I also tried using BUDAT in the WHERE clause of my SQL statement, but even that takes too long.
select bukrs gjahr belnr budat blart monat
appending corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and budat in so_budat.
I also tried accessing the table per day, but that didn't work either...
while so_budat-low le so_budat-high.
select bukrs gjahr belnr budat blart monat
appending corresponding fields of table i_bkpf
from bkpf
where bukrs eq pa_bukrs
and gjahr eq pa_gjahr
and blart in so_DocTypes
and budat eq so_budat-low.
so_budat-low = so_budat-low + 1.
endwhile.
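As a side note, the day-by-day WHILE loop above issues one database round trip per date, while a single range predicate returns the same rows in one query. A rough Python/SQLite illustration of the difference (the schema is invented, not the real BKPF):

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bkpf_demo (belnr INTEGER, budat TEXT)")
start, days = date(2013, 7, 1), 31
conn.executemany(
    "INSERT INTO bkpf_demo VALUES (?, ?)",
    [(i, (start + timedelta(days=i % days)).isoformat()) for i in range(1000)],
)

# Day-by-day loop: one query per date (what the WHILE loop above does).
looped = []
for d in range(days):
    day = (start + timedelta(days=d)).isoformat()
    looped += conn.execute(
        "SELECT belnr FROM bkpf_demo WHERE budat = ?", (day,)
    ).fetchall()

# Single range query: one round trip for the whole period.
ranged = conn.execute(
    "SELECT belnr FROM bkpf_demo WHERE budat BETWEEN ? AND ?",
    (start.isoformat(), (start + timedelta(days=days - 1)).isoformat()),
).fetchall()

print(len(looped) == len(ranged) == 1000)  # → True: same rows, 31 queries vs 1
```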
I think our BKPF table contains a very large set of data. Is there any other table besides BKPF where we could get all accounting document numbers in a given period? -
PL-SQL-ORA-01704 - String literal too long
Hello guys,
I am trying to store a value of over 4000 characters in a CLOB column and I get the error message "ORA-01704: string literal too long".
What can I do to overcome this challenge?
Thanking you for your usual support.
sb92075 wrote:
Problem Exists Between Keyboard And Chair
We can't say what you are doing wrong since we don't know specifically what you actually do.
Okay, let me put it this way.
I have an application using SQL Server as the backend engine, and now the user wants to migrate to Oracle. I wrote a mini-program to create a schema/user in Oracle matching the schema/database (being used by the app) from SQL Server. I verified the structure very well and everything is just fine. Now, data migration (from SQL Server to Oracle).
I was able to move most tables' data successfully without issue until I attempted to load a table which has a column defined in SQL Server as text with over 4000 (var)chars, and as CLOB in Oracle. On moving a particular row to the Oracle DB (after a few rows had already been INSERTed into this particular table x), I got that error message.
After battling with that for a while, I decided to make the (DataMigrator) app take just the first 4000 characters, but only if the value in that field is longer than 4000 characters. This worked perfectly without issue, but you know the implication: data loss.
Do I need to switch something on/off in Oracle that expands the CLOB default maximum field size? Because I foresee this happening as soon as the application (that would now sit on Oracle) is in use.
If you still don't understand this, I don't know how better to explain this!
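For reference, ORA-01704 is specifically about string literals embedded in the SQL text, which are capped at 4000 bytes; passing the value as a bind variable avoids the limit without truncating data. A sketch of the parameterized pattern, using SQLite in place of Oracle (the schema is invented; with Oracle, the python-oracledb driver accepts the same placeholder-style calls, using `:1`-style binds):

```python
import sqlite3

# ORA-01704 is raised when a string *literal* longer than 4000 bytes appears
# inline in the SQL text. Binding the value means no such literal is ever
# built. Illustrated here with SQLite; the table is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")

long_text = "x" * 10_000  # well over the 4000-byte literal limit

# Bad (in Oracle): f"INSERT INTO docs VALUES (1, '{long_text}')" -> ORA-01704
# Good: let the driver send the value separately from the SQL text.
conn.execute("INSERT INTO docs (id, body) VALUES (?, ?)", (1, long_text))

print(len(conn.execute("SELECT body FROM docs WHERE id = 1").fetchone()[0]))
# → 10000
```

So the migrator should bind the full text value rather than concatenating it into the INSERT statement; no Oracle setting needs to change.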
Edited by: aweklin on Mar 17, 2013 8:25 AM
Edited by: aweklin on Mar 17, 2013 8:27 AM -
My Query takes too long ...
Hi ,
Environment: DB 10g, O/S Linux Red Hat. My DB size is about 80G.
My query takes too long, about 5 days to get results. Can you please help rewrite this query in a better way?
declare
x number;
y date;
START_DATE DATE;
MDN VARCHAR2(12);
TOPUP VARCHAR2(50);
begin
for first_bundle in
(select min(date_time_of_event) date_time_of_event ,account_identifier ,top_up_profile_name
from bundlepur
where account_profile='Basic'
AND account_identifier='665004664'
and in_service_result_indicator=0
and network_cause_result_indicator=0
and DATE_TIME_OF_EVENT >= to_date('16/07/2013','dd/mm/yyyy')
group by account_identifier,top_up_profile_name
order by date_time_of_event)
loop
select sum(units_per_tariff_rum2) ,max(date_time_of_event)
into x,y
from OLD_LTE_CDR
where account_identifier=(select first_bundle.account_identifier from dual)
and date_time_of_event >= (select first_bundle.date_time_of_event from dual)
and -- no more than a month
date_time_of_event < ( select add_months(first_bundle.date_time_of_event,1) from dual)
and -- finished his bundle then buy a new one
date_time_of_event < ( SELECT MIN(DATE_TIME_OF_EVENT)
FROM OLD_LTE_CDR
WHERE DATE_TIME_OF_EVENT > (select (first_bundle.date_time_of_event)+1/24 from dual)
AND IN_SERVICE_RESULT_INDICATOR=26);
select first_bundle.account_identifier ,first_bundle.top_up_profile_name
,FIRST_BUNDLE.date_time_of_event
INTO MDN,TOPUP,START_DATE
from dual;
insert into consumed1 VALUES(X,topup,MDN,START_DATE,Y);
end loop;
COMMIT;
end;
> where account_identifier=(select first_bundle.account_identifier from dual)
Why are you doing this? It's a completely unnecessary subquery.
Just do this:
where account_identifier = first_bundle.account_identifier
Same for all your other FROM DUAL subqueries. Get rid of them.
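The set-based alternative suggested in this thread (one INSERT ... SELECT doing the aggregation, instead of a cursor loop with row-by-row inserts) can be sketched like this, using SQLite and an invented, simplified schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bundlepur_demo (account_identifier TEXT, event_time TEXT, units INTEGER);
    CREATE TABLE consumed_demo  (account_identifier TEXT, total_units INTEGER);
    INSERT INTO bundlepur_demo VALUES
        ('665004664', '2013-07-16', 10),
        ('665004664', '2013-07-17', 5),
        ('111222333', '2013-07-16', 7);
""")

# One set-based INSERT ... SELECT replaces the whole cursor FOR loop:
# the database aggregates and inserts in a single statement.
conn.execute("""
    INSERT INTO consumed_demo (account_identifier, total_units)
    SELECT account_identifier, SUM(units)
    FROM bundlepur_demo
    GROUP BY account_identifier
""")

print(conn.execute(
    "SELECT total_units FROM consumed_demo WHERE account_identifier = '665004664'"
).fetchone()[0])
# → 15
```

The real query's date-window conditions would become joins or analytic functions inside the SELECT, but the principle is the same: let the database do the looping.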
More importantly, don't use a cursor FOR loop. Just write one big INSERT statement that does what you want. -
Drill Through reports takes too long
Hi all,
I need some suggestions/help with our drill through reports. We are using Hyperion 11.1.1.3 and the cube is ASO.
We have drill through reports set up in Essbase studio for drilling down from Essbase to Oracle database. It takes too long (like 30 mins for fetching 1000 records) and the query is simple.
What changes can we make to bring down this time? Please advise.
Thanks.
Hi Glenn,
We tried optimizing the drill-through SQL query, but while running it directly in TOAD takes 23 secs, doing the drill-through on the same intersection took more than 25 mins. Following is our query structure:
(SELECT *
FROM "Table A" cp_594
INNER JOIN "Table B" cp_595 ON (cp_594.key = cp_595.key)
WHERE (Upper(cp_595."Dim1") in (select Upper(CHILD) from (SELECT * FROM DIM_TABLE_1 where CUBE = 'ALL') WHERE CONNECT_BY_ISLEAF = 1 START WITH PARENT = $$Dim1$$ CONNECT BY PRIOR CHILD = PARENT UNION ALL select Upper(CHILD) from DIM_TABLE_1 where CUBE = 'ALL' AND REPLACE('GL_'||CHILD, 'GL_IC_', 'IC_') = $$Dim1$$))
And ----same for 5 more dimensions
Can you suggest some improvements? Please advise.
Thanks -
Takes too long to hibernate when I close the lid - Also random device noise when it boots up
Hello guys.
Ever since I wiped the machine, I've been having these two problems. When I close the lid, it used to go to sleep straight away, but now I can see the sleep light (and the power button) flash and flash, and then it goes to sleep.
When waking up it goes through the Lenovo startup screen and resuming Windows, then it asks for a password; before, I used to open the lid and it would ask me for the password straight away. I know it was going to sleep because I could hear the beep straight away when I closed and opened it, but now it just takes too long.
Also, every time I boot into Windows or resume from a sleep state, I can hear the device noise, like something is being plugged in/out. But there is nothing being plugged in or out at the time. I can't get to Device Manager quickly enough to see what is causing it.
But all drivers seem okay.
Thanks in advance.
Sam.
EDIT: Also noticed, when the lid is closed, randomly the laptop turns off (I hear the beep) and then turns back on again.
Weird.
T420 model number: 4180-PR1 with OS: Windows 7 Pro 64 bit
Hi Sam,
is this to do with the T420 model number: 4180-PR1 with OS: Windows 7 64 bit installed on it as in another thread you posted in?
Maybe you could pop the information into your signature; members like to know which system and OS are involved. At the top next to Sign Out choose My Settings > Personal Profile > Personal Information - Signature
Andy ______________________________________
-
I have been having lagging problems with YouTube videos for a number of months now. It simply takes so long for the red bar to completely load and the videos frequently pause and take too long to restart playing again. Even with little 2 and 3 minute videos.
I have a fast computer and my webpages load really, really fast. I have FireFox 4 browser and a Vista Home Premium 64-bit OS. So I don't have a slow computer or web browser. But these slow YouTube vids take way too long to load for some reason.
Does anyone have any idea how I can speed up YouTube?
Hi
The forums are customer-to-customer in the first instance. Only the mods (BT) will ask for personal information through an email link. Would you mind posting your Hub stats & BT speed test results? This will help all with diagnosis.
To post the full stats from your router
for home hub - 192.168.1.254
Navigate to ADSL Settings or use the A-Z at the top right
Click on More Details and then post the results.
Run BT speed tester and post the results from http://speedtester.bt.com/
If possible it would be best to connect to the BT master socket; this will rule out any telephone/broadband extension wiring. Also consider the housing of the hub/router, as anything electrical can cause problems; these are your responsibility. If these are found to be the cause, Openreach (engineer) will charge BT Broadband, which will be passed on to you, around £130.00.
Noisy line? When making telephone calls, if so, this is not good for your broadband. You can check:
Quiet line test: dial 17070, option 2, and listen; you should hear nothing. This is best done with an old-type analogue phone; digital (DECT) will do, but may have a slight hiss. If you hear noise (crackling, pops, etc.), report it as a noisy line on your phone, and don't mention broadband: BT Faults on 151.
As for your FTTC, it's available in some areas; between 40% & 80% of customers in enabled areas can receive it!
I'm alright Jack.... -
Many times my computer takes too long to connect to a new website. I have wireless internet (Time Capsule) and I am running a pretty powerful real-time financial work program at the same time. What is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I only need to "clean out" the computer? Or all of the above... not too computer savvy. It is a MacBook Pro OS X 10.6.8 (late 2010).
Almost certainly none of the above! Try each of the following in this order:
Select 'Reset Safari' from the Safari menu.
Close down Safari; move <home>/Library/Caches/com.apple.Safari/Cache.db to the trash; restart Safari.
Change the DNS servers in your network settings to use the OpenDNS servers: 208.67.222.222 and 208.67.220.220
Turn off DNS pre-fetching by entering the following command in Terminal and restarting Safari:
defaults write com.apple.safari WebKitDNSPrefetchingEnabled -boolean false
-
Hi!
I am in trouble.
following is the query
SELECT inv_no, inv_name, inv_desc, i.cat_id, cat_name, i.sub_cat_id,
sub_cat_name, asset_cost, del_date, i.bl_id, gen_desc bl_desc, p.prvcode, prvdesc, cur_loc,
pldesc, i.pmempno, pmname, i.empid, empname
FROM inv_reg i,
cat_reg c,
sub_cat_reg s,
gen_desc_reg g,
ploc p,
province r,
pmaster m,
iemp_reg e
WHERE i.sub_cat_id = s.sub_cat_id
AND i.cat_id = s.cat_id
AND s.cat_id = c.cat_id
AND i.bl_id = g.gen_id
AND i.cur_loc = p.plcode
AND p.prvcode = r.prvcode
AND i.pmempno = m.pmempno(+)
AND i.empid = e.empid(+)
&wc
order by prvdesc, pldesc, cat_name, sub_cat_name, inv_no
This query returns 32000 records. When I run this query in Reports 10g it takes 10 to 20 minutes to generate the report. How can I optimize it...?
Hi Waqas Attari,
Pls study & try this ....
When your query takes too long ...
hope it helps....
Regards,
Abdetu... -
OPM process execution process parameters takes too long to complete
PROCESS_PARAMETERS are inserted every 15 min. using gme_api_pub packages. Sometimes it takes too long to complete the batch, i.e. completion of the request: about 5-6 hours, while at other times it takes only 15-20 mins. This happens at regular intervals... If anybody can guide me I will be thankful to him/her.
thanks in advance.
regds,
Shailesh
Generally the slowest part of the process is in the extraction itself...
Check in your source system and see how long the processes are taking, if there are delays, locks or dumps in the database... If your source is R/3 or ECC transactions like SM37, SM21, ST22 can help monitor this activity...
Consider running less processes in parallel if you have too many and see some delays in jobs... Also indexing some of the tables in the source system to expedite the extraction, make sure there are no heavy processes or interfaces running in the source system at the same time you're trying to load... Check with your Basis guys for activity peaks and plan accordingly...
In BW also check in your SM21 for database errors or delays...
Just some ideas... -
Web application deployment takes too long?
Hi All,
We have a WLS 10.3.5 clustering environment with one admin server and two managed servers. When we try to deploy a sizable web application, it takes about 1 hour to finish. It seems that it takes too long to finish the deployment. Here is the output from one of the two managed servers' system logs. Could anyone tell me whether it is normal or not? If not, how can I improve this?
Thanks in advance,
John
####<Feb 29, 2012 12:11:03 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535463373> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>
####<Feb 29, 2012 12:11:05 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <9baa7a67b5727417:26f76f6c:135ca05cff2:-8000-00000000000000b0> <1330535465664> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>
####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>
####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>
####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>
####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>
####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442300> <BEA-320143> <Scheduled 1 data retirement tasks as per configuration.>
####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive HarvestedDataArchive>
####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive HarvestedDataArchive. Retired 0 records in 0 ms.>
####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive EventsDataArchive>
####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive EventsDataArchive. Retired 0 records in 0 ms.>
####<Feb 29, 2012 1:10:23 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <weblogic.cluster.MessageReceiver> <<WLS Kernel>> <> <> <1330539023098> <BEA-003107> <Lost 2 unicast message(s).>
####<Feb 29, 2012 1:10:36 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539036105> <BEA-000111> <Adding Pinellas1tMS2 with ID -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 to cluster: Pinellas1tCluster1 view.>
####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084375> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>
####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084507> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>
Edited by: john wang on Feb 29, 2012 10:36 AM
Edited by: john wang on Feb 29, 2012 10:37 AM
Edited by: john wang on Feb 29, 2012 10:38 AM
Hi John,
There may be some circumstances, like when there are many files in the WEB-INF folder and the JSPs don't use TLDs.
I don't think a 1-hour deployment is normal; it should be much faster.
Since you are using 10.3.5, I suggest you install the corresponding patch:
1. Download patch 10118941 (p10118941_1035_Generic.zip)
2. Uncompress the file p10118941_1035_Generic.zip
3. Copy the required files (patch-catalog_XXXXX.xml, CIRF.jar ) to the Patch Download Directory (typically, this folder is <WEBLOGIC_HOME>/utils/bsu/cache_dir).
4. Rename the file patch-catalog_XXXXX.xml into patch-catalog.xml .
5. Start Smart Update from <WEBLOGIC_HOME>/utils/bsu/bsu.sh .
6. Select "Work Offline" mode.
7. Go to File->Preferences, and select "Patch Download Directory".
8. Click "Manage Patches" on the right panel.
9. You will see the patch in the panel below (Downloaded Patches)
10. Click "Apply button" of the downloaded patch to apply it to the target installation and follow the instructions on the screen.
11. Add "-Dweblogic.jsp.ignoreTLDsProcessingInWebApp=true" to the Java options to ignore additional findTLDs cost.
12. Restart servers.
Hope this helps.
Thanks,
Cris -
RPURMP00 program takes too long
Hi Guys,
Need some help on this one, guys; I am not getting anywhere with this issue.
I am running RPURMP00 (Program to Create Third-Party Remittance Posting Run) and while running it in test mode for 1 employee it takes too long.
I ran this in background during off hours, but it takes 19,000+ sec to run and then cancels.
The long text message is No entry in table T51R6_FUNDINFO (Remittance detail table for all entities) for key 0002485844 and Job cancelled after system exception ERROR_MESSAGE
I check the program and I found a nested loop within the program (include RPURMP02 ) and decided to debug it with a break point.
It short dumped and here is the st22 message and source code extract.
----Message -
" Time limit exceeded ".
"The program "RPURMP00" has exceeded the maximum permitted runtime without
Interruption and has therefore been terminated."
----Source code extract -
Include RPURMP02
172 *&---------------------------------------------------------------------*
173 *& Form get_advice_info
174 *&---------------------------------------------------------------------*
175 * text
176 *----------------------------------------------------------------------*
177 * --> p1 text
178 * <-- p2 text
179 *----------------------------------------------------------------------*
180 FORM get_advice_info .
181
182 * get information for advice form only if vendor sub-group and
183 * employee detail is maintained
184 IF ( NOT t51rh-lifsg IS INITIAL ) AND
185 ( NOT t51rh-hrper IS INITIAL ).
186
187 * get remittance items employee number
188 SELECT * FROM t51r4 WHERE remky = t51r5-remky. "#EC CI_GENBUFF "SAM0632658
189 * get payroll seqno determined by PERNR and RDATN
>>>>> SELECT * FROM t51r8 WHERE pernr = t51r4-pernr
191 AND rdatn = t51r5-rdatn
192 ORDER BY PRIMARY KEY. "#EC CI_GENBUFF
193 EXIT.
194 ENDSELECT.
Has anyone ever come across this situation? Any input from anyone on this?
Regards.
CJ
Hi,
What is your SAP version?
Have you checked whether there are any OSS notes on performance?
Regards,
Atish -
AME CS6 rendering with AE and Pr takes too long
Hi Guys,
Need some help here. I have rendered a 30-sec MP4 video in 1920 x 1080 HD format at 25 frames, w/o scripting, in AME, and it took 4 hours!
Why does it take so long? I have rendered a 2-minute video with the same format w/ scripting and it took less than 30 minutes to render.
I'm using After Effects and Premiere Pro, both CS6, and using Dynamic Link in AME.
What seems to be wrong in my current settings?
Any help would be appreciated.
Thanks!
This may be a waste of time, but it won't take a minute and is something you should always do whenever things go strangely wrong... trash the preferences, assuming you haven't done it already.
Many weird things happen as a result of corrupt preferences which can create a vast range of different symptoms, so whenever FCP X stops working properly in any way, trashing the preferences should be the first thing you do using this free app.
http://www.digitalrebellion.com/prefman/
Shut down FCP X, open PreferenceManager and in the window that appears:-
1. Ensure that only FCP X is selected.
2. Click Trash
The job is done instantly and you can re-open FCP X.
There is absolutely no danger in trashing preferences and you can do it as often as you like.
The preferences are kept separately from FCP X and if there aren't any when FCP X opens it automatically creates new ones . . . instantly.