Taking too much time in setting up
I connected my newly bought iPod to the computer. It asked me to set it up, and I clicked Done. It's now been half an hour and an orange light is blinking, but no control works in iTunes. I tried to eject it, but it says it is still working. How long does the first charge usually take, and what should I do now?
Similar Messages
-
Taking too much time to load application
Hi,
I have deployed a J2EE application on Oracle 10g version 10.1.2.0.2, but the application is taking too much time to load. After loading, everything works fast.
I have another 10g server (same version) on which the same application loads very fast.
When I checked the Apache error logs, I found this:
[Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
[Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
[Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
[Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
[Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
[Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
[Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
[Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
[Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
[Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
[Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
[Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
[Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
[Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
[Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
[Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
[Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
Please HELP ME...
Hi, this is the solution given by your link:
A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
Problem
To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
Solution
The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
Oc4jUserKeepalive on
Oc4jConnTimeout 12000 (or a similar value)
Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
ajp.keepalive=true
For example:
java -Dajp.keepalive=true -jar oc4j.jar
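Where the -Dajp.keepalive=true option goes depends on how OC4J is started. If you launch OC4J standalone, it goes on that java command line exactly as shown. If the instance is managed by OPMN (the usual case in Oracle Application Server 10g), the equivalent would be a java-options entry in opmn.xml. This is a sketch only; the process-type id and the file location are assumptions, so check your own opmn.xml:

```xml
<!-- opmn.xml fragment (hypothetical; typically under $ORACLE_HOME/opmn/conf) -->
<process-type id="home" module-id="OC4J">
  <module-data>
    <category id="start-parameters">
      <data id="java-options" value="-server -Dajp.keepalive=true"/>
    </category>
  </module-data>
</process-type>
```

After changing opmn.xml you would restart the instance (e.g. via opmnctl) for the option to take effect.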
Please tell me where (in which file) I should put the option
java -Dajp.keepalive=true -jar oc4j.jar ? -
While condition is taking too much time
I have a query that returns around 2100 records (not many!). When I process the result set with a while loop, it takes too much time (around 30 seconds). Here is the code:
public static GroupHierEntity load(Connection con) throws SQLException {
    internalCustomer = false;
    String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
    // Note: the null check must come first, or startsWith() throws a NullPointerException.
    if (customerNameOfLogger == null || customerNameOfLogger.equals("")
            || customerNameOfLogger.startsWith("DPI") || customerNameOfLogger.startsWith("DUPONT")
            || customerNameOfLogger.equals("Unavailable")) {
        internalCustomer = true;
    }
    // Show all groups to internal customers, and only their own customer groups to external customers.
    if (internalCustomer) {
        stmtLoad = con.prepareStatement(sqlLoad);
        ResultSet rs = stmtLoad.executeQuery();
        return new GroupHierEntity(rs);
    } else {
        stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
        stmtLoadExternal.setString(1, customerNameOfLogger);
        stmtLoadExternal.setString(2, customerNameOfLogger);
        ResultSet rs = stmtLoadExternal.executeQuery();
        return new GroupHierEntity(rs);
    }
}

GroupHierEntity ge = GroupHierEntity.load(con);
while (ge.next()) {
    lvl = ge.getInt("lvl");
    oid = ge.getLong("oid");
    name = ge.getString("name");
    if (internalCustomer) {
        if (lvl == 2) {
            int i = getAlphaIndex(name);
            super.setAppendRoot(alphaIndex);
        }
    }
    gn = new GroupListDataNode(lvl + 1, oid, name);
    gn.setSelectable(true);
    this.addNode(gn);
    count++;
}
System.out.println("*** count " + count);
ge.close();
========================
Then I removed everything inside the while loop and ran it as-is; it still takes the same time (30 secs):
while(ge.next())
{count++;}
Why is the while condition (ge.next()) taking so much time? Is there any more efficient way of reading the result set?
Thanks,
bala
I tried all these things. The query itself is not taking much time (1 sec), but resultset.next() is taking too much time. I measured the time by putting System.out.println() calls at various points to see which part takes how long.
executeQuery() takes only 1 sec. Processing the result set (moving the cursor to the next position) is what takes too much time.
I have similar queries that return some 800 rows , that only takes 1 sec.
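For what it's worth, a slow ResultSet.next() combined with a fast executeQuery() often points at the JDBC fetch size: Oracle's driver defaults to 10 rows per network round trip, so 2100 rows cost a round trip every 10 rows. A minimal sketch (the SQL and connection are whatever your application already uses; setFetchSize is standard JDBC):

```java
import java.sql.*;

public class FetchTuning {
    // Rows travel from the server in batches of fetchSize; each batch is one
    // network round trip. Oracle's JDBC driver defaults to 10 rows per trip.
    static int roundTrips(int rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize; // ceiling division
    }

    // Raising the fetch size on the statement cuts those round trips.
    public static int countRows(Connection con, String sql) throws SQLException {
        try (PreparedStatement stmt = con.prepareStatement(sql)) {
            stmt.setFetchSize(500); // 2100 rows: 5 round trips instead of 210
            try (ResultSet rs = stmt.executeQuery()) {
                int count = 0;
                while (rs.next()) {
                    count++;
                }
                return count;
            }
        }
    }
}
```

(try-with-resources needs Java 7+; with the JDKs of that era you would close the statement and result set in a finally block instead.)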
I suspect resultset.next(). Is there any other alternative? -
Report is taking too much time when running from parameter form
Dear All
I have developed a report in Oracle Reports Builder 10g. While running it from Reports Builder, the data comes very fast.
But if it is run from the parameter form, it takes too much time to format the report as PDF.
Please suggest any configuration or setting if anybody has an idea.
Thanks
Hi,
The first thing to check is whether the query is running to completion in TOAD. By default, TOAD selects just the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL Inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare this SQL and its explain plan to the SQL you ran in TOAD.
Thirdly, check that the session context is the same in both cases: check that any custom contexts and the USER_ENV context are the same, and that any security packages or VPD policies used in the SQL have been initialised the same way.
If you still cannot determine the difference then trace both sessions.
Rod West -
Auto Invoice Program taking too much time : problem with update sql
Hi ,
Oracle db version 11.2.0.3
Oracle EBS version : 12.1.3
Though we have a SEV-1 SR open with Oracle, we have not had much success.
We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. On troubleshooting we found one query taking most of the time, and we seek suggestions on how to tune it. I am attaching the explain plan for it. It's an update statement. Please guide.
Plan
UPDATE STATEMENT ALL_ROWSCost: 0 Bytes: 124 Cardinality: 1
50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
27 FILTER
26 HASH JOIN Cost: 8,937,633 Bytes: 4,261,258,760 Cardinality: 34,364,990
24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413 Bytes: 446,744,870 Cardinality: 34,364,990
23 SORT UNIQUE Cost: 8,618,413 Bytes: 4,042,339,978 Cardinality: 34,364,990
22 UNION-ALL
9 FILTER
8 SORT GROUP BY Cost: 5,643,052 Bytes: 3,164,892,625 Cardinality: 25,319,141
7 HASH JOIN Cost: 1,640,602 Bytes: 32,460,436,875 Cardinality: 259,683,495
1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
6 HASH JOIN Cost: 853,567 Bytes: 22,544,143,440 Cardinality: 214,706,128
4 HASH JOIN Cost: 536,708 Bytes: 2,357,000,550 Cardinality: 29,835,450
2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008 Bytes: 1,163,582,550 Cardinality: 29,835,450
3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314 Bytes: 1,193,526,000 Cardinality: 29,838,150
5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951 Bytes: 3,123,197,116 Cardinality: 120,122,966
21 FILTER
20 SORT GROUP BY Cost: 2,975,360 Bytes: 877,447,353 Cardinality: 9,045,849
19 HASH JOIN Cost: 998,323 Bytes: 17,548,946,769 Cardinality: 180,916,977
13 VIEW VIEW AR.index$_join$_027 Cost: 108,438 Bytes: 867,771,256 Cardinality: 78,888,296
12 HASH JOIN
10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206 Bytes: 867,771,256 Cardinality: 78,888,296
11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322 Bytes: 867,771,256 Cardinality: 78,888,296
18 HASH JOIN Cost: 748,497 Bytes: 3,281,713,302 Cardinality: 38,159,457
14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
17 HASH JOIN Cost: 519,713 Bytes: 1,969,317,900 Cardinality: 29,838,150
15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822 Bytes: 716,115,600 Cardinality: 29,838,150
16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847 Bytes: 1,253,202,300 Cardinality: 29,838,150
25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552 Bytes: 5,158,998,615 Cardinality: 46,477,465
41 SORT GROUP BY Bytes: 75 Cardinality: 1
40 FILTER
39 MERGE JOIN CARTESIAN Cost: 11 Bytes: 75 Cardinality: 1
35 NESTED LOOPS Cost: 8 Bytes: 50 Cardinality: 1
32 NESTED LOOPS Cost: 5 Bytes: 30 Cardinality: 1
29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3 Bytes: 22 Cardinality: 1
28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2 Cardinality: 1
31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2 Bytes: 133,114,520 Cardinality: 16,639,315
30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 20 Cardinality: 1
33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2 Cardinality: 1
38 BUFFER SORT Cost: 9 Bytes: 25 Cardinality: 1
37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 25 Cardinality: 1
36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
49 SORT GROUP BY Bytes: 48 Cardinality: 1
48 FILTER
47 NESTED LOOPS
45 NESTED LOOPS Cost: 7 Bytes: 48 Cardinality: 1
43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4 Bytes: 20 Cardinality: 1
42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3 Cardinality: 1
44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 28 Cardinality: 1
As per Oracle, they suggested multiple patches, but that has not been helpful. Please suggest how I can tune this query; I don't have much of a clue about query tuning.
Regards
Hi Paul, my bad. I am sorry I missed it.
Query as below :
UPDATE RA_CUST_TRX_LINE_GL_DIST LGD SET (AMOUNT, ACCTD_AMOUNT) = (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */ NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ), NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1 WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD2.ACCOUNT_SET_FLAG = 'N' AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND REC1.ACCOUNT_CLASS = 'REC' AND REC1.LATEST_REC_FLAG = 'Y' AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ), PERCENT = (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */ DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG, 'SUSPENSEN', LGD.PERCENT, 'UNBILLN', LGD.PERCENT, 'UNEARNN', LGD.PERCENT, NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2 WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID AND REC2.ACCOUNT_CLASS = 'REC' AND REC2.LATEST_REC_FLAG = 'Y' AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY REC2.GL_DATE, LGD.GL_DATE ), LAST_UPDATED_BY = :B1 , 
LAST_UPDATE_DATE = SYSDATE WHERE CUST_TRX_LINE_GL_DIST_ID IN (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */ MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3 WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' ) OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL )) AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD3.ACCOUNT_SET_FLAG = 'N' AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID AND REC3.ACCOUNT_CLASS = 'REC' AND REC3.LATEST_REC_FLAG = 'Y' AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4 , 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) UNION SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */ TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS|| LGD5.ACCOUNT_SET_FLAG, 'REVN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID, NULL ) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD5, 
RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID AND REC5.ACCOUNT_CLASS = 'REC' AND REC5.LATEST_REC_FLAG = 'Y' AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE') OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' ))) GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS) HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
I understand that this could be a seeded query but my attempt is to tune it.
Regards -
Hi,
The following query is taking too much time (more than 30 minutes); I am working with 11g.
The table has three columns (rid, ida, geometry) and indexes have been created on all columns.
The table has around 540,000 records of point geometries.
Please help me with your suggestions. I want to select duplicate point geometries where ida='CORD'.
SQL> select a.rid, b.rid from totalrecords a, totalrecords b where a.ida='CORD' and b.ida='CORD' and
sdo_equal(a.geometry, b.geometry)='TRUE' and a.rid != b.rid order by 1,2;
regards
I have removed some AND conditions that were not necessary. It's just that in
a.ida='CORD' AND
b.ida='CORD' AND
a.rid != b.rid AND
sdo_equal(a.geometry, b.geometry)='TRUE'
ORDER BY 1,2;
Oracle can see, for example, that if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so it will not bother evaluating the rest of the conditions: it is all AND'ed together, and TRUE AND FALSE = FALSE.
So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions), this gives you a small performance benefit per row. Too small to notice on a few rows, but across a self-join of 540,000 records it should be noticeable.
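The lazy-AND principle the reply describes shows up in any language with short-circuit evaluation. SQL's optimizer is free to reorder predicates, but the idea that a cheap failing condition spares the expensive one looks like this in, say, Java (a side illustration only; expensive() stands in for a costly predicate such as sdo_equal):

```java
public class ShortCircuit {
    static int expensiveCalls = 0;

    // Stands in for a costly predicate such as sdo_equal(...) = 'TRUE'.
    static boolean expensive() {
        expensiveCalls++;
        return true;
    }

    public static void main(String[] args) {
        boolean cheap = false;                 // stands in for a.ida = 'CORD' failing
        boolean result = cheap && expensive(); // expensive() is never evaluated
        System.out.println(result + ", expensive calls: " + expensiveCalls);
        // prints "false, expensive calls: 0"
    }
}
```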
> and I have set layer_gtype=POINT.
Good, that will help. I forgot about that one (thanks Luc!).
> Now I am facing the problem of DELETING the duplicate point geometries. The following query is taking too much time.
What is too much time? Do you need to delete these duplicate points on a daily or hourly basis, or is this a one-time cleanup action? If it's a one-time cleanup operation, does it really matter if it takes half an hour?
And if this is a daily or even hourly operation, then why don't you prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
Lastly: can you post an explain plan for your queries? That might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
[code]
<code/results here>
[/code]
that way the original formatting is kept and it makes things much easier to read.
Regards,
Stefan -
Hi all,
I'm quite new to database administration. My problem is that I'm trying to import a dump file, but one of the tables is taking too much time to import.
Description:
1. The export was taken from a source database on Oracle 8i; its character set is WE8ISO8859P1.
2. I am importing into 10g with character set UTF8; the national character set is the same.
3. The dump file is about 1.5 GB.
4. I got errors like "value too large for column", so in the target DB (which is UTF8) I converted all columns from VARCHAR2 to CHAR.
5. While importing, some tables import very fast, but at one particular table it gets very slow.
Please help me. Thanks in advance...
Hello,
> 4. I got errors like "value too large for column", so in the target DB (which is UTF8) I converted all columns from VARCHAR2 to CHAR.
> 5. While importing, some tables import very fast, but at one particular table it gets very slow.
For point *4*, it's typically due to the character set conversion.
You export data in WE8ISO8859P1 and import into UTF8. In WE8ISO8859P1, characters are encoded in *1 byte*, so *1 CHAR = 1 BYTE*. In UTF8 (Unicode), characters are encoded in up to *4 bytes*, so *1 CHAR > 1 BYTE*.
For this reason you'll have to modify the length of your CHAR or VARCHAR2 columns, or add the CHAR option (the default is BYTE) in the column datatype definitions of the tables. For instance:
VARCHAR2(100 CHAR)
The NLS_LENGTH_SEMANTICS parameter may also be used, but it's not very well handled by export/import.
So, I suggest you this:
1. Set NLS_LENGTH_SEMANTICS=CHAR on your target database and restart the database.
2. Create all your tables (empty) from a script on the target database (without the indexes and constraints).
3. Import the data into the tables.
4. Import the indexes and constraints.
You'll find more information in the following note on MOS:
Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
For point *5*, it may be due to the conversion problem you are experiencing; it may also be due to some special datatype like LONG.
Else, I have a question: why did you choose UTF8 on your target database and not AL32UTF8?
AL32UTF8 is recommended for Unicode use.
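The byte expansion behind point 4 is easy to demonstrate outside the database. A small Java sketch (purely illustrative; the strings are made up):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetWidth {
    // Number of bytes a string occupies in the given encoding.
    static int bytesIn(String s, Charset cs) {
        return s.getBytes(cs).length;
    }

    public static void main(String[] args) {
        // Plain ASCII is 1 byte per character in both encodings.
        System.out.println(bytesIn("invoice", StandardCharsets.ISO_8859_1)); // 7
        System.out.println(bytesIn("invoice", StandardCharsets.UTF_8));      // 7
        // 'e-acute' is 1 byte in WE8ISO8859P1-style encodings but 2 bytes in
        // UTF-8, which is why a VARCHAR2(n BYTE) column can overflow after
        // conversion even though the character count is unchanged.
        System.out.println(bytesIn("café", StandardCharsets.ISO_8859_1));    // 4
        System.out.println(bytesIn("café", StandardCharsets.UTF_8));         // 5
    }
}
```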
Hope this helps.
Best regards,
Jean-Valentin -
Sites taking too much time to open, and showing errors
Hi,
I've set up a SharePoint 2013 environment correctly and created a site collection. Everything was working fine, but suddenly now when I try to open that site collection or the Central Admin site, it takes too much time to open a page; most of the time it does not open any page or the Central Admin site at all, and shows the following error.
I even went to the logs folder under the 15 hive, but found nothing useful. Please tell me why it takes about 10-12 minutes to open a site or any page and then shows the above error.
This usually happens if you are low on hardware requirements. Check whether your machine conforms to the required software and hardware requirements.
https://technet.microsoft.com/en-us/library/cc262485.aspx
http://sharepoint.stackexchange.com/questions/58370/minimum-real-world-system-requirements-for-sharepoint-2013
Please remember to up-vote or mark the reply as answer if you find it helpful. -
Creative Cloud is taking too much time to load and is not downloading the one-month trial for Photoshop I just paid money for.
Stop the download if it's stalled, and restart your download.
-
Hi, for the last two days my iPhone (iPhone 4 with iOS 5) has been very slow to open apps, and very slow when I check the notification window; it takes too much time to open when I swipe down. Help me resolve the issue.
The Basic Troubleshooting Steps are:
Restart... Reset... Restore...
iPhone Reset
http://support.apple.com/kb/ht1430
Try this First... You will Not Lose Any Data...
Turn the Phone Off...
Press and Hold the Sleep/Wake Button and the Home Button at the Same Time...
Wait for the Apple logo to Appear and then Disappear...
Usually takes about 15 - 20 Seconds... ( But can take Longer...)
Release the Buttons...
Turn the Phone On...
If that does not help... See Here:
Backing up, Updating and Restoring
http://support.apple.com/kb/HT1414 -
SAP GUI taking too much time to open transactions
Hi guys,
I have done a complete system copy from the Production to the Quality server.
After that I started SAP on the Quality server; it is taking too much time to open SAP transactions (it is going into compilation mode).
I started SGEN, but it is giving TIME_OUT errors. Please help me with this issue.
My hardware details on the Quality server:
operating system : SuSE Linux 10 SP2
Database : 10.2.0.2
SAP : ECC 6.0 SR2
RAM size : 8 GB
Hard disk space : 500 GB
swap space : 16 GB.
Regards,
Ramesh
Hi,
> I started SGEN, but it is giving TIME_OUT errors. Please help me with this issue.
You are supposed to run SGEN as a batch job, so it should not be possible to get time-out errors.
I've seen a full SGEN last from 3 hours on high-end systems up to 8 full days on PC hardware...
Regards,
Olivier -
Taking too much time in rules (DTP schedule run)
Hi,
I am scheduling a DTP which has filters to minimize the loaded data.
When I run the DTP, it takes too much time in the "rules" step (I can see the DTP monitor status package by package and step by step: "Start Routine", "Rules" and "End Routine").
It is consuming too much time in the rules mapping.
What is the problem, and are there any solutions?
regards,
sree
Hi,
The time taken at "rules" depends on the complexity of your routine: if it performs a complex calculation, it will take time.
Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP, and the job class.
You can find these as follows:
Go to the DTP, open the "Goto" menu and select "Settings for Batch Manager".
In that screen, increase the number of processes from 3 to a higher number (max 9).
Change the job class to 'A'.
If your DTP is still running, cancel (kill) it and delete the request from the cube,
then change these settings and run your DTP one more time.
You should observe the difference.
Reddy -
Taking too much time in collecting during business content activation
Hi all,
I am collecting business content objects for activation. I selected the 0FIAA_CHA object; while collecting for the activation it takes too much time, then asks for source system authorisation, and then throws the error "maximum run time exceeded". I had selected "data flow before" there.
What can be the reason for it?
Please help.
Hi,
You should also always try to have the latest BI Content patch installed, but I don't think this is the problem. It seems that there are a lot of objects to collect. Under 'Grouping' you can select the option 'Only necessary objects'; please check whether you can use this option to install the objects that you need from content.
Best Regards,
Des. -
Taking too much time using BufferedWriter to write to a file
Hi,
I'm using the method extractItems(), given below, to write data to a file. This method takes too much time to execute when the number of records in the enumeration is 10,000 and above; to be precise, it takes around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much more. Has somebody faced this problem before, and if so, what could be the cause? This is very high-priority work and it would be really helpful if someone could give me some info on this.
Thanks in advance.
public String extractItems() throws InternalServerException {
    try {
        String extractFileName = getExtractFileName();
        FileWriter fileWriter = new FileWriter(extractFileName);
        BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
        CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
        System.out.println("Before -1");
        CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
        System.out.println("After -1");
        PrintWriter out = new PrintWriter(bufferWrt);
        System.out.println("Before -2");
        TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
        System.out.println("After -2");
        XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
        Enumeration allitems = itemSet.allItems();
        System.out.println("the batch size : " + itemSet.getBatchSize());
        XDForm frm = itemSet.getXDForm();
        XDFormProperty[] props = frm.getXDFormProperties();
        System.out.println("Before -3");
        bufferWrt.newLine();
        long startTime, startTime1, startTime2, startTime3;
        startTime = System.currentTimeMillis();
        System.out.println("time here is--before-while : " + startTime);
        while (allitems.hasMoreElements()) {
            String aRow = "";
            XDItem item = (XDItem) allitems.nextElement();
            for (int i = 0; i < props.length; i++) {
                String value = item.getStringValue(props[i]);
                if (value == null || value.equalsIgnoreCase("null"))
                    value = "";
                if (i == 0)
                    aRow = value;
                else
                    aRow += ("\t" + value);
            }
            startTime1 = System.currentTimeMillis();
            System.out.println("time here is--before-writing to buffer --new: " + startTime1);
            bufferWrt.write(aRow.toCharArray());
            bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
            bufferWrt.newLine();
            startTime2 = System.currentTimeMillis();
            System.out.println("time here is--after-writing to buffer : " + startTime2);
        }
        startTime3 = System.currentTimeMillis();
        System.out.println("time here is--after-while : " + startTime3);
        out.close(); // added by rosmon to check extra time taken for extraction
        bufferWrt.close();
        fileWriter.close();
        System.out.println("After -3");
        return extractFileName;
    } catch (Exception e) {
        e.printStackTrace();
        throw new InternalServerException(e.getMessage());
    }
}
Hi fiontan,
Thanks a lot for the response!!!
Yeah!! I know it's a lot of code, but I thought it'd be more informative if the whole function was quoted.
I'm in fact using the PrintWriter to wrap the BufferedWriter but am not using the print() method.
Does it save any time by using the print() method??
The place where the delay is occurring is the while loop shown below:
while (allitems.hasMoreElements()) {
    String aRow = "";
    XDItem item = (XDItem) allitems.nextElement();
    for (int i = 0; i < props.length; i++) {
        String value = item.getStringValue(props[i]);
        if (value == null || value.equalsIgnoreCase("null"))
            value = "";
        if (i == 0)
            aRow = value;
        else
            aRow += ("\t" + value);
    }
    startTime1 = System.currentTimeMillis();
    System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
    bufferWrt.write(aRow.toCharArray());
    out.flush(); // added by rosmon to check extra time taken for extraction
    bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
    bufferWrt.newLine();
    startTime2 = System.currentTimeMillis();
    System.out.println("time here is--after-writing to buffer : " + startTime2);
}
What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then it starts off again, and it goes on like that until all the records are done.
Please do let me know if you have any idea why this is happening! This bug is giving me a scare.
thanks in advance -
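For what it's worth, the periodic stalls in a loop like the one above are often caused by calling flush() on every row, which forces an I/O operation per line and defeats the whole point of the BufferedWriter. Below is a minimal, self-contained sketch of the faster pattern: build each row with a StringBuilder instead of repeated String concatenation, write it without flushing, and flush once after the loop. The XDItem/XDForm classes aren't available here, so plain String arrays stand in for the items, and a StringWriter stands in for the FileWriter; the names are illustrative, not from any real API.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.StringWriter;

public class RowExportSketch {
    // Builds one tab-separated row; null or "null" values become empty strings,
    // mirroring the null-handling in the forum code above.
    static String buildRow(String[] values) {
        StringBuilder row = new StringBuilder();
        for (int i = 0; i < values.length; i++) {
            String value = values[i];
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i > 0)
                row.append('\t');
            row.append(value);
        }
        return row.toString();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in data: each inner array plays the role of one XDItem's property values.
        String[][] items = { { "a", null, "c" }, { "1", "2", "null" } };

        StringWriter sink = new StringWriter();      // stands in for the FileWriter
        BufferedWriter bufferWrt = new BufferedWriter(sink);
        for (String[] item : items) {
            bufferWrt.write(buildRow(item));
            bufferWrt.newLine();                     // no flush inside the loop
        }
        bufferWrt.flush();                           // flush once, after the loop
        System.out.print(sink.toString());
    }
}
```

The key design point is that BufferedWriter already batches small writes into one large buffer; flushing per row throws that batching away, so removing the in-loop flush() calls may be all the fix that's needed.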
Taking too much time (1min) to connect to database
Hi,
I have oracle 10.2 and 10g application server.
It's taking too much time to connect to the database through the application (in the browser). The connection through SQL*Plus is fine.
Please share your experience.
Regards,
Naseer

Dear AnaTech,
I am going to ask a question not related to the one you already answered: how do I connect Forms 6i and Developer 10g with OracleAS?
I have installed Developer Suite 10g Ver. 10.1.2 and also Form Builder 6i, and both are working. On my other machine I have installed Oracle Database 10g Enterprise Edition Release 10.2.0.1.0, and on that database machine I also installed Oracle Enterprise Manager 10g Application Server Control 10.1.2.0.2.
My database connectivity with Developer Suite Forms and Reports, and also with Forms 6i and Reports 6i, is working fine. No problem there.
Now my first question: when I try to run a Form 6i module through "Run from Web", I get this error: FRM-99999: error 18121 occurred, see the release notes.
My main question is how I can control my OracleAS 10g with Forms. The basic functionality of OracleAS is the mid-tier, but I am not utilizing the mid-tier; I am using a two-tier environment here even though I installed a three-tier environment, so please tell me how to utilize it as three-tier.
I hope you don't mind that I ask this question here. If you give me your email, we can discuss this in detail and I can benefit from your great expertise; I also want to know how to utilize my real 3-tier environment.
Waiting for your great response.
Regards,
K.J.J.C