Performance issue in loading
Hi All,
We have a DSO (consolidation DSO) to cube load whose runtime is 7h 45m 19s. It is a full load that happens daily, and the source DSO also loads to several other targets. In the DTP there is a filter on four fields which is used to restrict the data being loaded.
1. Start of extraction in the data package takes 1 hour 13 minutes 9 seconds.
2. Records inserted: 5,951,497.
3. There is no logic in the transformation.
4. The key fields are sales document number and sales order item.
5. Secondary indexes are available for the DSO, but the fields used in the DTP filter are not among them.
How can I optimize the load runtime?
Thanks,
Siva
Hi Siva,
I can see the extraction job taking 1+ hour. Is that only for the first data package, or for all the other data packages as well?
You still haven't answered my question about the need for a data drop/refresh on the cubes. If you keep a standard DSO, you will always get the latest data into the cube with a regular delta load. Does your DataSource not capture deletion images? Which InfoArea are you working in, such as sales, inventory, or FI?
A 4-hour job for 126K records is longer than the expected runtime. Did you completely drop the indexes? I mean, after deleting the indexes, no other job should run before the DTP. In your case we would normally drop the cube contents completely, and technically dropping the cube contents recreates the secondary indexes on the cube; you can see this in the job log.
So in the chain, first drop the cube contents, then drop the indexes on the cube.
What type of source DSO do you have? In the development system, you can try secondary indexes on the source DSO for faster read access.
Did you try the "Delete Overlapping Requests" option? Basically it completely deletes previous requests with the same selection, but it may keep the dimension tables, so the system does not have to re-create/check new SIDs/DIMIDs daily.
Also check which dimension table has a large number of dimension values. If a dimension holds more than 20% of the fact table's cardinality, you can define it as a line-item dimension. Buffering the number range objects on master data objects also helps a little.
Please post your feedback.
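The drop-index advice above can be sketched generically. This is a minimal illustration in Python with sqlite3 standing in for the cube's database tables; the table and index names are invented for the example, and a real BW process chain would do this via the "Delete Index"/"Create Index" steps rather than raw SQL:

```python
import sqlite3

def bulk_load(rows):
    """Load rows with the secondary index dropped, then rebuild it once."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE cube_fact (doc_no INTEGER, item INTEGER, amount REAL)")
    con.execute("CREATE INDEX idx_doc ON cube_fact (doc_no)")

    # 1. Drop the secondary index so each INSERT skips index maintenance.
    con.execute("DROP INDEX idx_doc")

    # 2. Load all records in one batch.
    con.executemany("INSERT INTO cube_fact VALUES (?, ?, ?)", rows)

    # 3. Recreate the index once, after the load is complete.
    con.execute("CREATE INDEX idx_doc ON cube_fact (doc_no)")
    con.commit()
    return con.execute("SELECT COUNT(*) FROM cube_fact").fetchone()[0]

print(bulk_load([(i, 10, 1.0) for i in range(1000)]))  # 1000
```

The payoff grows with volume: maintaining an index per inserted row is far more expensive than one rebuild at the end.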
Similar Messages
-
Performance issue with loading Proclarity Main Page..
Hi All,
I have ProClarity 6.3 installed on a Windows 2008 R2 OS. The ProClarity reports were working well until last week. For the last few days I have been seeing a slow response time when loading the ProClarity main page.
Loading the ProClarity main page in Internet Explorer 8 takes 150 seconds, while the same page loads in Google Chrome in 30 seconds.
Have any of you faced a similar issue?
I have already explored the following:
1. Cleared the cache in the PAS tool
2. Checked Event Viewer for any errors or warnings
3. Tried browsing the ProClarity URL from the server itself (performance is still slow)
4. Validated memory consumption on the server side; MSSQLServer was consuming the most memory, so I restarted it. After the restart the issue remains (with loading the main page in IE only)
5. Checked drive space; all drives have at least 1.5 GB of free space
6. Cleared the ProClarity event logs
The issue is not only with loading the main page: navigating to any further web pages in the ProClarity Standard and Professional versions also responds very slowly.
The only other option I can think of now is restarting the Windows server, which is not an easy step since it is a production server.
But the page loads in 30 seconds in Chrome and 150 seconds in IE (i.e., 5 times longer), so does proposing a server restart make sense?
Any help, suggestions, or thoughts on what I am facing? Thanks
Regards,
Aravind -
onInputProcessing for two pages
DATA: event TYPE REF TO if_htmlb_data.
" Read the HTMLB event that triggered this request
event = cl_htmlb_manager=>get_event_ex( request ).
IF event IS NOT INITIAL AND event->event_name = 'button'.
  " The button's onClick value arrives as the server event name
  navigation->goto_page( event->event_server_name ).
ENDIF.
page1.htm
<%@page language="abap" otrTrim="true"%>
<%@extension name="htmlb" prefix="htmlb"%>
<htmlb:content design="design2003">
<htmlb:page>
<htmlb:form>
<htmlb:button text = "next"
design = "NEXT"
onClick = "page2.htm" />
</htmlb:form>
</htmlb:page>
</htmlb:content>
page2.htm
<%@page language="abap" otrTrim="true"%>
<%@extension name="htmlb" prefix="htmlb"%>
<htmlb:content design="design2003">
<htmlb:page>
<htmlb:form>
<htmlb:button text = "Page 1"
design = "PREVIOUS"
onClick = "page1.htm" />
</htmlb:form>
</htmlb:page>
</htmlb:content>
The above will work fine.
Another way:
You can define a global variable in your application class and subsequently set its value to the name of the target page as required.
Whenever you want to move to some page, just call this in the onClick event handling of the button:
navigation->goto_page( global_variable ).
where global_variable is the variable you have defined.
Hope this works for you.
If not, reply.
regards,
Hemendra -
Performance issue of loading a report
Post Author: satish_nair31
CA Forum: General
Hi,
I am facing a performance problem in some of our reports where we have to fetch some 1-2 lakh records for display. Initially we passed the dataset as the source of data to the report; it takes a very long time to load using this technique. To improve performance, we then passed only the filter condition through the report viewer object. This improved things a lot, but the reports still take an unacceptable amount of time to load. Is there any way to improve the performance?
Post Author: synapsevampire
CA Forum: General
How could you possibly know if you're in the same situation? The original poster didn't include the software or the version being used, whether it's the RDC, etc. That is very likely the reason why they received no responses.
They also referenced two different methods for retrieving the data; which are you using?
The trick is to make sure that you are passing all of the WHERE conditions in the SQL to the database.
You can probably check this using a database tool, but again, there is nothing technical in your post about the database either, so you certainly shouldn't expect quality help.
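The WHERE-pushdown point can be sketched quickly. The following Python/sqlite3 snippet is only an illustration (the table and data are made up); the same idea applies to whatever database the report runs against:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (region TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("EU", 10.0), ("US", 20.0), ("EU", 30.0)])

# Slow pattern: pull every row across the wire, then filter in the report layer.
all_rows = con.execute("SELECT region, amount FROM orders").fetchall()
client_side = [r for r in all_rows if r[0] == "EU"]

# Fast pattern: let the database apply the filter; far fewer rows are transferred.
server_side = con.execute(
    "SELECT region, amount FROM orders WHERE region = ?", ("EU",)).fetchall()

assert client_side == server_side  # same result, much less data moved
```

With 1-2 lakh rows, the difference between these two patterns dominates the load time.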
-k -
Essbase Studio Performance Issue : Data load into BSO cube
Hello,
Having successfully built my outline by member loading through Essbase Studio, I tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
In the Studio properties file I set oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am not sure the streaming mode would provide a much faster alternative to the non-streaming mode. What I'd like to know is which Essbase settings I can change (in either the Essbase or Studio server) to speed up my data load. I am loading into a block storage database with 3 dense, 8 sparse, and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24B. Assuming that in my real application the number of blocks created will be at least 1,000 times more than this, I need to make some changes to the settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing the cache settings, buffer size, or multiple threads help to increase performance? Or what would you suggest I do?
Thank you very much.
Hello user13695196,
(sorry I no longer remember my system number here)
Before attempting any optimisation in the Essbase (also Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
I would recommend:
1. Create a view in your DB source schema from your SQL statement (the one behind your data load rule).
2. Query against this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete. Also count the returned number of rows, for your information and for future comparison of results.
If your query runs longer than you think is acceptable, then
a) check the DB statistics,
b) check and/or consider creating indexes,
c) if you are unsure, kindly ask your DBA for help. Usually they can help you very fast.
(Don't be shy - a DBA is a human being like you and me :-) )
Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
One additional hint:
We have often had problems when using views for data load (not only performance, but also other strange behavior). That's the reason I prefer to build directly on (persistent) tables.
Just keep in mind: if nothing else helps, create a table from your view and then query your data from this table for your Essbase data load. Normally, however, this should be your last option.
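Step 2 above (fetch all rows from the view and time it) can be sketched as follows. This is a stand-in using Python and sqlite3; in practice you would run it against Oracle from SQL Developer or TOAD, and the view and table names here are hypothetical:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact (k INTEGER, v REAL)")
con.executemany("INSERT INTO fact VALUES (?, ?)",
                [(i, i * 1.5) for i in range(5000)])
# The view plays the role of the SQL behind the data load rule.
con.execute("CREATE VIEW v_load AS SELECT k, v FROM fact WHERE v > 100")

start = time.perf_counter()
rows = con.execute("SELECT * FROM v_load").fetchall()   # fetch ALL rows
elapsed = time.perf_counter() - start

print(len(rows))          # row count, to compare against future runs
print(f"{elapsed:.3f}s")  # wall-clock time of the full fetch
```

If this baseline is already slow at the source, no Essbase-side tuning will fix the load.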
Best Regards
(also to you Torben :-) )
Andre
Edited by: andreml on Mar 17, 2012 4:31 AM -
Power BI performance issue when loading a large amount of data from a database
I need to load a data set from my database, which has a large amount of data, and it takes a long time to initialize the data before I can build a report. Is there a good way to process large amounts of data in Power BI? Since I know many people analyze data
with Power BI, is there any suggestion for loading large amounts of data from a database?
Thanks a lot for help
Hi Ruixue,
We have made significant performance improvements to Data Load in the February update for the Power BI Designer:
http://blogs.msdn.com/b/powerbi/archive/2015/02/19/6-new-updates-for-the-power-bi-preview-february-2015.aspx
Would you be able to try again and let us know if it's still slow? With the latest improvements, it should take between a half and a third of the time that it used to.
Thanks,
M. -
Performance issue while loading 20 million rows
Hi all,
A load of 20 million rows was done into a table (which contains 173 columns) using SQL*Loader,
and direct=true was used.
Database: 9i
OS: Sun OS 5.10, Sun V890, 16 CPUs, 32 GB RAM
Elapsed time: 4 hours
But the same volume was tried into the same table (with the columns increased from 173 to 500) with the following details:
Database: Oracle 10.2.0.4.0 64-bit
OS: Sun OS 5.10, SUN-FIRE V6800, 24 CPUs and 54 GB RAM
Elapsed time: 6:06 hours
Please tell me what the problem could be and how I can minimize the loading time.
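Before hunting for a cause, it may help to normalize the two runs: since the column count nearly tripled between them, raw elapsed time is misleading. A rough back-of-the-envelope comparison (Python, using only the figures quoted above, and crudely treating every column as equal width):

```python
rows = 20_000_000

# (elapsed hours, column count) for each run, taken from the post above.
runs = {"9i, 173 cols": (4.0, 173), "10g, 500 cols": (6 + 6 / 60, 500)}

throughput = {}
for name, (hours, cols) in runs.items():
    rows_per_s = rows / (hours * 3600)
    throughput[name] = rows_per_s * cols   # column-values per second
    print(f"{name}: {rows_per_s:,.0f} rows/s, {throughput[name]:,.0f} values/s")
```

On this crude measure the 10g load actually moves more column-values per second than the 9i load, so part of the extra elapsed time may simply reflect the much wider row rather than a regression.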
Thanks in Advance
Anji
Hi burleson,
The AWR snapshot is as follows.
DB Name DB Id Instance Inst Num Release RAC Host
REVACC 1015743016 REVACC 1 10.2.0.4.0 NO P4061AFMAP
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 342 16-Sep-09 19:30:53 38 2.7
End Snap: 343 16-Sep-09 20:30:07 36 2.6
Elapsed: 59.24 (mins)
DB Time: 195.22 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 9,184M 9,184M Std Block Size: 16K
Shared Pool Size: 992M 992M Log Buffer: 10,560K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 1,097,030.72 354,485,330.18
Logical reads: 23,870.31 7,713,251.27
Block changes: 8,894.16 2,873,984.09
Physical reads: 740.82 239,382.82
Physical writes: 1,003.32 324,203.27
User calls: 28.54 9,223.18
Parses: 242.99 78,517.09
Hard parses: 0.03 8.55
Sorts: 0.60 193.91
Logons: 0.01 3.45
Executes: 215.63 69,676.00
Transactions: 0.00
% Blocks changed per Read: 37.26 Recursive Call %: 99.65
Rollback per transaction %: 0.00 Rows per Sort: 7669.57
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 96.93 In-memory Sort %: 100.00
Library Hit %: 99.97 Soft Parse %: 99.99
Execute to Parse %: -12.69 Latch Hit %: 99.52
Parse CPU to Parse Elapsd %: 91.82 % Non-Parse CPU: 99.64
Shared Pool Statistics Begin End
Memory Usage %: 44.50 44.46
% SQL with executions>1: 85.83 84.78
% Memory for SQL w/exec>1: 87.15 86.65
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 11,632 99.3
db file scattered read 320,585 210 1 1.8 User I/O
SQL*Net more data from client 99,234 164 2 1.4 Network
log file parallel write 5,750 149 26 1.3 System I/O
db file parallel write 144,502 142 1 1.2 System I/O
Time Model Statistics DB/Inst: REVACC/REVACC Snaps: 342-343
-> Total time in database user-calls (DB Time): 11713.1s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
DB CPU 11,631.9 99.3
sql execute elapsed time 3,131.4 26.7
parse time elapsed 53.7 .5
hard parse elapsed time 1.2 .0
connection management call elapsed time 0.3 .0
hard parse (sharing criteria) elapsed time 0.1 .0
sequence load elapsed time 0.1 .0
repeated bind elapsed time 0.0 .0
PL/SQL execution elapsed time 0.0 .0
DB time 11,713.1 N/A
background elapsed time 613.1 N/A
background cpu time 454.5 N/A
Wait Class DB/Inst: REVACC/REVACC Snaps: 342-343
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
User I/O 562,302 .0 304 1 51,118.4
System I/O 166,468 .0 295 2 15,133.5
Network 201,009 .0 165 1 18,273.5
Application 60 .0 5 82 5.5
Configuration 313 .0 4 12 28.5
Other 1,266 .0 3 2 115.1
Concurrency 9,305 .0 2 0 845.9
Commit 60 .0 1 21 5.5
Wait Events DB/Inst: REVACC/REVACC Snaps: 342-343
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
db file scattered read 320,585 .0 210 1 29,144.1
SQL*Net more data from clien 99,234 .0 164 2 9,021.3
log file parallel write 5,750 .0 149 26 522.7
db file parallel write 144,502 .0 142 1 13,136.5
db file sequential read 207,780 .0 93 0 18,889.1
enq: RO - fast object reuse 60 .0 5 82 5.5
write complete waits 135 .0 3 23 12.3
control file parallel write 2,501 .0 3 1 227.4
rdbms ipc reply 189 .0 2 12 17.2
control file sequential read 13,694 .0 2 0 1,244.9
buffer busy waits 8,499 .0 1 0 772.6
log file sync 60 .0 1 21 5.5
direct path write 8,290 .0 1 0 753.6
SQL*Net message to client 100,882 .0 1 0 9,171.1
log file switch completion 13 .0 0 38 1.2
os thread startup 2 .0 0 174 0.2
direct path read 25,646 .0 0 0 2,331.5
log buffer space 161 .0 0 1 14.6
latch free 7 .0 0 24 0.6
latch: object queue header o 180 .0 0 1 16.4
log file single write 11 .0 0 9 1.0
SQL*Net more data to client 893 .0 0 0 81.2
latch: cache buffers chains 767 .0 0 0 69.7
row cache lock 36 .0 0 2 3.3
LGWR wait for redo copy 793 .0 0 0 72.1
reliable message 60 .0 0 1 5.5
latch: cache buffers lru cha 11 .0 0 1 1.0
db file single write 1 .0 0 10 0.1
log file sequential read 10 .0 0 1 0.9
latch: session allocation 18 .0 0 1 1.6
latch: redo writing 4 .0 0 0 0.4
latch: messages 7 .0 0 0 0.6
latch: row cache objects 1 .0 0 0 0.1
latch: checkpoint queue latc 1 .0 0 0 0.1
PX Idle Wait 13,996 100.5 27,482 1964 1,272.4
SQL*Net message from client 100,881 .0 23,912 237 9,171.0
Streams AQ: qmn slave idle w 126 .0 3,442 27316 11.5
Streams AQ: qmn coordinator 255 50.6 3,442 13497 23.2
Streams AQ: waiting for time 1 100.0 545 544885 0.1
class slave wait 2 .0 0 2 0.2
Background Wait Events DB/Inst: REVACC/REVACC Snaps: 342-343
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 5,750 .0 149 26 522.7
db file parallel write 144,502 .0 142 1 13,136.5
control file parallel write 2,501 .0 3 1 227.4
direct path write 8,048 .0 1 0 731.6
control file sequential read 3,983 .0 0 0 362.1
os thread startup 2 .0 0 174 0.2
direct path read 25,646 .0 0 0 2,331.5
log buffer space 161 .0 0 1 14.6
events in waitclass Other 924 .0 0 0 84.0
log file single write 11 .0 0 9 1.0
db file single write 1 .0 0 10 0.1
log file sequential read 10 .0 0 1 0.9
db file sequential read 2 .0 0 5 0.2
latch: cache buffers chains 42 .0 0 0 3.8
latch: redo writing 4 .0 0 0 0.4
buffer busy waits 2 .0 0 0 0.2
rdbms ipc message 77,540 24.8 54,985 709 7,049.1
pmon timer 1,185 100.0 3,456 2916 107.7
Streams AQ: qmn slave idle w 126 .0 3,442 27316 11.5
Streams AQ: qmn coordinator 255 50.6 3,442 13497 23.2
smon timer 112 .0 3,374 30125 10.2
Streams AQ: waiting for time 1 100.0 545 544885 0.1
Operating System Statistics DB/Inst: REVACC/REVACC Snaps: 342-343
Statistic Total
AVG_BUSY_TIME 161,850
AVG_IDLE_TIME 187,011
AVG_IOWAIT_TIME 0
AVG_SYS_TIME 9,653
AVG_USER_TIME 152,083
BUSY_TIME 3,887,080
IDLE_TIME 4,491,132
IOWAIT_TIME 0
SYS_TIME 234,325
USER_TIME 3,652,755
LOAD 11
OS_CPU_WAIT_TIME 9,700
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 57,204,736
VM_OUT_BYTES 0
PHYSICAL_MEMORY_BYTES 56,895,045,632
NUM_CPUS 24
Service Statistics DB/Inst: REVACC/REVACC Snaps: 342-343
-> ordered by DB Time
Physical Logical
Service Name DB Time (s) DB CPU (s) Reads Reads
SYS$USERS 11,931.9 11,848.9 2,608,446 ##########
REVACC 0.0 0.0 0 0
SYS$BACKGROUND 0.0 0.0 25,685 34,096
Service Wait Class Stats DB/Inst: REVACC/REVACC Snaps: 342-343
-> Wait Class info for services in the Service Statistics section.
-> Total Waits and Time Waited displayed for the following wait
classes: User I/O, Concurrency, Administrative, Network
-> Time Waited (Wt Time) in centisecond (100th of a second)
Service Name
User I/O User I/O Concurcy Concurcy Admin Admin Network Network
Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time Total Wts Wt Time
SYS$USERS
525903 24088 9259 161 0 0 201012 16511
REVACC
0 0 0 0 0 0 0 0
SYS$BACKGROUND
36410 6292 46 35 0 0 0 0
I will provide the entire report if this is not sufficient.
Thanks
Anji
Edited by: user11907415 on Sep 17, 2009 6:39 AM -
Hi all,
I'm experiencing some performance issues when loading a page for the first time. The problem appears after updating a CSS file of my own custom theme. When I access a page of the application for the first time, it produces a 404 error even though the file truly exists. If I reload the page via F5 or access a different page, it loads correctly with times that are more than acceptable. In contrast, if I reload the page via Ctrl+F5 (to clear the cache), the same issue occurs.
I performed the same test with another test application using a default theme, and it worked properly.
My configuration is Apex 4.1.1 on Oracle 11.2 and Apex Listener 1.1.4 on a WebLogic server. The i.war file does not pack the images directory directly; it contains a configuration file which defines a virtual directory, so that I can modify files on the fly without restarting the listener. If I hit this problem and restart the listener, everything works again. The configuration file is:
<weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/weblogic-web-app">
  <!-- This element specifies the context path the static resources are served from -->
  <context-root>/i</context-root>
  <virtual-directory-mapping>
    <!-- This element specifies the location on disk where the static resources are located -->
    <local-path>/mnt/fs_servicios/apex/datos/usuarios/apex/applications/apex/images</local-path>
    <url-pattern>*</url-pattern>
  </virtual-directory-mapping>
</weblogic-web-app>
What could be the cause of the problem?
Edited by: RideTheStorm on Jan 17, 2013 9:57 AM
In the WebLogic log this entry appears:
<Jan 17, 2013 10:06:36 AM CET> <Error> <HTTP> <BEA-101019> <[ServletContext@406840767[app:i module:i.war path:/i spec-version:null]] Servlet failed with IOException
java.io.IOException: failed to read '199' bytes from InputStream; clen: -1 remaining: 199 count: 6743
at weblogic.servlet.internal.ChunkOutput.writeStream(ChunkOutput.java:417)
at weblogic.servlet.internal.ChunkOutputWrapper.writeStream(ChunkOutputWrapper.java:178)
at weblogic.servlet.internal.ServletOutputStreamImpl.writeStream(ServletOutputStreamImpl.java:520)
at weblogic.servlet.internal.ByteRangeHandler.write(ByteRangeHandler.java:103)
at weblogic.servlet.internal.ByteRangeHandler$SingleByteRangeHandler.sendRangeData(ByteRangeHandler.java:407)
Truncated. see log file for complete stacktrace
I will talk with the people who manage the server to find a solution for this issue. -
Jsp performance issue on Safari
Hi,
We are facing performance issues while loading a JSP page in the Safari browser on Macintosh. It takes around 2 minutes to load.
The same page used to take around 5 seconds in Netscape on Macintosh.
Has anyone faced a similar issue?
I'm new to JSP, so could you please provide some tips on how we can debug what is causing the issue and tune it?
Thanks,
Mini
On many projects I've worked on, including non-JDev ones, a general user interface rule was that you don't use an LOV if there are many more than 20-30 items. Think of this from a USER standpoint.
Note also that in Swing/DACF the combo box does NOT allow you to type the starting letters and automatically jump to the right place (i.e., entering WI in a state list doesn't take you to WISCONSIN). This is a Sun JDK issue. (Or has it changed in 1.3 in some manner, or is there a property set to allow this?)
As such, you have an application design issue more than a performance issue.
Good Luck
Performance issues with class loader on Windows server
We are observing some performance issues in our application. We are using WebLogic 11g with Java 6 on a Windows 2003 server.
The thread dumps indicate that many threads are waiting in a queue for the native file methods:
"[ACTIVE] ExecuteThread: '106' for queue: 'weblogic.kernel.Default (self-tuning)'" RUNNABLE
java.io.WinNTFileSystem.getBooleanAttributes(Native Method)
java.io.File.exists(Unknown Source)
weblogic.utils.classloaders.ClasspathClassFinder.getFileSource(ClasspathClassFinder.java:398)
weblogic.utils.classloaders.ClasspathClassFinder.getSourcesInternal(ClasspathClassFinder.java:347)
weblogic.utils.classloaders.ClasspathClassFinder.getSource(ClasspathClassFinder.java:316)
weblogic.application.io.ManifestFinder.getSource(ManifestFinder.java:75)
weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
weblogic.application.utils.CompositeWebAppFinder.getSource(CompositeWebAppFinder.java:71)
weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
weblogic.utils.classloaders.CodeGenClassFinder.getSource(CodeGenClassFinder.java:33)
weblogic.utils.classloaders.GenericClassLoader.findResource(GenericClassLoader.java:210)
weblogic.utils.classloaders.GenericClassLoader.getResourceInternal(GenericClassLoader.java:160)
weblogic.utils.classloaders.GenericClassLoader.getResource(GenericClassLoader.java:182)
java.lang.ClassLoader.getResourceAsStream(Unknown Source)
javax.xml.parsers.SecuritySupport$4.run(Unknown Source)
java.security.AccessController.doPrivileged(Native Method)
javax.xml.parsers.SecuritySupport.getResourceAsStream(Unknown Source)
javax.xml.parsers.FactoryFinder.findJarServiceProvider(Unknown Source)
javax.xml.parsers.FactoryFinder.find(Unknown Source)
javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)
org.ajax4jsf.context.ResponseWriterContentHandler.<init>(ResponseWriterContentHandler.java:48)
org.ajax4jsf.context.ViewResources$HeadResponseWriter.<init>(ViewResources.java:259)
org.ajax4jsf.context.ViewResources.processHeadResources(ViewResources.java:445)
org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:193)
org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:41)
org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:140)
On googling, this seems to be an issue with Java file handling on Windows servers, and I couldn't find a solution yet. Any recommendation or pointer is appreciated.
Hi shubhu,
I just analyzed your partial thread dump data. The problem is that the ajax4jsf framework's ResponseWriterContentHandler internally triggers a new instance of the DocumentBuilderFactory every time, causing heavy IO contention because of class loader / JAR file search operations.
Too many of these IO operations under heavy load will create excessive contention and severe performance degradation, regardless of the OS you are running your JVM on.
Please review the link below and see if it is related to your problem. This is a known issue in the JBoss JIRA when using RichFaces / Ajax4jsf.
https://issues.jboss.org/browse/JBPAPP-6166
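The general cure for this class of problem, independent of JSF specifics, is to build the expensive factory once and reuse it. A small language-neutral sketch in Python (the class here is a stand-in for the costly factory, not the real javax.xml API):

```python
import functools

CREATION_COUNT = 0

class DocumentBuilderFactory:          # stand-in for an expensive-to-build factory
    def __init__(self):
        global CREATION_COUNT
        CREATION_COUNT += 1            # pretend this does classpath/JAR scanning

@functools.lru_cache(maxsize=1)
def get_factory():
    """Create the factory once; every later call returns the cached instance."""
    return DocumentBuilderFactory()

for _ in range(1000):                  # simulate 1000 requests
    factory = get_factory()

print(CREATION_COUNT)  # 1 -- the factory was built a single time
```

The thread dump above shows exactly the per-request construction cost this pattern avoids: every `newInstance` call triggers a class loader / JAR file search.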
Regards,
P-H
http://javaeesupportpatterns.blogspot.com/ -
Performance issues with Planning data load & Agg in 11.1.2.3.500
We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade, we face performance issues with one of our Planning jobs (e.g., Job E). It takes 3x the time to complete in our new environment (11.1.2.3) compared to the old one (11.1.1.3). This job loads the actual data and then does the aggregation. The pattern we noticed is: if we run a restructure on the application and execute this job immediately, it completes in the same time as on 11.1.1.3. However, in current production (11.1.1.3) the jobs run in the sequence Job A -> Job B -> Job C -> Job D -> Job E and complete on time, whereas if we run the same test in 11.1.2.3 in the above sequence it takes 3x the time. We don't have a window to restructure the application before running Job E every time in production. The specs of the new environment are much higher than the old one.
We have Essbase clustering (MS active/passive) in the new environment and the files are stored on a SAN drive. Could this be the cause? Has anyone faced performance issues in a clustered environment?
Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a BIG difference in timing.
-
Performance issues with data warehouse loads
We have performance issues with our data warehouse ETL load process. I have run
analyze and dbms_stats and checked the database environment. What else can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
Scott
Hi,
You should analyze the DB after you have loaded the tables.
Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
If yes:
Make sure your sequence caches (ALTER SEQUENCE s CACHE 10000).
Drop all unneeded indexes while loading, and disable triggers if possible.
How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
Is it possible to use a direct load? Or do you already direct load?
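The batching aspect of this advice can be illustrated generically. A minimal Python/sqlite3 sketch (the staging table name is invented) showing one bulk insert inside a single transaction instead of thousands of autocommitted single-row inserts:

```python
import sqlite3

rows = [(i, f"name{i}") for i in range(10_000)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dw_stage (id INTEGER, name TEXT)")

# One executemany inside a single transaction: the commit/redo overhead
# is paid once instead of 10,000 times.
with con:
    con.executemany("INSERT INTO dw_stage VALUES (?, ?)", rows)

print(con.execute("SELECT COUNT(*) FROM dw_stage").fetchone()[0])  # 10000
```

The same principle is what makes a direct-path load fast in Oracle: writes are batched past the per-row SQL engine overhead.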
Dim -
Accrual Reconciliation Load Run report performance issue.
We have significant performance issues when running the Accrual Reconciliation Load Run report. We had to cancel it after it had run for a day. Any idea how to resolve this?
We experienced a similar issue. The runtime of this report depends on the input parameters. Remember, your first run of this report is going to take a significant amount of time, and the subsequent runs will be much shorter.
But we had to apply the patches referred to in the MOS article to resolve the performance issue:
Accrual Reconciliation Load Run Has Slow Performance [ID 1490578.1]
Thanks,
Sunthar.... -
Query performance and data loading performance issues
What query performance issues do we need to take care of? Please explain and let me know the T-codes.
What data loading performance issues do we need to take care of? Please explain and let me know the T-codes.
Will reward full points.
Regards,
Guru
BW back end
Some Tips -
1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 Background Processing Job Management to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 ABAP/4 Run-time Analysis and then run the analysis for the transaction code RSA3 Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW BW IMG Menu on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
10) Analyze upload times to the PSA and identify long-running uploads. When you extract data using the PSA method, the data is written to PSA tables in the BW system. If your data volume is on the order of tens of millions of records, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it is faster to insert data into smaller database tables; it also speeds up PSA maintenance, for example deleting a portion of the data. You can set the size of each partition on the PSA parameters screen (transaction SPRO or RSCUSTV6), so that BW creates a new partition automatically when the threshold value is reached.
11) Debug any routines in the transfer and update rules, and eliminate single selects from them. Using SELECT SINGLE in custom ABAP routines to read database tables reduces performance considerably; it is better to use buffers and array operations. With buffers or array operations, the system reads the data from the database once, stores it in memory, and manipulates it there, which improves performance. Without them, the whole reading process runs against the database with many table accesses, and performance deteriorates. Extensive use of library transformations in the ABAP code also reduces performance, because these transformations are not compiled in advance and are carried out at run time.
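As a sketch of the difference, here is a typical routine pattern rewritten from a per-record SELECT SINGLE to one array fetch per data package. The names are illustrative: SOURCE_PACKAGE stands for the routine's input table, and MAKT/MATNR/MAKTX are just a familiar lookup example.

```abap
TYPES: BEGIN OF ty_text,
         matnr TYPE matnr,
         maktx TYPE maktx,
       END OF ty_text.

DATA: lt_text TYPE SORTED TABLE OF ty_text WITH UNIQUE KEY matnr,
      ls_text TYPE ty_text.
FIELD-SYMBOLS: <fs> LIKE LINE OF source_package.

" Bad: one database round trip per record.
" LOOP AT source_package ASSIGNING <fs>.
"   SELECT SINGLE maktx FROM makt INTO <fs>-maktx
"          WHERE matnr = <fs>-matnr AND spras = sy-langu.
" ENDLOOP.

" Better: one array fetch for the whole data package ...
IF source_package IS NOT INITIAL.
  SELECT matnr maktx FROM makt
    INTO TABLE lt_text
    FOR ALL ENTRIES IN source_package
    WHERE matnr = source_package-matnr
      AND spras = sy-langu.
ENDIF.

" ... then fast in-memory reads against the sorted buffer.
LOOP AT source_package ASSIGNING <fs>.
  READ TABLE lt_text INTO ls_text
       WITH TABLE KEY matnr = <fs>-matnr.
  IF sy-subrc = 0.
    <fs>-maktx = ls_text-maktx.
  ENDIF.
ENDLOOP.
```

The sorted table with a unique key turns each READ TABLE into a binary search in memory instead of a database access per row.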
12) Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for the dimension IDs. The number-range buffer is a parameter that defines how many sequential dimension IDs are held in memory. If you increase it before a high-volume data load, you reduce the number of reads from the dimension tables and hence speed up the upload. Do not forget to set the number-range values back to their original values after the upload. Use transaction SNRO to maintain the number-range buffer values for InfoCubes.
13) Drop the indexes before uploading high-volume data into InfoCubes, and regenerate them after the upload. The indexes on an InfoCube are optimized for reading data from the cube; if they exist during the upload, BW reads them and tries to insert each record according to the indexes, resulting in poor upload performance. You can automate dropping and regenerating the indexes through InfoPackage scheduling, or drop them manually on the Manage InfoCube screen in the Administrator Workbench.
14) IDoc (intermediate document) archiving improves extraction and loading performance and can be applied on both the BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
Hope it helps,
Chetan
Generic extraction loading and performance issues
Hi,
Can anyone give me details about generic extraction loading, as well as its performance issues?
Thanks in advance.
Regards,
Praveen

Hi,
When there is no suitable Business Content DataSource, we create a generic DataSource.
Using a generic DataSource, we can extract data from a single table or from multiple tables.
If the data is in a single table, we create a generic DataSource extracting from that table.
If the data to be extracted is spread over multiple tables, the relation between the tables is one-to-one, and there is a common field in the tables, we create a view on those tables and a generic DataSource extracting from the view.
If you want to extract data from different tables and there is no common field, we create an InfoSet on those tables and a generic DataSource extracting from the query.
If you want to extract data from different tables and the relation is one-to-many or many-to-many, we create a generic DataSource from a function module.
When we extract via a function module, the code has to be executed at run time to bring the data into BW, so it degrades loading performance.
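As a rough sketch, a generic extractor function module follows the pattern of the delivered template RSAX_BIW_GET_DATA_SIMPLE: open a database cursor on the first data call, then return one data package per call until the cursor is exhausted. The function name, source table, and simplified interface below are illustrative, not the full template:

```abap
FUNCTION z_biw_get_my_data.
* Simplified sketch modeled on the RSAX_BIW_GET_DATA_SIMPLE template.
  STATICS: s_cursor TYPE cursor,
           s_opened TYPE abap_bool.

  IF i_initflag = 'X'.        "initialization call: remember selections
    " store I_T_SELECT / I_MAXSIZE in static variables here
    RETURN.
  ENDIF.

  IF s_opened = abap_false.   "first data call: open the cursor
    OPEN CURSOR WITH HOLD s_cursor FOR
      SELECT * FROM zsales_items.   "illustrative source table
    s_opened = abap_true.
  ENDIF.

  " One FETCH per call returns one data package to BW.
  FETCH NEXT CURSOR s_cursor
        INTO CORRESPONDING FIELDS OF TABLE e_t_data
        PACKAGE SIZE i_maxsize.
  IF sy-subrc <> 0.
    CLOSE CURSOR s_cursor.
    RAISE no_more_data.       "tells BW there is nothing left to fetch
  ENDIF.
ENDFUNCTION.
```

Because this code runs once per data package, any filtering should be pushed into the WHERE clause of the cursor rather than done in ABAP afterwards; that is the main lever for keeping function-module extraction fast.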
regards, -
Hi All,
We have used a WSRP portlet in a WebCenter Portal page. The portlet was created using the JSF bridge from an ADF bounded task flow.
It is causing a performance issue: static content such as JS, CSS, and images is downloaded on every request, because the URLs contain portlet_id and a few other dynamic parameters such as resource_id, client_id, etc.
We are not able to cache this static content, since the URLs are dynamic, so these ADF-specific image, JS, and CSS files take a long time to load.
Sample URLs:
/<PORTAL_CONTEXT>/resourceproxy/~.clientId~3D-1~26resourceId~3Dresource-url~25253Dhttp~2525253A~2525252F~2525252F<10.*.*.*>~2525253A7020~2525252FportletProdApp~2525252Fafr~2525252Fring_60.gif~26locale~3Den~26checksum~3D3e839bc581d5ce6858c88e7cb3f17d073c0091c7/ring_60.gif
/<PORTAL_CONTEXT>/resourceproxy/~.clientId~3D-1~26resourceId~3Dresource-url~25253Dhttp~2525253A~2525252F~2525252F<10.*.*.*>~2525253A7020~2525252FportletProdApp~2525252Fafr~2525252Fpartition~2525252Fie~2525252Fn~2525252Fdefault~2525252Fopt~2525252Fimagelink-11.1.1.7.0-4251.js~26locale~3Den~26checksum~3Dd00da30a6bfc40b22f7be6d92d5400d107c41d12/imagelink-11.1.1.7.0-4251.js
Technologies used:
WebCenter Portal PS6
JDeveloper 11.1.1.7
Please suggest how this performance issue can be resolved.
Thanks.
Regards,
Digesh

Strange...
I can't reproduce this because I have issues with creating portlets. If I can solve that issue, I will do some testing and see if I can reproduce yours.
Can you create a new producer with a single portlet that uses a simple task flow and see if that works?
Are you also using business components in the task flows, or something similar? You can try removing some parts of the task flow and testing whether it still happens, so you can identify the component(s) that cause the issue.