FBRA performance problem
Dear gurus,
While running transaction FBRA it takes a long time because a large number of records exist in table BKORM.
One proposed solution is to delete all records from BKORM through F.63. Is there any side effect if we delete all records from table BKORM?
Also, which transaction code updates this table?
Please guide.
Hello,
What F.63 does is this: if somebody has requested correspondence through FB12 but it was never executed through F.61, it still remains as a request.
You can delete such requests using F.63. However, they will then no longer be available for spooling through F.61.
I do not think these have anything to do with the FBRA transaction code, unless you have made thousands of requests that were never printed.
In any case, I would suggest discussing with a Basis consultant the ways in which performance can be improved, such as resolving space problems, deleting old logs, archiving documents, refreshing the client, and addressing memory problems.
Regards,
Ravi
Similar Messages
-
Someone please help me. I am suffering from a performance problem in the code below. I would highly appreciate it if anyone could help me optimize it. It took 4.7 hours to run 150,000 records over 5 runs.
/**
 * <p>Title: COllaborative REcommender for SEARCH</p>
 * <p>Description: Collaborative recommendation for users by clustering user queries in a community</p>
 * <p>Copyright: Copyright (c) 2003</p>
 * <p>Company: </p>
 * @author Chan Soe Win, Nyein
 * @version 1.0
 */
import java.sql.*;
import java.io.*;
import java.util.*;
import java.util.StringTokenizer;
import java.math.*;
public class ClusterGene {
static String DBUrl = "jdbc:mysql://localhost/corec";
static Connection Conn = null;
static Statement Stmt = null;
static PreparedStatement getDt = null;
static PreparedStatement countCluster = null;
static PreparedStatement insCenter = null;
static PreparedStatement updCenter = null;
static PreparedStatement insCluster = null;
static PreparedStatement selectCenter=null;
static PreparedStatement countClusterMember = null;
static PreparedStatement getClusterMember = null;
static PreparedStatement redefCluster = null;
static int lastClusterID;
public ClusterGene(){
try{
Class.forName("com.mysql.jdbc.Driver").newInstance();
Conn = DriverManager.getConnection(DBUrl);
Conn.setAutoCommit(true);
Stmt = Conn.createStatement();
getDt = Conn.prepareStatement("SELECT Dt FROM tbl_url WHERE id=? AND type=?");
countCluster = Conn.prepareStatement("SELECT DISTINCT cluster_id FROM tbl_center");
insCenter = Conn.prepareStatement("INSERT INTO tbl_center(cluster_id, url_id,type_id,lf) VALUES(?,?,?,?)");
insCluster = Conn.prepareStatement("INSERT INTO tbl_cluster(cluster_id,query_id,sim_query,sim_title,sim_snippet,sim_result,sim_outlink,sim_inlink) VALUES (?,?,?,?,?,?,?,?)");
updCenter = Conn.prepareStatement("UPDATE tbl_center SET lf=? WHERE cluster_id=? AND url_id=? AND type_id=?");
selectCenter = Conn.prepareStatement("SELECT url_id, type_id,lf FROM tbl_center WHERE cluster_id=? AND url_id=? AND type_id=?");
countClusterMember= Conn.prepareStatement("SELECT count(DISTINCT(query_id)) FROM tbl_cluster WHERE cluster_id=?");
getClusterMember = Conn.prepareStatement("SELECT url_id,type_id,lf FROM tbl_lf WHERE query_id=?");
redefCluster = Conn.prepareStatement("UPDATE tbl_center SET lf=? WHERE cluster_id=?");
}catch (Exception e){
e.printStackTrace();
}
}
public static void main (String args[]){
ClusterGene cluster = new ClusterGene();
int clusterid=0;
double currentlf=0.0;
int termid,typeid;
double lf;
double sim_content,sim_link,sim_hybrid, sim_query,sim_title,sim_snippet,sim_result,sim_outlink,sim_inlink;
ArrayList lfidfArray;
HashMap lfidfmap;
lfidf lfidf = new lfidf();
lfidf member = new lfidf();
lfidf center = new lfidf();
//ResultSet countClusterRS=null;
//int countClusterNo=1;
lfidfArray = new ArrayList();
lfidfmap = new HashMap();
double lfsquare1,lfsquare2,lfsquare3,lfsquare4,lfsquare5,lfsquare6,sumq1q2type1,sumq1q2type2,sumq1q2type3,sumq1q2type4,sumq1q2type5,sumq1q2type6 ;
//select a new lfidfArrayquery
long totaltimetaken = 0;
for (int j=0;j<10;j++){
for (int k=1;k<101;k++){
long start = System.currentTimeMillis();
System.out.print(k+",");
lfsquare1 = 0;
lfsquare2 = 0;
lfsquare3 = 0;
lfsquare4 = 0;
lfsquare5 = 0;
lfsquare6 = 0;
sim_content= 0.0;
sim_link=0.0;
sim_hybrid=0.0;
sim_query =0.0;
sim_title= 0.0;
sim_snippet=0.0;
sim_result=0.0;
sim_outlink=0.0;
sim_inlink=0.0;
try {
//read a case (a query) from query table with term frequency
String sql = "SELECT url_id, type_id, lf FROM tbl_lf WHERE query_id="+k;
//System.out.println(sql);
ResultSet rs= Stmt.executeQuery(sql);
//System.out.println("RS "+rs);
typeid=0;
lfidfmap = new HashMap();
while(rs.next()){
lfidf = new lfidf();
termid = rs.getInt("url_id");
typeid = rs.getInt("type_id");
lfidf.query_id = k;
lfidf.url_id = termid;
lfidf.type_id = typeid;
lf = rs.getDouble("lf");
getDt.setInt(1, termid);
getDt.setInt(2, typeid);
ResultSet dtrs = getDt.executeQuery();
dtrs.next();
lfidf.lfidf = log2(1+lf)*log2(10000/dtrs.getDouble("Dt"));
Integer key = new Integer(termid);
lfidfmap.put(key, lfidf);
if (typeid == 1){
lfsquare1 += lfidf.lfidf * lfidf.lfidf;
}else if (typeid == 2){
lfsquare2 += lfidf.lfidf * lfidf.lfidf;
}else if (typeid == 3){
lfsquare3 += lfidf.lfidf * lfidf.lfidf;
}else if (typeid == 4){
lfsquare4 += lfidf.lfidf * lfidf.lfidf;
}else if (typeid == 5){
lfsquare5 += lfidf.lfidf * lfidf.lfidf;
}else if (typeid == 6){
lfsquare6 += lfidf.lfidf * lfidf.lfidf;
}
}// end while(rs.next())
//compare with all existing cluster centers
boolean newseed = true;
try{
ResultSet clcountrs = countCluster.executeQuery();
while (clcountrs.next()){
//select a center
clusterid = clcountrs.getInt("cluster_id");
lastClusterID = clusterid;
//select a cluster center
lfidf clfidf = new lfidf();
int cltermid, cltypeid;
double cllfidf;
double cllfsquare1, cllfsquare2, cllfsquare3, cllfsquare4, cllfsquare5, cllfsquare6;
sumq1q2type1 = 0;
sumq1q2type2 = 0;
sumq1q2type3 = 0;
sumq1q2type4 = 0;
sumq1q2type5 = 0;
sumq1q2type6 = 0;
cllfsquare1 = 0;
cllfsquare2 = 0;
cllfsquare3 = 0;
cllfsquare4 = 0;
cllfsquare5 = 0;
cllfsquare6 = 0;
try{
String clqry = "SELECT url_id, type_id, lf FROM tbl_center WHERE cluster_id=" + clusterid;
ResultSet clrs = Stmt.executeQuery(clqry);
while (clrs.next()){
cltermid = clrs.getInt("url_id");
cltypeid = clrs.getInt("type_id");
cllfidf = clrs.getDouble("lf");
Integer tindex = new Integer(cltermid);
if (lfidfmap.containsKey(tindex)){
clfidf = (lfidf) lfidfmap.get(tindex);
if (clfidf != null){
if (clfidf.type_id == 1){
sumq1q2type1 += cllfidf * clfidf.lfidf;
cllfsquare1 += cllfidf * cllfidf;
}else if (clfidf.type_id == 2){
sumq1q2type2 += cllfidf * clfidf.lfidf;
cllfsquare2 += cllfidf * cllfidf;
}else if (clfidf.type_id == 3){
sumq1q2type3 += cllfidf * clfidf.lfidf;
cllfsquare3 += cllfidf * cllfidf;
}else if (clfidf.type_id == 4){
sumq1q2type4 += cllfidf * clfidf.lfidf;
cllfsquare4 += cllfidf * cllfidf;
}else if (clfidf.type_id == 5){
sumq1q2type5 += cllfidf * clfidf.lfidf;
cllfsquare5 += cllfidf * cllfidf;
}else if (clfidf.type_id == 6){
sumq1q2type6 += cllfidf * clfidf.lfidf;
cllfsquare6 += cllfidf * cllfidf;
}
}// if (clfidf != null)
}
}// end while (clrs.next())
}catch (Exception e){
e.printStackTrace();
}
if (cllfsquare1 > 0 && lfsquare1 > 0){
sim_query = sumq1q2type1 / Math.sqrt(cllfsquare1 * lfsquare1);
}else {
sim_query = 0.0;
}
if (cllfsquare2 > 0 && lfsquare2 > 0){
sim_title = sumq1q2type2 / Math.sqrt(cllfsquare2 * lfsquare2);
}else {
sim_title = 0.0;
}
if (cllfsquare3 > 0 && lfsquare3 > 0){
sim_snippet = sumq1q2type3 / Math.sqrt(cllfsquare3 * lfsquare3);
}else {
sim_snippet = 0.0;
}
if (cllfsquare4 > 0 && lfsquare4 > 0){
sim_inlink = sumq1q2type4 / Math.sqrt(cllfsquare4 * lfsquare4);
}else {
sim_inlink = 0.0;
}
if (cllfsquare5 > 0 && lfsquare5 > 0){
sim_outlink = sumq1q2type5 / Math.sqrt(cllfsquare5 * lfsquare5);
}else {
sim_outlink = 0.0;
}
if (cllfsquare6 > 0 && lfsquare6 > 0){
sim_result = sumq1q2type6 / Math.sqrt(cllfsquare6 * lfsquare6);
}else {
sim_result = 0.0;
}
sim_content = ((1.0/3.0) * sim_query) + ((1.0/3.0) * sim_title) + ((1.0/3.0) * sim_snippet);
sim_link = ((1.0/3.0) * sim_result) + ((1.0/3.0) * sim_outlink) + ((1.0/3.0) * sim_inlink);
sim_hybrid = (0.75 * sim_content) + (0.25 * sim_link);
if (sim_hybrid > 0.25){
newseed = false;
//the case joins this cluster, so redefine the cluster center
for (Iterator f = lfidfmap.keySet().iterator(); f.hasNext();){
Object key = f.next();
member = (lfidf) lfidfmap.get(key);
getDt.setInt(1, member.url_id);
getDt.setInt(2, member.type_id);
ResultSet centrs = getDt.executeQuery();
centrs.next();
double centerlfidf = log2(1 + member.lfidf) * log2(10000 / centrs.getDouble("Dt"));
selectCenter.setInt(1, clusterid);
selectCenter.setInt(2, member.url_id);
selectCenter.setInt(3, member.type_id);
ResultSet selectRS = selectCenter.executeQuery();
if (selectRS.next()){
currentlf = selectRS.getDouble("lf") + centerlfidf;
try{
updCenter.setDouble(1, currentlf / 2);
updCenter.setInt(2, clusterid);
updCenter.setInt(3, member.url_id);
updCenter.setInt(4, member.type_id);
updCenter.executeUpdate();
}catch (Exception e){
e.printStackTrace();
}
}else {
try{
insCenter.setInt(1, clusterid);
insCenter.setInt(2, member.url_id);
insCenter.setInt(3, member.type_id);
insCenter.setDouble(4, centerlfidf);
insCenter.executeUpdate();
}catch (Exception e){
e.printStackTrace();
}
}
}// end of for iterator
}//end of if (sim_hybrid > 0.25)
}// end of while (clcountrs.next())
}catch (Exception e){
e.printStackTrace();
}
//if the new case does not fit into any of the existing clusters, it becomes a new cluster itself
if (newseed){
String lfsql = "SELECT query_id, url_id, type_id, lf FROM tbl_lf WHERE query_id=" + k;
ResultSet lfrs = Stmt.executeQuery(lfsql);
int count = 0;
double centerlfidf = 0.0;
while (lfrs.next()){
count++;
getDt.setInt(1, lfrs.getInt("url_id"));
getDt.setInt(2, lfrs.getInt("type_id"));
ResultSet centerrs = getDt.executeQuery();
centerrs.next();
centerlfidf = log2(1 + lfrs.getDouble("lf")) * log2(10000 / centerrs.getDouble("Dt"));
if (centerlfidf > 0){
insCenter.setInt(1, k);
insCenter.setInt(2, lfrs.getInt("url_id"));
insCenter.setInt(3, lfrs.getInt("type_id"));
insCenter.setDouble(4, centerlfidf);
insCenter.execute();
newseed = false;
}
}
}
}catch (Exception e){
e.printStackTrace();
}
totaltimetaken += (System.currentTimeMillis() - start) / 1000;
}//end of for loop k
}//end of for loop j
System.out.println("Total time taken is "+ totaltimetaken +" seconds");
}// end of main

private static double log2(double d) {
return Math.log(d)/Math.log(2.0);
}
}

I haven't tried understanding the flow of your program, just giving you database & JDBC tips....
1. Try creating database indexes for the fields in your SQL where clauses... may help improve querying.
2. If you've got a huge number of SQL insert statements then you should look at auto-commit.
By default, the database connection sets "auto-commit" to on.
So, for every single SQL insert or update, a database commit is performed.
If you're inserting/updating a huge batch of records, try setting auto-commit to off, then explicitely call
"commit()" after maybe 50 inserts....
We got a serious performance gain from that tip recently.
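The commit-every-N-inserts pattern from tip 2 can be sketched as follows. The JDBC calls themselves are shown only as comments (the connection, statement, and table are hypothetical); the commit-scheduling arithmetic, which is the point of the tip, is real and runnable:

```java
public class BatchCommitSketch {
    // Commit once per batch instead of once per insert. Returns how many
    // commits a given batch size produces for a given number of inserts.
    static int commitCount(int totalInserts, int batchSize) {
        int commits = 0;
        for (int i = 1; i <= totalInserts; i++) {
            // stmt.executeUpdate("INSERT INTO tbl_center ...");  // hypothetical insert
            if (i % batchSize == 0) {
                commits++;  // conn.commit();
            }
        }
        if (totalInserts % batchSize != 0) {
            commits++;      // conn.commit(); flush the final partial batch
        }
        return commits;
    }

    public static void main(String[] args) {
        // In real JDBC code, conn.setAutoCommit(false) would precede the loop.
        System.out.println(commitCount(150000, 50));  // 3000 commits instead of 150000
    }
}
```

With auto-commit on, each of the 150,000 inserts pays the full cost of a database commit; batching in groups of 50 reduces that to 3,000 commits.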
As mentioned, I didn't look at the flow/logic of your code, just the DB stuff.
regards,
Owen -
I experience similar performance problems in Photoshop, Illustrator, and After Effects. The same file on the same PC works smoothly in CS5, while in CC the system gets slow and the software sometimes crashes. Adobe Bridge CC does not work... only my old Bridge CS5.
Noel Carboni answered in the forum... but I did not find a way to answer back... Is the text below the "Photoshop CC's Help - System Info" Noel Carboni asked for?
Sincerely
xxxxxxxx
Adobe Photoshop Version: 14.2.1 (14.2.1 20140207.r.570 2014/02/07:23:00:00) x64
Operating system: Windows 7 64-bit
Version: 6.1 Service Pack 1
System architecture: Intel CPU family: 6, model: 7, stepping: 7 with MMX, SSE (integer), SSE FP, SSE2, SSE3, SSE4.1
Physical processors: 4
Processor clock speed: 2833 MHz
Built-in memory: 8191 MB
Free memory: 5615 MB
Memory available to Photoshop: 7178 MB
Memory used by Photoshop: 60 %
Image tile size: 1024 KB
Image cache levels: 4
Font preview: Medium
Text composer: Latin
Display: 1
Display bounds: top=0, left=0, bottom=1200, right=1920
Display: 2
Display bounds: top=0, left=1920, bottom=1200, right=3840
Draw with OpenGL: Enabled.
OpenGL allow old GPUs: Not detected.
OpenGL drawing mode: Advanced
OpenGL allow normal mode: True.
OpenGL allow advanced mode: True.
AIFCoreInitialized=1
AIFOGLInitialized=1
OGLContextCreated=1
NumGPUs=1
gpu[0].OGLVersion="3.0"
gpu[0].MemoryMB=1023
gpu[0].RectTextureSize=16384
gpu[0].Renderer="Quadro 2000/PCIe/SSE2"
gpu[0].RendererID=3544
gpu[0].Vendor="NVIDIA Corporation"
gpu[0].VendorID=4318
gpu[0].HasNPOTSupport=1
gpu[0].DriverVersion="9.18.13.697"
gpu[0].Driver="nvd3dumx.dll,nvwgf2umx.dll,nvwgf2umx.dll,nvd3dum,nvwgf2um,nvwgf2um"
gpu[0].DriverDate="20121002000000.000000-000"
gpu[0].CompileProgramGLSL=1
gpu[0].TestFrameBuffer=1
gpu[0].OCLPresent=1
gpu[0].OCLVersion="1.1"
gpu[0].CUDASupported=1
gpu[0].CUDAVersion="4.2.1"
gpu[0].OCLBandwidth=3.21234e+010
gpu[0].glGetString[GL_SHADING_LANGUAGE_VERSION]="1.30 NVIDIA via Cg compiler"
gpu[0].glGetProgramivARB[GL_FRAGMENT_PROGRAM_ARB][GL_MAX_PROGRAM_INSTRUCTIONS_ARB]=[65536 ]
gpu[0].glGetIntegerv[GL_MAX_TEXTURE_UNITS]=[4]
gpu[0].glGetIntegerv[GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS]=[160]
gpu[0].glGetIntegerv[GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS]=[32]
gpu[0].glGetIntegerv[GL_MAX_TEXTURE_IMAGE_UNITS]=[32]
gpu[0].glGetIntegerv[GL_MAX_DRAW_BUFFERS]=[8]
gpu[0].glGetIntegerv[GL_MAX_VERTEX_UNIFORM_COMPONENTS]=[4096]
gpu[0].glGetIntegerv[GL_MAX_FRAGMENT_UNIFORM_COMPONENTS]=[2048]
gpu[0].glGetIntegerv[GL_MAX_VARYING_FLOATS]=[124]
gpu[0].glGetIntegerv[GL_MAX_VERTEX_ATTRIBS]=[16]
gpu[0].extension[AIF::OGL::GL_ARB_VERTEX_PROGRAM]=1
gpu[0].extension[AIF::OGL::GL_ARB_FRAGMENT_PROGRAM]=1
gpu[0].extension[AIF::OGL::GL_ARB_VERTEX_SHADER]=1
gpu[0].extension[AIF::OGL::GL_ARB_FRAGMENT_SHADER]=1
gpu[0].extension[AIF::OGL::GL_EXT_FRAMEBUFFER_OBJECT]=1
gpu[0].extension[AIF::OGL::GL_ARB_TEXTURE_RECTANGLE]=1
gpu[0].extension[AIF::OGL::GL_ARB_TEXTURE_FLOAT]=1
gpu[0].extension[AIF::OGL::GL_ARB_OCCLUSION_QUERY]=1
gpu[0].extension[AIF::OGL::GL_ARB_VERTEX_BUFFER_OBJECT]=1
gpu[0].extension[AIF::OGL::GL_ARB_SHADER_TEXTURE_LOD]=1
License type: Subscription
Serial number: 94070096362932215641
Application folder: C:\Program Files\Adobe\Adobe Photoshop CC (64 Bit)\
Temporary file path: C:\Users\janine\AppData\Local\Temp\
Photoshop scratch has asynchronous I/O enabled
Scratch volume(s):
D:\, 1.36 TB, 635.2 GB free
Required plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CC (64 Bit)\Required\Plug-Ins\
Primary plug-ins folder: C:\Program Files\Adobe\Adobe Photoshop CC (64 Bit)\Plug-ins\
Installed components
ACE.dll ACE 2013/10/29-11:47:16 79.548223 79.548223
adbeape.dll Adobe APE 2013/02/04-09:52:32 0.1160850 0.1160850
AdobeLinguistic.dll Adobe Linguisitc Library 7.0.0
AdobeOwl.dll Adobe Owl 2013/10/25-12:15:59 5.0.24 79.547804
AdobePDFL.dll PDFL 2013/10/29-11:47:16 79.508720 79.508720
AdobePIP.dll Adobe Product Improvement Program 7.0.0.1786
AdobeXMP.dll Adobe XMP Core 2013/10/29-11:47:16 79.154911 79.154911
AdobeXMPFiles.dll Adobe XMP Files 2013/10/29-11:47:16 79.154911 79.154911
AdobeXMPScript.dll Adobe XMP Script 2013/10/29-11:47:16 79.154911 79.154911
adobe_caps.dll Adobe CAPS 7,0,0,21
AGM.dll AGM 2013/10/29-11:47:16 79.548223 79.548223
ahclient.dll AdobeHelp Dynamic Link Library 1,8,0,31
aif_core.dll AIF 5.0 79.534508
aif_ocl.dll AIF 5.0 79.534508
aif_ogl.dll AIF 5.0 79.534508
amtlib.dll AMTLib (64 Bit) 7.0.0.249 BuildVersion: 7.0; BuildDate: Thu Nov 14 2013 15:55:50) 1.000000
ARE.dll ARE 2013/10/29-11:47:16 79.548223 79.548223
AXE8SharedExpat.dll AXE8SharedExpat 2011/12/16-15:10:49 66.26830 66.26830
AXEDOMCore.dll AXEDOMCore 2011/12/16-15:10:49 66.26830 66.26830
Bib.dll BIB 2013/10/29-11:47:16 79.548223 79.548223
BIBUtils.dll BIBUtils 2013/10/29-11:47:16 79.548223 79.548223
boost_date_time.dll DVA Product 7.0.0
boost_signals.dll DVA Product 7.0.0
boost_system.dll DVA Product 7.0.0
boost_threads.dll DVA Product 7.0.0
cg.dll NVIDIA Cg Runtime 3.0.00007
cgGL.dll NVIDIA Cg Runtime 3.0.00007
CIT.dll Adobe CIT 2.1.6.30929 2.1.6.30929
CITThreading.dll Adobe CITThreading 2.1.6.30929 2.1.6.30929
CoolType.dll CoolType 2013/10/29-11:47:16 79.548223 79.548223
dvaaudiodevice.dll DVA Product 7.0.0
dvacore.dll DVA Product 7.0.0
dvamarshal.dll DVA Product 7.0.0
dvamediatypes.dll DVA Product 7.0.0
dvaplayer.dll DVA Product 7.0.0
dvatransport.dll DVA Product 7.0.0
dvaunittesting.dll DVA Product 7.0.0
dynamiclink.dll DVA Product 7.0.0
ExtendScript.dll ExtendScript 2013/10/30-13:12:12 79.546835 79.546835
FileInfo.dll Adobe XMP FileInfo 2013/10/25-03:51:33 79.154511 79.154511
filter_graph.dll AIF 5.0 79.534508
icucnv40.dll International Components for Unicode 2011/11/15-16:30:22 Build gtlib_3.0.16615
icudt40.dll International Components for Unicode 2011/11/15-16:30:22 Build gtlib_3.0.16615
imslib.dll IMSLib DLL 7.0.0.145
JP2KLib.dll JP2KLib 2013/10/29-11:47:16 79.248139 79.248139
libifcoremd.dll Intel(r) Visual Fortran Compiler 10.0 (Update A)
libiomp5md.dll Intel(R) OMP Runtime Library 5.0
libmmd.dll Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler 12.0
LogSession.dll LogSession 2.1.2.1785
mediacoreif.dll DVA Product 7.0.0
MPS.dll MPS 2013/10/29-11:47:16 79.535029 79.535029
msvcm80.dll Microsoft® Visual Studio® 2005 8.00.50727.6195
msvcm90.dll Microsoft® Visual Studio® 2008 9.00.30729.1
msvcp100.dll Microsoft® Visual Studio® 2010 10.00.40219.1
msvcp80.dll Microsoft® Visual Studio® 2005 8.00.50727.6195
msvcp90.dll Microsoft® Visual Studio® 2008 9.00.30729.1
msvcr100.dll Microsoft® Visual Studio® 2010 10.00.40219.1
msvcr80.dll Microsoft® Visual Studio® 2005 8.00.50727.6195
msvcr90.dll Microsoft® Visual Studio® 2008 9.00.30729.1
PatchMatch.dll PatchMatch 2013/10/29-11:47:16 79.542390 79.542390
pdfsettings.dll Adobe PDFSettings 1.04
Photoshop.dll Adobe Photoshop CC CC
Plugin.dll Adobe Photoshop CC CC
PlugPlugOwl.dll Adobe(R) CSXS PlugPlugOwl Standard Dll (64 bit) 4.2.0.36
PSArt.dll Adobe Photoshop CC CC
PSViews.dll Adobe Photoshop CC CC
SCCore.dll ScCore 2013/10/30-13:12:12 79.546835 79.546835
ScriptUIFlex.dll ScriptUIFlex 2013/10/30-13:12:12 79.546835 79.546835
svml_dispmd.dll Intel(r) C Compiler, Intel(r) C++ Compiler, Intel(r) Fortran Compiler 12.0
tbb.dll Intel(R) Threading Building Blocks for Windows 4, 1, 2012, 1003
tbbmalloc.dll Intel(R) Threading Building Blocks for Windows 4, 1, 2012, 1003
updaternotifications.dll Adobe Updater Notifications Library 7.0.1.102 (BuildVersion: 1.0; BuildDate: BUILDDATETIME) 7.0.1.102
WRServices.dll WRServices Mon Feb 25 2013 16:09:10 Build 0.19078 0.19078
Required plug-ins (names as reported by the German installation):
3D Studio 14.2.1 (14.2.1 x001)
Adaptive Weitwinkelkorrektur 14.2.1
Aquarell 14.2.1
Arithmetisches Mittel 14.2.1 (14.2.1 x001)
Basrelief 14.2.1
Bereich 14.2.1 (14.2.1 x001)
Bildpaket-Filter 14.2.1 (14.2.1 x001)
Blendenflecke 14.2.1
BMP 14.2.1
Buntglas-Mosaik 14.2.1
Buntstiftschraffur 14.2.1
Camera Raw 8.4.1
Camera Raw-Filter 8.4.1
Chrom 14.2.1
Cineon 14.2.1 (14.2.1 x001)
Collada 14.2.1 (14.2.1 x001)
CompuServe GIF 14.2.1
Conté-Stifte 14.2.1
De-Interlace 14.2.1
Diagonal verwischen 14.2.1
Dicom 14.2.1
Differenz-Wolken 14.2.1 (14.2.1 x001)
Distorsion 14.2.1
Dunkle Malstriche 14.2.1
Durchschnitt berechnen 14.2.1 (14.2.1 x001)
Eazel Acquire 14.2.1 (14.2.1 x001)
Entropie 14.2.1 (14.2.1 x001)
Erfassungsbereich 14.2.1 (14.2.1 x001)
Extrudieren 14.2.1
Farbpapier-Collage 14.2.1
Farbraster 14.2.1
Fasern 14.2.1
FastCore-Routinen 14.2.1 (14.2.1 x001)
Feuchtes Papier 14.2.1
Filtergalerie 14.2.1
Flash 3D 14.2.1 (14.2.1 x001)
Fluchtpunkt 14.2.1
Fotokopie 14.2.1
Fotos freistellen und gerade ausrichten (Filter) 14.2.1
Fotos freistellen und gerade ausrichten 14.2.1 (14.2.1 x001)
Fresko 14.2.1
Für Web speichern 14.2.1
Gekreuzte Malstriche 14.2.1
Gerissene Kanten 14.2.1
Glas 14.2.1
Google Earth 4 14.2.1 (14.2.1 x001)
Grobe Malerei 14.2.1
Grobes Pastell 14.2.1
HDRMergeUI 14.2.1
IFF-Format 14.2.1
JPEG 2000 14.2.1
Kacheleffekt 14.2.1
Kacheln 14.2.1
Kanten betonen 14.2.1
Kohleumsetzung 14.2.1
Konturen mit Tinte nachzeichnen 14.2.1
Körnung & Aufhellung 14.2.1
Körnung 14.2.1
Kräuseln 14.2.1
Kreide & Kohle 14.2.1
Kreuzschraffur 14.2.1
Kristallisieren 14.2.1
Kunststofffolie 14.2.1
Kurtosis 14.2.1 (14.2.1 x001)
Leuchtende Konturen 14.2.1
Malgrund 14.2.1
Malmesser 14.2.1
Matlab-Vorgang 14.2.1 (14.2.1 x001)
Maximum 14.2.1 (14.2.1 x001)
Median 14.2.1 (14.2.1 x001)
Mehrprozessorunterstützung 14.2.1 (14.2.1 x001)
Mezzotint 14.2.1
Minimum 14.2.1 (14.2.1 x001)
Mit Struktur versehen 14.2.1
Mit Wasserzeichen versehen 4.0
MMXCore-Routinen 14.2.1 (14.2.1 x001)
Neigung 14.2.1 (14.2.1 x001)
Neonschein 14.2.1
NTSC-Farben 14.2.1 (14.2.1 x001)
Objektivkorrektur 14.2.1
Objektivunschärfe 14.2.1
Ölfarbe 14.2.1
Ölfarbe getupft 14.2.1
OpenEXR 14.2.1
Ozeanwellen 14.2.1
Patchwork 14.2.1
PCX 14.2.1 (14.2.1 x001)
Pfade -> Illustrator 14.2.1
Photoshop 3D-Modul 14.2.1 (14.2.1 x001)
Photoshop Touch 14.0
Pixar 14.2.1 (14.2.1 x001)
PNG 14.2.1
Polarkoordinaten 14.2.1
Portable Bit Map 14.2.1 (14.2.1 x001)
Prägepapier 14.2.1
Punktieren 14.2.1
Punktierstich 14.2.1
Radialer Weichzeichner 14.2.1
Radiance 14.2.1 (14.2.1 x001)
Rasterungseffekt 14.2.1
Risse 14.2.1
Schwamm 14.2.1
Schwingungen 14.2.1
Selektiver Weichzeichner 14.2.1
Solarisation 14.2.1 (14.2.1 x001)
Spritzer 14.2.1
Standardabweichung 14.2.1 (14.2.1 x001)
Stempel 14.2.1
STL 14.2.1 (14.2.1 x001)
Strichumsetzung 14.2.1
Strudel 14.2.1
Stuck 14.2.1
Sumi-e 14.2.1
Summe 14.2.1 (14.2.1 x001)
Targa 14.2.1
Tontrennung & Kantenbetonung 14.2.1
Unterstützung für Skripten 14.2.1
Varianz 14.2.1 (14.2.1 x001)
Variationen 14.2.1 (14.2.1 x001)
Verbiegen 14.2.1
Verflüssigen 14.2.1
Versetzen 14.2.1
Verwackelte Striche 14.2.1
Verwacklung reduzieren 14.2.1
Wasserzeichen anzeigen 4.0
Wavefront|OBJ 14.2.1 (14.2.1 x001)
Weiches Licht 14.2.1
Wellen 14.2.1
WIA-Unterstützung 14.2.1 (14.2.1 x001)
Windeffekt 14.2.1
Wireless Bitmap 14.2.1 (14.2.1 x001)
Wölben 14.2.1
Wolken 14.2.1 (14.2.1 x001)
Optional and third-party plug-ins: NONE
Plug-ins that failed to load: NONE
Flash:
Mini Bridge
Kuler
Adobe Exchange
Installed TWAIN devices: NONE

Tonakawa73, I would also recommend reviewing Install and update apps - https://helpx.adobe.com/creative-cloud/help/install-apps.html for more information on how to install the Creative applications included with your membership.
-
Performance problem in Oracle DB after Solaris 10 recommended patching
Dear All,
We are facing some performance problems recently in our Oracle DB queries and reports. We applied the Solaris 10 recommended patchset on April 24th. Our monthly report generation activity has now been running for the last 3 days (May 11, 12, and 13) and the application team is complaining of degraded performance. I can see that CPU usage on the server is about 100% during report generation, but everything else is fine. Free memory is 15 GB, all mountpoints are under 80%, and there is no other hardware issue either.
Now the application team suspects that last month's patching could be the cause of the problem, because until last month they performed this activity without any performance degradation.
I just need to be sure that the Solaris 10 recommended patches are not causing this issue. Have any of you ever faced such an issue after patch installation?
The server is a V490 with 8 CPUs / 32 GB RAM. Any hints will be appreciated.
Thanks in Advance.
Edited by: 1005720 on May 13, 2013 9:58 PM

I did not archive the output of prstat, but I have the output of top with me:
195 processes: 175 sleeping, 13 running, 1 zombie, 6 on cpu
CPU states: 0.0% idle, 78.4% user, 21.6% kernel, 0.0% iowait, 0.0% swap
Memory: 32G phys mem, 16G free mem, 70G total swap, 70G free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
18964 oracle 39 41 0 6389M 5488M cpu/1 2:50 15.04% oracle
19638 oracle 26 59 0 6390M 5490M run 0:27 8.58% oracle
19648 oracle 36 31 0 6418M 5516M run 0:21 5.92% oracle
19023 oracle 34 30 0 6398M 5498M run 0:37 5.88% oracle
7229 oracle 38 30 0 6427M 5514M run 234:24 5.69% oracle
6912 oracle 45 31 0 6395M 5483M cpu/16 218:04 5.31% oracle
7216 oracle 28 31 0 6430M 5517M run 232:40 5.04% oracle
19644 oracle 35 40 0 6390M 5490M cpu/17 0:24 4.74% oracle
6897 oracle 112 30 0 6392M 5479M run 236:38 4.52% oracle
3813 oracle 106 31 0 6428M 5528M run 100:11 4.43% oracle
18970 oracle 234 31 0 6398M 5499M cpu/0 1:45 4.15% oracle
19749 oracle 1 32 0 6386M 5486M run 0:00 4.05% oracle
19646 oracle 23 31 0 6405M 5505M run 0:25 3.61% oracle
18972 oracle 11 32 0 6398M 5497M run 2:18 3.61% oracle
7220 oracle 258 32 0 6468M 5555M cpu/3 113:13 3.56% oracle
During the entire issue there was no %iowait. -
Performance problem with views on the original table
I have views built on an original table. After versioning the original table, my view becomes a view on a view. I have experienced performance problems with queries against the view built on my original table.
Help!
My e-mail is [email protected]
Ashley

Vaibhav Tiwari wrote:
Hi Johannes,
>
> Seems the problem is with your IE. If you are using IE8 or 9, try to run the application in compatibility mode.
>
> To do that use the compability mode button given besides the address bar.
>
> Regards,
> Vaibhav
Why would you assume the problem is Internet Explorer? The poster stated that the problem occurs in both IE and Firefox: "the problems occurs in both, IE and Firefox".
I would think that something on the server side, as suggested, is the more likely candidate. Is there anything in WDDOMODIFYVIEW at all related to the table? Are there any property bindings to the table that could affect the display? -
6330 LE3 - ATA mode vs. performance problems
note: see sig for specs.
Alright, a little while back my main hard drive (the 10 GB) started to perform as if it were in PIO mode, yet everything I run to test it says it is using ATA(/DMA)66 mode. My secondary drive uses ATA100 and runs fine. The primary gets 1.8 MB/sec max; the secondary gets ~30 MB/sec.
Both are hooked to the same _new_ cable (I replaced it, without effect) on the primary IDE channel. Swapping to the secondary IDE channel does nothing more than make Win2k look for my CD-ROM drives.
Hooking up a single drive yields normal performance.
I know this board is capable of running each drive at its rated speed/transfer rates because it was doing so not but a few weeks ago, yet it refuses to do so now.
Anyone got any insights into this little problem?
-Tsuki

Swapping the second drive onto IDE2 (without the CD-ROMs) gives me full performance off both drives,
but I shouldn't need to stick my drives in some funky HDD/CD+HD/CD config just to get full performance :\ -
Graphics2D.drawRenderedImage performance problems
I am having a major problem with an image viewer that I am building in Java. When I display the image it is extremely slow (~30 sec for 1024x600 pixels). Has anyone run into this problem when using Graphics2D.drawRenderedImage? If so, is there any way to work around it and speed it up?
Thanks for any help.
Cristy

No, I am actually using native libraries to open the image. I receive a PlanarImage from the library call to work with, then I create a raster with only the tiles I need, then I create a DataBuffer and a WritableRaster from that. Then I create a BufferedImage, which is what I am using to call drawRenderedImage. I have exchanged drawRenderedImage for plain drawImage with the same results.
With trusty print statements I have determined that it is the single call to drawImage or drawRenderedImage that takes forever.
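A common cause of slow drawImage calls on a hand-built BufferedImage is a ColorModel/SampleModel combination that matches no accelerated pipeline, forcing a per-pixel software conversion on every paint. One workaround (a sketch, not something suggested in this thread) is to copy the image once into a standard TYPE_INT_RGB image and draw that copy thereafter:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CompatibleCopy {
    // One-time copy into a standard TYPE_INT_RGB image, so subsequent
    // drawImage calls can use a fast loop instead of converting the
    // custom raster pixel-by-pixel on every repaint.
    static BufferedImage toIntRgb(BufferedImage src) {
        BufferedImage dst = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null);  // the slow conversion happens once, here
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        // Stand-in for an image built from a custom raster/data buffer.
        BufferedImage odd = new BufferedImage(64, 40, BufferedImage.TYPE_USHORT_555_RGB);
        BufferedImage fast = toIntRgb(odd);
        System.out.println(fast.getWidth() + "x" + fast.getHeight());  // 64x40
    }
}
```

The conversion cost is paid once at load time instead of on every repaint of the viewer.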
Thanks for any help.
Cristy -
Satellite C75 - Performance problems after recovery
After I recovered my Satellite C75 back to factory settings (which took several hours) I suffer from drastic performance problems. Games that were running at 50 fps now run at about 10 fps.
What might be the problem, and what can I do?

I'm quite sure. And the difference is huge. I thought that maybe it had something to do with updates, but nothing has changed.
-
In order to model a situation I need an object type to hold a table of objects of the enclosing type. To accomplish this I did the following (using 11.2.0.1 running on Windows 64-bit):
* declared a base type without the table of objects
* declared the table type to be a table of the base type
* declared a sub-type of the base type that includes a member of the table type, as well as the functions and procedures I need.
This works fine. Because the table actually holds instances of the sub-type I have to use the treat function to cast table elements to the correct type, but this is not a problem.
The problem arises when I create constructor functions to allow me to not have to specify every data member when creating an instance of the sub-type. When the constructor includes a table instance as a parameter – which must be assigned to the member in the constructor – the performance degrades dramatically over calling the default constructor if the constructor is called multiple times.
I have distilled this down to a simple type with three members, two of which are declared in the base type, with the table member declared in the sub-type, as shown below.
h2. Type Definitions
-- base type --
create type base_t as object(
text varchar2(16),
val int
) not final;
-- table type (of base type) --
create type table_t as table of base_t;
-- sub type specification --
create type sub_t under base_t(
items table_t,
constructor function sub_t(
p_text in varchar2)
return self as result,
constructor function sub_t(
p_text in varchar2,
p_items in table_t)
return self as result,
member procedure output_text(
p_level in pls_integer default 0)
);
-- sub type body --
create type body sub_t as
constructor function sub_t(
p_text in varchar2)
return self as result
is
begin
self.text := p_text;
return;
end sub_t;
constructor function sub_t(
p_text in varchar2,
p_items in table_t)
return self as result
is
begin
self.text := p_text;
self.items := p_items;
return;
end sub_t;
member procedure output_text(
p_level in pls_integer default 0)
is
prefix varchar2(32767);
sub sub_t;
begin
for i in 1..p_level loop
prefix := prefix || ' ';
end loop;
dbms_output.put_line(prefix || self.text);
if self.items is not null then
for i in 1..self.items.count loop
sub := treat(items(i) as sub_t);
sub.output_text(p_level + 1);
end loop;
end if;
end;
end;

I have also built a little testing routine that creates an object of the sub_t type (with nested objects of the same type in the table member) in a loop, as shown below:
h2. Testing Routine
declare
type timestamps_t is table of timestamp;
ts1 timestamps_t := timestamps_t();
ts2 timestamps_t := timestamps_t();
obj sub_t;
dflt boolean := true;
times pls_integer := 10000;
begin
-- setup --
ts1.extend(times);
ts2.extend(times);
-- iterate --
for i in 1..times loop
ts1(i) := systimestamp;
if dflt then
-- call default constructor --
obj :=
sub_t('1', null, table_t(
sub_t('A'),
sub_t('B', null, table_t(
sub_t('i', null, table_t(sub_t('a'))),
sub_t('ii'))),
sub_t('C')));
else
-- call user-defined constructor --
obj :=
sub_t('1', table_t(
sub_t('A'),
sub_t('B', table_t(
sub_t('i', table_t(sub_t('a'))),
sub_t('ii'))),
sub_t('C')));
end if;
ts2(i) := systimestamp;
end loop;
-- output timing --
if times < 10 then
for i in 1..times loop
dbms_output.put_line('time '||i||' : '||(ts2(i) - ts1(i)));
end loop;
end if;
dbms_output.put_line('total : '||(ts2(times) - ts1(1)));
-- output data --
obj.output_text;
end;

When I execute the test routine as shown (with dflt = true and times = 10000) I get the following output (281 milliseconds to construct 10000 objects using the default constructor, with the expected contents displayed correctly):
h2. Output using Default Constructor (10000 times)
total : +000000000 00:00:00.281000000
1
A
B
i
a
ii
C However, when I set dflt = false and set times = 7 I get the following output. The expected contents are still display correctly, but it takes over *4-1/2 seconds* to create just seven objects, but notice the geometric growth in time over each iteration. If I increase the number of iterations to 8, the routine eats the system and never returns!
h2. Output using User-Defined Constructor (7 times)
time 1 : +000000000 00:00:00.000000000
time 2 : +000000000 00:00:00.000000000
time 3 : +000000000 00:00:00.015000000
time 4 : +000000000 00:00:00.016000000
time 5 : +000000000 00:00:00.141000000
time 6 : +000000000 00:00:00.734000000
time 7 : +000000000 00:00:03.703000000
total : +000000000 00:00:04.609000000
1
A
B
i
a
ii
C

What gives? How can I use a custom constructor and at least approach the performance of the default constructor? I have run the hierarchical profiler, which indicates the time is being consumed in the 2-parameter constructor.

You have prepared a wonderful reproducible case, which I would use in a service request with Oracle Support.
You probably won't find many 'experts' on this forum who are experienced in the OO features of PL/SQL. You might try the Objects forum, although that forum does not seem to be very 'active'... -
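One possible workaround for the constructor issue above (a sketch only, untested against this specific case; the function name new_sub_t is invented here, not from the thread) is to drop the user-defined constructors and wrap the default attribute-value constructor in a standalone factory function instead:

```sql
-- Hypothetical workaround: a standalone factory function that delegates
-- to the default attribute-value constructor (text, val, items), so no
-- user-defined constructor is needed on sub_t.
create or replace function new_sub_t(
  p_text  in varchar2,
  p_items in table_t default null)
return sub_t
is
begin
  return sub_t(p_text, null, p_items);
end new_sub_t;
/
```

Nested calls would then become new_sub_t('A'), new_sub_t('B', table_t(...)), and so on. Whether this sidesteps the slowdown would need to be verified, since the profiler points at the user-defined constructor itself.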
Performance problem in a SELECT statement - how to improve the performance
First SELECT statement:
SELECT a~extno
a~guid_lclic " for next select
e~ctsim
e~ctsex
*revised spec 3rd
f~guid_pobj
f~amnt_flt
f~amcur
f~guid_mobj
e~srvll "pk
e~ctsty "PK
e~lgreg "PK
INTO TABLE gt_sagmeld
FROM /SAPSLL/LCLIC as a
INNER JOIN /sapsll/tlegsv as e on e~lgreg = a~lgreg
* revised spec 3rd
inner join /sapsll/legcon as f on f~guid_lclic = a~guid_lclic " for ccngn1 selection
inner join /sapsll/corcts as g on g~guid_pobj = f~guid_pobj
where a~extno in s_extno.
sort gt_sagmeld by guid_lclic guid_pobj lgreg ctsty srvll.
delete adjacent duplicates from gt_sagmeld comparing guid_lclic guid_pobj.
It selects about 20 lakh (2 million) records.
The SELECT statement below is the one taking time, as it is driven by the entries of gt_sagmeld:
select /sapsll/corpar~guid_mobj
/sapsll/corpar~PAFCT
but000~bpext
but000~partner
/sapsll/corpar~parno
into table gt_but001
from /sapsll/corpar
INNER join but000 on but000~partner = /sapsll/corpar~parno
for all entries in gt_sagmeld
where /sapsll/corpar~guid_mobj = gt_sagmeld-guid_mobj
and /sapsll/corpar~PAFCT = 'SH'.
SELECT /sapsll/cuit~guid_cuit " PK
/sapsll/cuit~QUANT_FLT " to be displayed
/sapsll/cuit~QUAUM " to be displayed
/sapsll/cuit~RPTDT " to be displayed
/sapsll/cuhd~guid_cuhd " next select
/sapsll/cuit~guid_pr " next select
INTO table gt_sapsllcuit
FROM /sapsll/cuit
inner join /sapsll/cuhd on /sapsll/cuit~guid_cuhd = /sapsll/cuhd~guid_cuhd
FOR all entries in gt_sagmeld
WHERE /sapsll/cuit~guid_cuit = gt_sagmeld-guid_pobj.
Delete adjacent duplicates from gt_sapsllcuit[].
if not gt_sapsllcuit[] is initial.

Hi navenet,
That didn't work.
We need to try your ranges option, but I am not sure what you meant in your last mail, as the use of ranges is not clear to me. Can you please elaborate more? I am pasting the full code here:
SELECT a~extno
a~guid_lclic " for next select but000
e~ctsim
e~ctsex
e~srvll
e~ctsty
e~lgreg
INTO TABLE gt_sagmeld
FROM /SAPSLL/LCLIC as a
INNER JOIN /sapsll/tlegsv as e on e~lgreg = a~lgreg
where a~extno in s_extno.
sort gt_sagmeld by guid_lclic.
delete adjacent duplicates from gt_sagmeld comparing all fields.
IF not gt_sagmeld[] is initial.
SELECT /sapsll/legcon~guid_lclic
/sapsll/legcon~guid_pobj
/sapsll/legcon~amnt_flt
/sapsll/legcon~amcur
but000~bpext
*revised spec
/sapsll/corpar~PAFCT
/sapsll/legcon~guid_mobj
/sapsll/cuit~guid_cuit
INTO TABLE gt_but000
FROM /SAPSLL/LEGCON
for all entries in gt_sagmeld
where /SAPSLL/legcon~guid_lclic = gt_sagmeld-guid_lclic.
IF NOT GT_BUT000[] IS INITIAL.
sort gt_but000 by guid_mobj.
delete adjacent duplicates from gt_but000 comparing guid_mobj.
select /sapsll/corpar~guid_mobj
/sapsll/corpar~PAFCT
/sapsll/corpar~parno
into table gt_but001
from /sapsll/corpar
for all entries in gt_but000
where /sapsll/corpar~guid_mobj = gt_but000-guid_mobj
and /sapsll/corpar~PAFCT = 'SH'.
DELETE gt_but001 where PAFCT <> 'SH'.
*sort gt_corpar by parno.
*delete adjacent duplicates from gt_corpar comparing parno.
*select gd000~partner
*       gd000~bpext
*  from gd000 into table gt_but001
*  for all entries in gt_corpar
*  where gd000~partner = gt_corpar-parno.
My ultimate aim is to select bpext from gd000.
Can you please explain how to use ranges here, what their significance is, and how I will read the data from the final table if we use ranges?
Regards
Nishant -
Performance problem - and the ORDERED hint
Hello,
The following query is from Oracle Applications EBS 11i, running on a 10.2.0.3 instance.
It takes about 28 hours to finish.
Its execution plan is very long, about 150 lines.
To keep it simple I pasted only the first 35 rows of the ORIGINAL plan.
Id 34 below shows that the unique index HZ_PARTIES_U1 fetched each and every row from the table (1,751,000 rows).
| 34 | ===============> INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1751K
SELECT * FROM TABLE(dbms_xplan.display_awr('9r0jrz9ygpt9f'));
PLAN_TABLE_OUTPUT
SQL_ID 9r0jrz9ygpt9f
SELECT /*+ index(ap_bank_accounts) */ TRX_ID, ROW_ID, TRX_NUMBER, AMOUNT
FROM CE_222_TRANSACTIONS_V
WHERE BANK_ACCOUNT_ID = :B8 AND CURRENCY_CODE = :B7 AND (:B6 IN ('CREDIT','NSF','REJECTED') OR TRX_TYPE = 'MISC')
AND DECODE(:B6 , 'MISC_DEBIT', -1, 1) * AMOUNT BETWEEN :B2 * (1 - :B5 )
AND :B2 * (1 + :B5 )
AND TRX_DATE BETWEEN (:B1 - :B4 )
AND (:B1 + :B3 )
AND NOT EXISTS ( SELECT 'already matched trx' FROM IL_CE_REC WHERE TRX_ROWID = ROW_ID)
ORDER BY ABS(:B2 - AMOUNT), ABS(:B1 - TRX_DATE)
Plan hash value: 1193373660
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 23321 (100)| |
| 1 | SORT ORDER BY | | 3 | 228 | 23321 (16)| 00:01:02 |
| 2 | NESTED LOOPS ANTI | | 3 | 228 | 23320 (16)| 00:01:02 |
| 3 | VIEW | CE_222_TRANSACTIONS_V | 3 | 192 | 23320 (16)| 00:01:02 |
| 4 | UNION-ALL | | | | | |
| 5 | FILTER | | | | | |
| 6 | FILTER | | | | | |
| 7 | NESTED LOOPS | | 1 | 395 | 11668 (16)| 00:00:31 |
| 8 | NESTED LOOPS OUTER | | 1 | 359 | 11667 (16)| 00:00:31 |
| 9 | NESTED LOOPS OUTER | | 1 | 323 | 11666 (16)| 00:00:31 |
| 10 | FILTER | | | | | |
| 21 | HASH JOIN | | 1 | 150 | 11648 (16)| 00:00:31 |
| 22 | NESTED LOOPS | | 1 | 54 | 4 (0)| 00:00:01 |
| 23 | NESTED LOOPS | | 1 | 18 | 2 (0)| 00:00:01 |
| 24 | TABLE ACCESS BY INDEX ROWID| AP_BANK_ACCOUNTS_ALL | 1 | 13 | 2 (0)| 00:00:01 |
| 25 | INDEX UNIQUE SCAN | AP_BANK_ACCOUNTS_U1 | 1 | | 1 (0)| 00:00:01 |
| 26 | INDEX UNIQUE SCAN | AP_BANK_BRANCHES_U1 | 2022 | 10110 | 0 (0)| |
| 27 | INDEX RANGE SCAN | FND_LOOKUP_VALUES_U1 | 1 | 36 | 2 (0)| 00:00:01 |
| 28 | TABLE ACCESS BY INDEX ROWID | AR_CASH_RECEIPTS_ALL | 1 | 96 | 11643 (16)| 00:00:31 |
| 29 | INDEX RANGE SCAN | AR_CASH_RECEIPTS_N8 | 211K| | 882 (18)| 00:00:03 |
| 30 | VIEW PUSHED PREDICATE | RA_HCUSTOMERS | 1 | 15 | 4 (0)| 00:00:01 |
| 31 | NESTED LOOPS | | 1 | 24 | 4 (0)| 00:00:01 |
| 32 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCOUNTS | 1 | 18 | 3 (0)| 00:00:01 |
| 33 | INDEX UNIQUE SCAN | HZ_CUST_ACCOUNTS_U1 | 1 | | 2 (0)| 00:00:01 |
| 34 | ===============> INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1751K| 10M| 1 (0)| 00:00:01 |
| 35 | TABLE ACCESS BY INDEX ROWID | AR_RECEIPT_METHODS | 1 | 8 | 1 (0)| 00:00:01 |

When I added an ORDERED hint inside the view, the plan changed as follows:
| 30 | VIEW PUSHED PREDICATE | RA_HCUSTOMERS | 1 | 15 | 4 (0)| 00:00:01 |
| 31 | NESTED LOOPS | | 1 | 24 | 4 (0)| 00:00:01 |
| 32 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCOUNTS | 1 | 18 | 3 (0)| 00:00:01 |
|* 33 | INDEX UNIQUE SCAN | HZ_CUST_ACCOUNTS_U1 | 1 | | 2 (0)| 00:00:01 |
|* 34 | ==============> INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1 | 6 | 1 (0)| 00:00:01 |
|* 35 | INDEX UNIQUE SCAN | FND_LOOKUP_VALUES_U1 | 1 | 36 | 1 (0)| 00:00:01 |
You can see that line 34 now returns just 1 row and not 1751K rows.

The Oracle documentation (http://download.oracle.com/docs/cd/A97630_01/server.920/a96533/hintsref.htm) says:
You might want to use the ORDERED hint to specify a join order if you know something about the number of rows selected
from each table that the optimizer does not.
Since I don't want to use the ORDERED hint, I would like to know what statistics information the optimizer is missing. What should I look for?
I have up-to-date statistics, and below are the results.
SQL> select table_name,last_analyzed,num_rows,blocks
2 from dba_tables
3 where table_name='HZ_PARTIES';
TABLE_NAME LAST_ANAL NUM_ROWS BLOCKS
HZ_PARTIES 06-JUN-10 1760190 65325
SQL>
SQL>
SQL> select count(*) from apps.HZ_PARTIES;
COUNT(*)
1771794
SQL> select index_name,last_analyzed,num_rows,clustering_factor,blevel,distinct_keys
2 from dba_indexes
3 where table_name='HZ_PARTIES'
4 and index_name like '%U1';
INDEX_NAME LAST_ANAL NUM_ROWS CLUSTERING_FACTOR BLEVEL DISTINCT_KEYS
HZ_PARTIES_U1 06-JUN-10 1758162 158739 2 1758162

Thanks for your help.

Nothing I can see in your post relates to the optimizer, and you yourself wrote:
Oracle documentation: http://download.oracle.com/docs/cd/A97630_01/server.920/a96533/hintsref.htm said:
You might want to use the ORDERED hint
so clearly the Oracle documentation does say that.
Either post in the correct forum and explain your reasoning, or I cannot help you further. -
Performance problem with 10g Forms
We have just migrated from Forms 6i to Forms 10g. The application takes a lot of time to load each form. In the status bar I can see that it is trying to open each GIF file of the forms menu.
One question: if I have created a JAR of all the GIFs, where will the applet pick it up from? Right now I have a JAR placed in /forms/java as well as a virtual directory pointing to a path on my HP-UX server that has all the GIFs, and it seems the applet picks them up from the virtual directory path.
Navigating through a hierarchical tree is also much slower.
Have I missed something?

Hello,
REP-56107: Invalid environment id {0} for Job Type {1} in the command line
Cause:
The envid specified for the job is not defined in the server configuration file.
Action:
Specify an envid that is already defined in the server configuration file or define a new envid in the server configuration file and restart the Server.
It seems that the envid parameter is used in the reports request and this environment is not defined in the Reports Server configuration file:
For more details :
http://download-uk.oracle.com/docs/cd/B14099_17/bi.1012/b14048/pbr_cla.htm#sthref2786
Regards -
Hi,
I'm using ESS and I have performance problems with one application: Per_Bank_ES spends a lot of time responding, and I've seen that most of the time is spent in the J2EE engine. Other ESS Web Dynpro applications have no performance problems; they work fine.
Using the sap.session.ssr.showInfo=true parameter we get:
Browser: 531; J2EE: 767094; Backend: 44688
What can I do, since this application is a standard SAP Web Dynpro application?
Does anyone have a similar problem?
Thanks

Hi,
It looks like the problem is in the backend; check the backend application's performance (check the transaction related to Per_Bank_ES).
Regards
Ben -
Fast Refresh mview performance
Hi,
I'm currently facing fast refresh materialized view performance problems and would like to know the possible improvements for the fast refresh procedure:
- base table is 1,500,000,000 rows, partitioned by day, subpartitioned by hash (4)
- MLOG partitioned by hash (4), with indexes on the PK and snaptime
- mview partitioned by day, subpartitioned by hash (4)
10,000,000 insertions/day into the base table/MLOG
What improvements or indexes can I add to speed up the fast refresh?
Thanks for your help

Hi,
Which DB version are you using?
Did you have a look at the MV refresh via Partition Change Tracking (PCT)?
http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/advmv.htm#sthref575
If it's possible to use PCT, it would probably greatly improve the performance of the refresh of your MV.
Regards
Maurice -
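As an illustration of the PCT requirement mentioned above (table and column names here are invented, not from the thread): the materialized view must select the detail table's partition key (or a partition marker) so that Oracle can track staleness per partition:

```sql
-- Sketch: a fast-refreshable aggregate MV that includes the base table's
-- partition key (trade_date), which enables Partition Change Tracking.
-- All names are hypothetical.
create materialized view mv_daily_totals
  refresh fast on demand
as
select trade_date,                  -- partition key of base_tab
       account_id,
       sum(amount)   as total_amount,
       count(amount) as cnt_amount, -- required for a fast-refreshable SUM
       count(*)      as cnt
from   base_tab
group by trade_date, account_id;
```

With the partition key in the select list and group by, a refresh after a partition load can be satisfied by recomputing only the affected partitions instead of scanning the whole MV log.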
Help! iTunes is making my iMac act like a Windows machine!
Here is the basic problem:
Some songs will not play properly in iTunes, pausing often and causing the spinning beach ball of death to freeze my iMac for 5 to 10 minutes at a time, sometimes longer. I am also having problems ripping those songs to a CD.
I am trying to decide if I should uninstall iTunes or simply back everything up and do a clean install of Snow Leopard and iTunes.
I’m also wondering if simply getting more RAM would solve my woes. I have 2GB, which worked well for years. I have 4GB coming in the mail.
I have a 2008 iMac. It has about 175GB of free space on its hard drive, about 144 GB in use. With iTunes playing a file and nothing of particular note open, it has just under 1GB of free system memory, or RAM. I have both the latest version of Snow Leopard and iTunes installed. All software is up to date. Except, of course, that I'm not running Lion.
I have a 45GB in music and 54GB in podcasts in my iTunes library.
I have corrected my hard drive’s permissions repeatedly but am unaware of any other maintenance I should be doing on my hard drive.
Here are some observations that may or may not be helpful:
Songs in the same folder are behaving differently, some consistently playing without a problem, others consistently spinning the beach ball. For example, five songs off an album will freeze up my iMac but the album’s other songs do not.
I can, however, duplicate the symptoms with otherwise dependable songs by having more applications open. Nothing major, say Safari, Finder and iTunes. What I don’t get is why some songs would play well and others not when only iTunes is open.
Having a handful of applications open now sometimes prompts the spinning beach ball of death. This never used to happen, hence the RAM coming in the mail.
When I check information on the malfunctioning music files, it shows they contain data, but I start getting the beach ball checking the file’s “sharing & permissions” status in "Get Info."
Interestingly, I recently had to stop backing up my podcasts in Time Machine because some of them were crashing it. My music files, however, have been backing up fine in Time Machine, so far.
Also, when I try to copy my music files onto an external drive or my desktop, in anticipation of reinstalling iTunes, my iMac cites error 36 and stops the process.
My iMac's performance problems started to occur before I finally updated to Snow Leopard a few months ago. Indeed, I thought updating would solve them.
I had been regularly updating iTunes, however, I recall reading somewhere that upgrading to Snow Leopard while having an up-to-date iTunes installed might cause problems.
There you have it. Any advice or insights would be most welcome.

Well, I seem to have solved my own problem. Do I get 10 points, I wonder? It appears that my poor iMac was in desperate need of a massive cleaning. Thanks to other posts here and elsewhere, I found out about Cocktail, a free download (unless you chip in for the more advanced version). This is a powerful and easy to understand tool. I used it to perk my iMac up to a level I have not seen in a LONG time. The spinning beach ball of death must go play elsewhere. Among other things, I cleared out all my caches, eliminated a ton of foreign languages and did several repairs, including tracking down and killing a corrupted permission. It also taught me how to tap into my computer's activity logs and gave me other useful insights. GOOD stuff. I did other things not related to Cocktail to free up processing power, such as turning off the file-size view option in Finder, but Cocktail is what solved this problem. As I type this, I am happily listening to songs that earlier would not play. What a relief. I am now especially looking forward to that new RAM. We'll be blazing fast then, by gum.