Performance Problems in XI using RFC
Hi All,
I have some doubts about XI performance:
Does anybody know if there are any
performance restrictions on making RFC calls to XI?
What is the best-performing solution in XI:
IDoc adapter or RFC adapter, file adapter or RFC adapter?
Is it possible that something in the mapping
causes performance problems?
Thanks
Hi
You need to create a Type H RFC destination in the PI system pointing to ECC, and the reverse in the ECC system pointing to PI.
Create the RFC destination and test it. The connection test will show an error because of the empty message, but the proxy will work fine with actual data.
Thanks
Gaurav
Similar Messages
-
Performance problem: 100% swap used, but vmstat sr = 0
Hi,
I have a performance problem on a server. It is sometimes very slow for several hours at a time.
context : v890, 32 Go RAM, 8 SPARC IV+, solaris 10 release 03/05, veritas volume manager, containers, several oracle databases, applications...
with iostat, swap partition: %b -> 100% !!!!
with vmstat: r -> 0, b -> 0, w -> 29, free memory: 600 MB, sr -> 0, idle: more than 50%
uptime, load average: 6
vmstat -S: si -> 0, so -> 0
vmstat -p: api -> 45126682863 (probably a bug), apo -> 0, fpi -> 1895320681342 (probably a bug), fpo -> 0
It's difficult for me to find the problem. Is it paging activity? Can someone tell me at what free-memory limit paging activity starts?
If you think I'm on the wrong track, thanks for all ideas :)
Julien
Edited by: Wylem on Feb 28, 2008 6:11 PM
Does seem a bit odd.
The 'w' column doesn't necessarily mean that anything bad is happening now, but it does mean that the system was severely memory limited at some point in the past at least.
Paging should occur when free memory drops below LOTSFREE. I don't remember if swapping happens at a particular point, but probably wouldn't happen above DESFREE. The page scanner should become active (non-zero 'sr' numbers) any time the memory is below LOTSFREE.
Since you have Solaris 10, you might want to grab the dtrace toolkit and see if some of the tools in there show you anything more useful (some of the I/O ones might break down the access further).
So it really doesn't look like you're swapping/paging out anything now, but you almost certainly did in the past. It could be that you're running an app that paged out a lot of stuff to disk, so the I/O you're seeing is it bringing that data back now that RAM is available.
Darren -
"blued" process causing performance problems (98% CPU use)
I have a friend whose MacBook fan is spinning at high RPMs almost constantly. I took a look at her Activity Monitor and there's a process running called "blued" that seems to be the source of the problem. It runs nearly constantly at 90%+ CPU usage, with frequent spikes into the 96-100% range, which I assume is what is sending the cooling fan spinning out of control. Does anyone know what this is? I've Googled it to no avail, and am hesitant to just kill it before I know what it is. Any assistance would be greatly appreciated.
Hi Evan,
I found this in the Apple Developer area: http://developer.apple.com/documentation/Darwin/Reference/ManPages/man8/blued.8.html
I would just kill that process from the Activity Monitor.
Also, run the hardware tests from your Install DVD (hold down D at boot) and see if it reports any SNS (temperature sensor) errors - if one has failed or the leads have become disconnected then the fans will run at maximum revs by default, and the machine should go in for service.
Carolyn -
Performance Problem with File Adapter using FTP Connection
Hi All,
I have a pool of 19 interfaces that send data from R/3 using the RFC adapter, and these interfaces generate 30 TXT files on a target server. I'm using file adapters as the receiver communication channels. This is causing a serious performance problem. In the file adapter I'm using an FTP connection with connection mode "Permanently". Does somebody know if the permanent connection is the cause of the performance problem?
These interfaces will run once a day with total of 600 messages.
We are still using a test server with few messages.
Hi Regis,
We also faced the same problem. What's happening is that when the FTP session is initiated by the file adapter, it is done from the XI server, so the memory of the server is also eaten up. Why don't you give 'per file transfer' a try?
If the folder you are connecting to is within your XI server's network, then you can mount (or map) that drive on the XI server and use it with the NFS protocol of the file adapter, thereby increasing performance.
Cheers
JK -
Performance problems using BPM
Hello all,
We have a Request/Reply interface using the JMS adapter towards the SAP R/3 system.
So the interface has 3 steps :
1. Request from JMS adapter - Asynchronous
2. Request/Reply using RFC adapter towards the R/3 - sync
3. Reply to JMS adapter - Asynchronous
We are using BPM to merge all those steps, but between step 1 and step 2 it waits about 4 seconds (we can see the message in SM58 for 2-3 seconds).
It's a problem for us because we exceed the time allowed for the request/reply at the MQ (which is connected to the JMS adapter).
Does someone have any idea?
Regards,
Yaki
Hello again Michelle, Kjetil,
We looked into this and given your details provided
regarding the query executed and the fact that this
problem shows up over thin network connections, we
believe you may be hitting one/both of Bug 2658196
and Bug 1844184.
To summarise, regardless of whether you use version
control or not, Designer must check version details
for each repository object acted on.
On a local machine or LAN, this does not present a
problem, but on a WAN, it does, due to the amount of
(albeit individually small) packets of information
sent backwards and forwards between the client and
the repository.
Unfortunately, Oracle Designer was never designed to
run across a WAN. To attempt to get the performance
required would involve a re-design and re-write of a
large part of the Design Editor. I'm afraid this is
not really a practical option.
Other customers have also run into this problem. They
have found that using a third party product, such as,
Tarantella solves the problem. Products like Tarantella
enable clients to use central installations of Designer
with better results than using a locally installed copy.
Please see Metalink (http://metalink.oracle.com) for
full details on the bugs themselves.
Hope this helps.
Regards,
Dominic
Designer Product Management
Oracle Cor -
Hi,
This is similar - yet different - to a few of the old postings about performance
problems using JDBC drivers against SQL Server 7 & 2000.
Here's the situation:
I am running a standalone java application on a Solaris box using BEA's jdbc driver
to connect to a Sql Server database on another network. The application retrieves
data from the database through joins on several tables for approximately 40,000
unique ids. It then processes all of this data and produces a file. We tuned
the app so that the execution time for a single run through the application was
24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
Sql Server 2000 version. I ran the app and got an alarming execution time of
5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
and set the "useVarChars" property to "true" on the driver. The execution time
for a single run through the application is now 56 minutes.
56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
over twice the execution time that I was seeing against the 6.5 database. Theoretically,
I should be able to switch out my jdbc driver and the DBMS conversion should be
invisible to my application. That would also mean that I should be seeing the
same execution times with both versions of the DBMS. Has anybody else seen a
similar situation? Are there any other settings or fixes that I can put into place
to get my performance back down to what I was seeing with 6.5? I would rather
not have to go through and perform another round of performance tuning after having
already done this when the app was originally built.
thanks,
mike
Mike wrote:
Joe,
This was actually my next step. I replaced the BEA driver with
the MS driver and let it run through with out making any
configuration changes, just to see what happened. I got an
execution time of about 7 1/2 hrs (which was shocking). So,
(comparing apples to apples) while leaving the default unicode
property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
I then set the 'SendStringParametersAsUnicode' to 'false' on the
MS driver and ran another test. This time the application
executed in just over 24 minutes. The actual runtime was 24 min
16 sec, which is still ever so slightly above the actual runtime
against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
56 minutes that BEA's driver was giving me.
I think that this is very interesting. I checked to make sure that
there were no outside factors that may have been influencing the
runtimes in either case, and there were none. Just to make sure,
I ran each driver again and got the same results. It sounds like
there are no known issues regarding this?
We have people looking into things on the DBMS side and I'm still
looking into things on my end, but so far none of us have found
anything. We'd like to continue using BEA's driver for the
support and the fact that we use Weblogic Server for all of our
online applications, but this new data might mean that I have to
switch drivers for this particular application.
Thanks. No, there is no known issue, and if you put a packet sniffer
between the client and DBMS, you will probably not see any appreciable
difference in the content of the SQL sent by either driver. My suspicion is
that it involves the historical backward compatibility built in to the DBMS.
It must still handle several iterations of older applications, speaking obsolete
versions of the DBMS protocol, and expecting different DBMS behavior!
Our driver presents itself as a SQL7-level application, and may well be treated
differently than a newer one. This may include different query processing.
Because our driver is deprecated, it is unlikely that it will be changed in
future. We will certainly support you using the MS driver, and if you look
in the MS JDBC newsgroup, you'll see more answers from BEA folks than
from MS people!
Joe
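As an aside for anyone hitting this thread later: the nvarchar-vs-varchar issue Mike found is a general pattern. When the driver sends a parameter type that doesn't match the indexed column, the server converts the column side of the comparison and the index is skipped. A rough stand-in demonstration (plain Python with the stdlib sqlite3 module, table and column names invented; SQLite shows the analogous effect when an expression is applied to the indexed column):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (name TEXT, city TEXT)")
con.execute("CREATE INDEX idx_name ON customers(name)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the access path.
    return con.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

# Comparing the bare column: the optimizer can seek the index.
seek = plan("SELECT * FROM customers WHERE name = 'A'")
print(seek)   # typically: SEARCH customers USING INDEX idx_name (name=?)

# Wrapping the column in a conversion (the analogue of an implicit
# varchar -> nvarchar cast applied to the column): full table scan.
scan = plan("SELECT * FROM customers WHERE lower(name) = 'a'")
print(scan)   # typically: SCAN customers
```

The fixes mentioned in this thread (useVarChars on the BEA jDriver, SendStringParametersAsUnicode=false on the MS driver) amount to the same idea: send the parameter in the column's own type so the conversion, and with it the scan, never happens.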
Mike,
The next test you should do, to isolate the issue, is to try another
JDBC driver. MS provides a type-4 driver now, for free. If it is
significantly faster, it would be interesting. However, it would still
not isolate the problem, because we still would need to know what query
plan is created by the DBMS, and why.
Joe Weinstein at BEA
PS: I can only tell you that our driver has not changed in its semantic
function. It essentially sends SQL to the DBMS. It doesn't alter it. -
Performance problems when using Premiere Elements for photo slideshows
Hello,
I had been using Premiere Elements 9 (PE9) to make a simple slideshow for my parents from their vacation trip and I ran into some serious performance problems. I had used it to create similar projects before, but not nearly as big. This one is like 260 photos, so basically it is 260 separate clips. I have a powerhouse workstation (see below) so it isn't my PC. Even when PE9 crashes, my performance monitor shows my CPU and RAM aren't even halfway utilized. I finally switched to Windows Movie Maker of all things and it worked seamlessly, amazing really.
I'm wondering if I was just using PE9 for something other than what it was designed for, since there weren't really any video clips, just a ton of photos that I made into video clips, if that makes sense. Based upon my experience so far, I can't imagine using PE9 anymore for anything, really. I might need a more professional video editing program in the near future, although PE does seem to have a lot of features. How can I make sure it utilizes my workstation to its full potential? Here are my specs:
PC
Intel Core i7-2600K 4.6 GHz Overclocked
ASUS P8P67 Deluxe Motherboard
AMD Firepro V8800 Video Card
Crucial 128 GB SATA 6Gb/s Solid State Drive (Operating System)
Corsair Vengeance 16GB (4x4GB) Memory
Corsair H60 Liquid CPU Cooler
Corsair Professional Series Gold AX850 Power Supply
Graphite Series 600T Mid-Tower Case
Western Digital Caviar Black 1 TB SATA III Hard Drive
Western Digital Caviar Black 2 TB SATA III Hard Drive
Western Digital Green 3 TB SATA III Hard Drive
Logitech Wireless Gaming Mouse G700
I don’t play any games but it’s a great productivity mouse with 13 customizable buttons
Wacom Intuos5 Pen Tablet
Yes, this system is blazingly fast. I have yet to feel it slow down, even with Photoshop, Lightroom, InDesign, Illustrator and numerous other apps running at the same time. HOWEVER, Premiere Elements 9 has crashed NUMEROUS times, and every time my system wasn't even close to being fully taxed.
Monitors – All run on the ATI V8800
Dell Ultra Sharp 30 inch
Samsung 27 Inch
HAANS-G 28 Inch
Herman Miller Embody Ergonomic Chair (one of my favorite items)
Andy,
There ARE some differences between PrE and PrPro w/ an approved CUDA-capable and MPE hardware acceleration-enabled nVidia video card, but those differences show up ONLY in the quality of the Scaling. The processing overhead is almost exactly the same, when it comes to handling the extra pixels.
As of PrPro CS 5, two things changed:
The max. size of Still Images went up from 4096 x 4096 pixels, to quite a bit larger (cannot recall the numbers now).
The Scaling algorithms have been improved, though ONLY with the correct nVidia cards, with MPE hardware support enabled.
Now, there CAN be another consideration, between the two programs, in that PrPro CS 5 - CS 6, are 64-bit ONLY, so one benefits from the computer and OS to run it. PrE can be either 32-bit, or 64-bit, so one might, or might not, be taking advantage of the 64-bit program and OS. Still, the processing overhead will be almost identical, it's just that the 64-bit OS can spread it around a bit.
I still recommend Scaling the large Still Images in PS, prior to Import, to keep that processing overhead as low as is possible. Scaled Still Images work just fine, and I have one Project with 3000+ Scaled Still Images, that edits just fine in PrPro, even on my older 32-bit workstation. Testing that same machine, and PrPro some years ago, I could ONLY work with up to 5 - 4096 x 4096 Stills, before things ground to a crawl.
Now, Adobe AfterEffects handles large Still Images differently, so I just moved that test Project to AE, and added another 20 large Images, which edited just fine. IIRC, AE can handle Still Images up to 10K x 10K pixels, and that might have gone up, as of CS 5.
Good luck, and hope that helps,
Hunt -
Performance problem in RFC to JDBC interface
Hello everybody!
I'm working with SAP PI 7.1.
We defined some RFC - PI - JDBC (SQL Server) interfaces, but we have a performance problem.
If we have many rows to write to the table, the interface ends in a timeout:
Synchronous timeout exceeded.
Returning to application. Exception: com.sap.engine.interfaces.messaging.api.exception.MessageExpiredException: Message 1d1f00b0-fecf-11de-8738-0015600446f0(OUTBOUND) expired.
I read the PI tuning document and tried to apply the Advanced Adapter Engine configuration, but without result.
Now we want to change the timeout in Visual Admin, and maybe that will get rid of the error, but I'm asking myself:
Is it normal that writing 1500 rows to a table needs more than 4 minutes????
Is it possible to accelerate this process??? After go-live we will write messages with more than 50,000 rows.
Can somebody help me?
PS: please no link to the tuning guide or to notes (to increase the timeout parameter).
This could be because your database system (JDBC server) is taking more time to insert. The problem is not on the PI side but on the receiving system side. Try inserting the same number of rows on the database server itself and check the time taken for execution. Adding indexes on your database table solves the issue a lot of the time.
Here PI is not the culprit but definitely the receiver system.
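To make the "test it on the database side" suggestion concrete, here is a small stand-alone sketch (plain Python with the stdlib sqlite3 module standing in for SQL Server; the table and names are invented). It shows the two knobs that usually matter for bulk inserts: batching instead of one statement per row, and an index on the key column so any per-row existence check (e.g. for UPDATE_INSERT) doesn't scan the table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, descr TEXT)")
# An index on the key column keeps any per-row lookup from scanning the table.
con.execute("CREATE INDEX idx_orders_id ON orders(id)")

rows = [(i, "item %d" % i) for i in range(1500)]

# Row-by-row: one statement (and, over a network, one round trip) per row.
for r in rows[:10]:
    con.execute("INSERT INTO orders VALUES (?, ?)", r)

# Batched: a single executemany() for the rest, then one commit.
con.executemany("INSERT INTO orders VALUES (?, ?)", rows[10:])
con.commit()

count = con.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 1500
```

If the database alone already needs minutes for 1500 rows like this, the adapter timeout is a symptom, not the cause; batching on the sending side (newer JDBC adapter versions expose a batch option) only helps once the database side is fast.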
VJ -
Performance problem in Mapping Designer using UDF with external imports
Hello,
we have a big performance problem in developing (not in executing) graphical mappings whenever we use user-defined functions (UDFs) with import entries referencing JAR files that are imported as "imported archives".
For example, the execution of an invoice mapping with a somewhat bigger test file in the mapping designer:
- after opening, not in change mode: 6 seconds
- after switching to change mode: 37 seconds (that's clear, now everything is compiled first)
- after adding "com.seeburger.functions.permstore.CounterFactory;" to the "import" field of one UDF, no other change: 227 seconds
- after saving and submitting the change list (no longer in change mode): 6 seconds
- after switching to change mode: 227 seconds
So the execution time of testing (and also of watching queues) increases in change mode by more than three minutes when using a UDF with imports referencing external JAR files. It doesn't depend on the Seeburger functions (we are using XI also for EDIFACT, so we use some Seeburger functions); I can reproduce it with any other JAR file used from a UDF.
Using functions included with Java, like "java.text.NumberFormat;" in "Import", doesn't slow down the testing.
Can anybody reproduce this? We are using XI 3.0 SP19 on a AIX machine, so we also have to use the Java version from IBM.
cu
Manfred
Problem was fixed by an upgrade of the JDK.
-
Performance problem using OBJECT tag
I have a performance problem using the Java plugin and was wondering if anyone else has seen the same thing. I have a rather complex applet that interacts with JavaScript in a web page using the LiveConnect API. The applet both calls JavaScript in the page and is called by JavaScript.
I'm using IE6 with the Java plugin that ships with the 1.4.2_06 JVM. I have noticed that if I deploy the applet using the OBJECT tag, the application seems to thrash every time I call a Java method on the applet from JavaScript. When I deploy the same applet using the APPLET tag the performance is much better. I would like to use the OBJECT tag because the applet behaves better and I have more control over the caching.
This problem seems to be on the boundaries of IE6, JScript, the JVM and my applet (and I suppose any could be the real culprit). My application is IE5+ specific, so I cannot test the applet in isolation from the surrounding HTML/JavaScript (for example in another browser).
Does anyone have any idea?
thanks in advance.
dennis. -
Performance Problem between Oracle 9i to Oracle 10g using Crystal XI
We have a Crystal XI Report using ODBC Drivers, 14 tables, and one sub report. If we execute the report on an Oracle 9i database the report will complete in about 12 seconds. If we execute the report on an Oracle 10g database the report will complete in about 35 seconds.
Our technical Setup:
Application server: Windows Server 2003, Running Crystal XI SP2 Runtime dlls with Oracle Client 10.01.00.02, .Net Framework 1.1, C# for Crystal Integration, Unmanaged C++ for app server environment calling into C# through a dynamically loaded mixed-mode C++ DLL.
Database server is Oracle 10g
What we have concluded:
Reducing the number of tables to 1 will reduce the execution time of the report from 180s to 13s. With 1 table and the sub report we would get 30 seconds
We have done some database tracing and see that Crystal Reports issues the following query when verifying the database, and it takes longer in 10g vs 9i.
We have done some profiling in the application code. When we retarget the first table to the target database, it takes 20-30 times longer in 10g than in 9i. Retargeting the other tables takes about twice as long. The export to a PDF file takes about 4-5 times as long in 10g as in 9i.
Oracle 10g no longer supports the /*+ RULE */ hint.
Verify DB Query:
select /*+ RULE */ *
from
(select /*+ RULE */ null table_qualifier, o1.owner table_owner,
o1.object_name table_name, decode(o1.owner,'SYS', decode(o1.object_type,
'TABLE','SYSTEM TABLE','VIEW', 'SYSTEM VIEW', o1.object_type), 'SYSTEM',
decode(o1.object_type,'TABLE','SYSTEM TABLE','VIEW', 'SYSTEM VIEW',
o1.object_type), o1.object_type) table_type, null remarks from all_objects
o1 where o1.object_type in ('TABLE', 'VIEW') union select /*+ RULE */ null
table_qualifier, s.owner table_owner, s.synonym_name table_name, 'SYNONYM'
table_type, null remarks from all_objects o3, all_synonyms s where
o3.object_type in ('TABLE','VIEW') and s.table_owner= o3.owner and
s.table_name = o3.object_name union select /*+ RULE */ null table_qualifier,
s1.owner table_owner, s1.synonym_name table_name, 'SYNONYM' table_type,
null remarks from all_synonyms s1 where s1.db_link is not null ) tables
WHERE 1=1 AND TABLE_NAME='QCTRL_VESSEL' AND table_owner='QLM' ORDER BY 4,2,
3
SQL From Main Report:
SELECT "QCODE_PRODUCT"."PROD_DESCR", "QCTRL_CONTACT"."CONTACT_FIRST_NM", "QCTRL_CONTACT"."CONTACT_LAST_NM", "QCTRL_MEAS_PT"."MP_NM", "QCTRL_ORG"."ORG_NM", "QCTRL_TKT"."SYS_TKT_NO", "QCTRL_TRK_BOL"."START_DT", "QCTRL_TRK_BOL"."END_DT", "QCTRL_TRK_BOL"."DESTINATION", "QCTRL_TRK_BOL"."LOAD_TEMP", "QCTRL_TRK_BOL"."LOAD_PCT", "QCTRL_TRK_BOL"."WEIGHT_OUT", "QCTRL_TRK_BOL"."WEIGHT_IN", "QCTRL_TRK_BOL"."WEIGHT_OUT_UOM_CD", "QCTRL_TRK_BOL"."WEIGHT_IN_UOM_CD", "QCTRL_TRK_BOL"."VAPOR_PRES", "QCTRL_TRK_BOL"."SPECIFIC_GRAV", "QCTRL_TRK_BOL"."PMO_NO", "QCTRL_TRK_BOL"."ODORIZED_VOL", "QARCH_SEC_USER"."SEC_USER_NM", "QCTRL_TKT"."DEM_CTR_NO", "QCTRL_BA_ENTITY"."BA_NM1", "QCTRL_BA_ENTITY_VW"."BA_NM1", "QCTRL_BA_ENTITY"."BA_ID", "QCTRL_TRK_BOL"."VOLUME", "QCTRL_TRK_BOL"."UOM_CD", "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD", "QXREF_BOL_PROD"."BOL_DESCR", "QCTRL_TKT"."VOL", "QCTRL_TKT"."UOM_CD", "QCTRL_PMO"."LINE_UP_BEFORE", "QCTRL_PMO"."LINE_UP_AFTER", "QCODE_UOM"."UOM_DESCR", "QCTRL_ORG_VW"."ORG_NM"
FROM (((((((((((("QLM"."QCTRL_TRK_BOL" "QCTRL_TRK_BOL" INNER JOIN "QLM"."QCTRL_PMO" "QCTRL_PMO" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_PMO"."PMO_NO") INNER JOIN "QLM"."QCTRL_MEAS_PT" "QCTRL_MEAS_PT" ON "QCTRL_TRK_BOL"."SUP_MP_ID"="QCTRL_MEAS_PT"."MP_ID") INNER JOIN "QLM"."QCTRL_TKT" "QCTRL_TKT" ON "QCTRL_TRK_BOL"."PMO_NO"="QCTRL_TKT"."PMO_NO") INNER JOIN "QLM"."QCTRL_CONTACT" "QCTRL_CONTACT" ON "QCTRL_TRK_BOL"."DRIVER_CONTACT_ID"="QCTRL_CONTACT"."CONTACT_ID") INNER JOIN "QFC_QLM"."QARCH_SEC_USER" "QARCH_SEC_USER" ON "QCTRL_TRK_BOL"."USER_ID"="QARCH_SEC_USER"."SEC_USER_ID") LEFT OUTER JOIN "QLM"."QCODE_UOM" "QCODE_UOM" ON "QCTRL_TRK_BOL"."ODORIZED_VOL_UOM_CD"="QCODE_UOM"."UOM_CD") INNER JOIN "QLM"."QCTRL_ORG_VW" "QCTRL_ORG_VW" ON "QCTRL_MEAS_PT"."ORG_ID"="QCTRL_ORG_VW"."ORG_ID") INNER JOIN "QLM"."QCTRL_BA_ENTITY" "QCTRL_BA_ENTITY" ON "QCTRL_TKT"."DEM_BA_ID"="QCTRL_BA_ENTITY"."BA_ID") INNER JOIN "QLM"."QCTRL_CTR_HDR" "QCTRL_CTR_HDR" ON "QCTRL_PMO"."DEM_CTR_NO"="QCTRL_CTR_HDR"."CTR_NO") INNER JOIN "QLM"."QCODE_PRODUCT" "QCODE_PRODUCT" ON "QCTRL_PMO"."PROD_CD"="QCODE_PRODUCT"."PROD_CD") INNER JOIN "QLM"."QCTRL_BA_ENTITY_VW" "QCTRL_BA_ENTITY_VW" ON "QCTRL_PMO"."VESSEL_BA_ID"="QCTRL_BA_ENTITY_VW"."BA_ID") LEFT OUTER JOIN "QLM"."QXREF_BOL_PROD" "QXREF_BOL_PROD" ON "QCTRL_PMO"."PROD_CD"="QXREF_BOL_PROD"."PURITY_PROD_CD") INNER JOIN "QLM"."QCTRL_ORG" "QCTRL_ORG" ON "QCTRL_CTR_HDR"."BUSINESS_UNIT_ORG_ID"="QCTRL_ORG"."ORG_ID"
WHERE "QCTRL_TRK_BOL"."PMO_NO"=12345 AND "QXREF_BOL_PROD"."MOVEMENT_TYPE_CD"='TRK'
SQL From Sub Report:
SELECT "QXREF_BOL_VESSEL"."PMO_NO", "QXREF_BOL_VESSEL"."VESSEL_NO"
FROM "QLM"."QXREF_BOL_VESSEL" "QXREF_BOL_VESSEL"
WHERE "QXREF_BOL_VESSEL"."PMO_NO"=12345
Does anyone have any suggestions on how we can improve the report performance with 10g?
Hi Eric,
Thanks for your response. The optimizer mode in our 9i database is CHOOSE. We changed the optimizer mode from ALL_ROWS to CHOOSE in 10g but it didn't make a difference.
While researching Metalink I came across a couple of documents that indicated performance problems and issues with using certain data-dictionary views in 10g. Apparently, the definitions of ALL_OBJECTS, ALL_ARGUMENTS and ALL_SYNONYMS have changed in 10g, resulting in degraded performance when queries are run against these views. These are the same queries that Crystal Reports is issuing. We'll try the workaround suggested in these documents and see if it resolves the issue.
Here are the Doc Ids, if you are interested:
Note 377037.1
Note:364822.1
Thanks again for your response.
Venu Boddu. -
Performance problem when using CAPS LOCK piano input
Dear reader,
I'm very new to Logic and am running into a performance problem when using the CAPS LOCK piano keyboard for input of an instrument: when I'm not recording, everything is fine and the program responds instantly to my keystrokes, but as soon as I go into record mode there is sometimes a delay in the response time (so I press a key and it takes up to half a second before the note is actually played).
Is there anything I can do about this to improve performance (for example turning off certain features of the application), or should I never use the CAPS LOCK keyboard anyway and go straight for an external MIDI keyboard?
Thanks and regards,
Tim Metz
Does your project have Audio tracks, and just how heavy is it? How many tracks? Also, what kind of Software Instrument do you use?
-
I want to perform bidirectional I/O using a PCI-6534. The problem is that I need to use the same lines for input and output. Is that possible? And if it is, how do I do it?
Hi I Pant,
The idea that crossed my mind was to tie each of the lines coming from/going to "your device" to two of the DIO lines. Configure one of these two ports as input, the other as output.
When you want to read the state of the line, read from the line configured as input. Similarly, use the other port when it is time to write.
Provided "your device" uses tri-state drivers/receivers, or can be used in a "wired-OR" configuration, this may work.
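The suggestion hinges on the wired-OR behaviour mentioned above: with open-drain drivers and a pull-up, the shared line is high only while every driver has released it. A toy truth-table check of that rule (plain Python, purely illustrative, no NI driver calls):

```python
def line_level(released):
    """Level of a shared wired-OR line.

    Each entry is True when that driver has released the line
    (high-impedance) and False when it actively pulls it low.
    The pull-up keeps a fully released line high, so one driver
    pulling low is enough to make every input port read low.
    """
    return all(released)

# Output port released, external device released: input port reads high.
print(line_level([True, True]))    # True

# External device pulls low: the input port sees it immediately.
print(line_level([True, False]))   # False
```

This is why the read-back port works: it always sees the resolved line state, not just what the output port last wrote.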
What is "your device"?
Ben
Ben Rayner
I am currently active on.. MainStream Preppers
Rayner's Ridge is under construction -
Forms performance problem on the web, using webutil.
When starting the webutil demo form on the application server,
webutil's eight JavaBeans are loaded in 1 second.
I'm using &WebUtilLogging=Console&WebUtilLoggingDetail=Detailed for logging this.
When starting the same form from a client, the beans are loaded in ~30-40 seconds.
Any suggestions to figure out why?
Problem solved!
Don't use the IP address in the URL; use the hostname or add an entry to the hosts file on the client. http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5092063
Metalink
Note:402180.1 Initial Loading of Webutil Forms Are Slow
Note 356190.1 Performance Problems in Forms with Webutil 1.0.6 for Intranet Web Clients -
Problem in Fetching data using RFC FM from NON SAP system
Hi All,
Need help from experts on data transfer using RFC from a non-SAP system/database. I have created the destination in SM59.
I have also created a function module, which has an import parameter for a query and an export parameter for an internal table.
Now when I test-run this function module it returns some entries. But when I call this FM in a program it throws the dump 'RFC_EXTERNAL_ABORT'. Here is what the call to the FM looks like in my program. Please guide me on this.
Thanks in advance,
Saket.
DATA : lv_query TYPE string,
it_gddbdata TYPE ZC9_TAB_SOLMAN_XI_RFC.
lv_query = 'SELECT * FROM GDDB.VW_GDDB_PERSONS WHERE LASTNAME = ''''A'''''.
CALL FUNCTION 'Z_C9_SOLMAN_XI_GDDBCON'
DESTINATION 'D39'
EXPORTING
in_sql_query = lv_query
IMPORTING
ET_GDDB_DATA = it_gddbdata.
Hi,
your question..
DATA : lv_query TYPE string,
it_gddbdata TYPE ZC9_TAB_SOLMAN_XI_RFC.
lv_query = 'SELECT * FROM GDDB.VW_GDDB_PERSONS WHERE LASTNAME = ''''A'''''.
CALL FUNCTION 'Z_C9_SOLMAN_XI_GDDBCON'
DESTINATION 'D39'
EXPORTING
in_sql_query = lv_query
IMPORTING
ET_GDDB_DATA = it_gddbdata
You say you are fetching data from a non-SAP system using an RFC function module. How is that possible? You cannot fetch data from a non-SAP system using an RFC FM alone; you have to use the BAPI concept. That will work.
I hope this will help you..
Regards,
Kiran