JSF inputText in dataTable costs too much time
Hi Guys,
My project uses JSF for web development. On one page I need to show 200 records, each with about 15 fields. Since the fields need to be editable, I use input boxes inside the dataTable. The problem is that when I submit the form, it takes a very long time. Does anybody have a solution? Thanks in advance!
Jason
Hi Gimbal2,
Thanks for your reply. The details are here.
I am using JSF 1.1 with the MyFaces implementation. The dataTable contains 200 rows x 15 input boxes; when I submit the form to the backend, more than 95% of the time is spent in the apply-request-values phase. During the submit there is no database connection and no other remote connection.
The time spent in my local environment is about 1-2 minutes, but in my client's environment it takes even longer and times out, so the performance is not acceptable.
I've searched this forum and someone mentioned the same issue (input boxes in a dataTable causing a performance problem), but it seems nobody has come up with a solution yet. I agree that pagination would work, but the client won't use that feature, because there are at most 200 records.
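For reference, a minimal sketch of the setup being described (JSF 1.1 with JSP-style tags; all names here are invented for illustration, not taken from the actual project). With 200 rows and 15 editable fields, apply-request-values has to walk and decode roughly 3,000 input components on every postback, which is where the time goes:
// Hypothetical backing bean holding the editable rows
public class RecordListBean {
    private java.util.List records = new java.util.ArrayList(); // 200 Record objects

    public java.util.List getRecords() { return records; }

    public String save() {
        // persist the edited rows here; note that in the scenario above
        // the slowness occurs before any action or database code runs
        return "success";
    }
}
<!-- Hypothetical page fragment: one h:inputText per editable field -->
<h:dataTable value="#{recordListBean.records}" var="row">
  <h:column><h:inputText value="#{row.field1}"/></h:column>
  <!-- ...14 more columns like the one above... -->
</h:dataTable>
<h:commandButton value="Submit" action="#{recordListBean.save}"/>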
Edited by: Jason.Ye on Apr 3, 2009 12:54 AM
Similar Messages
-
Taking too much time (1min) to connect to database
Hi,
I have Oracle 10.2 and 10g Application Server.
It's taking too much time to connect to the database through the application (in a browser). Connecting through SQL*Plus is fine.
Please share your experience.
Regards,
Naseer
Dear AnaTech,
I am going to ask something not related to the question you already answered: how do I connect Forms 6i and Developer 10g with OracleAS?
I have installed Developer Suite 10g Ver. 10.1.2, which is working, and also Form Builder 6i. On my other machine I installed Oracle Database 10g Enterprise Edition Release 10.2.0.1.0, and on that same database machine I installed Oracle Enterprise Manager 10g Application Server Control 10.1.2.0.2.
Database connectivity from Developer Suite Forms and Reports, and also from Forms 6i and Reports 6i, works fine; no problem there.
Now my first question: when I try to run a Forms 6i form through Run from Web, I get this error: FRM-99999: Error 18121 occurred; see the release notes.
And my main question: how can I control my OracleAS 10g with Forms? Basically the functionality of OracleAS is the mid-tier, but I am not using the mid-tier; I am working in a two-tier environment even though I installed a three-tier environment. So please tell me how to use it in three-tier mode.
I hope you don't mind that I ask this question here. Also, if you give me your email, we can discuss this in detail and I can benefit from your great expertise; I would also like to understand and use my real three-tier environment.
Waiting for your great response.
Regards,
K.J.J.C -
Oracle 11G + Database Connection taking too much Time
Hi ,
I am using Oracle 11g. When I try to connect to the Oracle database with
sqlplus system/impetus@database
it takes too much time (more than 1 minute).
I don't understand why this is happening.
It was working fine until a few days ago, but then the problem suddenly appeared. Is this issue related to the log files generated by Oracle, or is it memory related (SGA and PGA)?
Any help, please?
Darn! /tmp/capture.log does not contain what I expected it to contain.
So a different approach is needed.
Issue the following command:
script /tmp/capture.log
# now you are in a new shell; issue the next 3 lines
strace sqlplus system/impetus@database
exit
Ctrl-d (Control D)
Now issue the following commands:
ls -l /tmp/capture.log
wc -l /tmp/capture.log
Post the results from the 2 commands above back here. -
SAP GUI taking too much time to open transactions
Hi guys,
I have completed a system copy from the Production server to the Quality server.
After that I started SAP on the Quality server; it is taking too much time to open SAP transactions (everything goes into compilation mode).
I started SGEN, but it is giving TIME_OUT errors. Please help me with this issue.
My hardware details on the Quality server:
operating system : SuSE Linux 10 SP2
Database : 10.2.0.2
SAP : ECC 6.0 SR2
RAM size : 8 GB
Hard disk space : 500 GB
swap space : 16 GB.
regards
Ramesh
Hi,
>i started SGEN, but it is giving time_out errors. please help me on this issue.
You are supposed to run SGEN as a batch job, and then it should not be possible to get timeout errors.
I've seen a full SGEN last from 3 hours on high-end systems up to 8 full days on PC hardware...
Regards,
Olivier -
BPC application is taking too much time to load
Hi experts!
I'm facing a very weird problem...
We've developed a BPC application (app name: USM).
This application takes too much time to load on some computers (around 8 minutes). Yes, on SOME computers.
There are around 100,000 records in the database, most coming from material master data.
If I load this USM application on another computer, it loads smoothly. The computers' hardware is all the same, the server is generously sized, and everyone is on the same network.
I talked to the infrastructure department and we ran several tests. We ran BPC on the server (it loaded quickly) and on several computers (some load quickly, others don't), used wireless and cable connections (same result either way), and checked the communication between BW and BPC, which is OK.
After all that, I tried to load the APSHEL application in the same environment and it loaded instantly. So I guess something is wrong with my application. But if that were the case, I would expect it to happen on all computers, not only on some of them.
Has anybody ever seen something like this?
Thank you in advance.
Rubens
Edited by: Rubens Massayuki Kumori on May 12, 2011 8:43 PM
Edited by: Rubens Massayuki Kumori on May 12, 2011 8:46 PM
Hi Rubens,
I would try a couple of tests:
1. Install the client on a machine located in the same network segment, or use a VPN that communicates with the server bypassing all security devices, just to see whether the network is the problem.
2. Run a full optimize of one application to see if the problem is related to the segmentation of the cubes (I don't think this is the problem, but give it a try).
It is very weird that it happens on some computers and not on others... also try clearing the local application cache on the computers that are giving you bad performance, and retry.
hope it helps, -
[Solved] Dolphin taking too much time to respond
Dolphin, when used by an ordinary user, takes too much time to respond, especially when Firefox is running. The time it takes to initialize is also very long. At times it does not even open, and even when active it hangs for quite some time before becoming responsive again while selecting a file or two.
At the same time, it works rather faster when run as superuser from the command line using $ sudo dolphin
But then the console displays a lot of error messages, as follows.
sebinaj ~
$ sudo dolphin
Error: "/var/tmp/kdecache-sebinaj1Jz46I" is owned by uid 1002 instead of uid 0.
Error: "/tmp/kde-sebinaj" is owned by uid 1002 instead of uid 0.
sebinaj ~
$ Error: "/tmp/ksocket-sebinaj" is owned by uid 1002 instead of uid 0.
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
"/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
Error: alias title requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#title, http://www.semanticdesktop.org/ontologies/2007/03/22/nco#title
Error: alias comment requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#comment, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#comment
Error: alias count requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#count, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#count
Error: alias created requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#created, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#created
Error: alias description requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#description, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#description
Error: alias duration requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#duration, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#duration
Error: alias encoding requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#encoding, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#encoding
Error: alias role requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#role, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#role
Error: alias url requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#url, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#url
Error: alias version requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#version, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#version
Error: alias bitsPerSample requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#bitsPerSample, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#bitsPerSample
Error: alias copyright requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#copyright, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#copyright
Error: alias date requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#date, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#date
Error: alias dateTime requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#dateTime, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#dateTime
Error: alias geo requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#geo, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#geo
Error: alias height requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#height, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#height
Error: alias width requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#width, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#width
Error: alias date requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#date, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#date
Error: alias fileOwner requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#fileOwner, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#fileOwner
Error: alias language requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#language, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#language
Error: alias length requested by several properties: http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#length, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#length
Error: alias publisher requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#publisher, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#publisher
Error: alias title requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#title, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#title
Error: alias contributor requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#contributor, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#contributor
Error: alias created requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#created, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#created
Error: alias creator requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#creator, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#creator
Error: alias description requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#description, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#description
Error: alias identifier requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#identifier, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#identifier
Error: alias lastModified requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#lastModified, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#lastModified
Error: alias version requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#version, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#version
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#fileExtension' is not defined in any rdfs ontology database.
WARNING: field 'http://strigi.sf.net/ontologies/0.9#debugParseError' is not defined in any rdfs ontology database.
/usr/lib/strigi/strigiea_ics.so
/usr/lib/strigi/strigiea_jpeg.so
/usr/lib/strigi/strigiea_vcf.so
/usr/lib/strigi/strigila_cpp.so
/usr/lib/strigi/strigila_deb.so
/usr/lib/strigi/strigila_diff.so
/usr/lib/strigi/strigila_mobi.so
/usr/lib/strigi/strigila_namespaceharvester.so
/usr/lib/strigi/strigila_po.so
/usr/lib/strigi/strigila_txt.so
/usr/lib/strigi/strigila_xpm.so
/usr/lib/strigi/strigita_au.so
/usr/lib/strigi/strigita_audible.so
/usr/lib/strigi/strigita_avi.so
/usr/lib/strigi/strigita_dds.so
/usr/lib/strigi/strigita_dvi.so
/usr/lib/strigi/strigita_font.so
/usr/lib/strigi/strigita_gif.so
/usr/lib/strigi/strigita_ico.so
/usr/lib/strigi/strigita_mp4.so
/usr/lib/strigi/strigita_pcx.so
/usr/lib/strigi/strigita_rgb.so
/usr/lib/strigi/strigita_sid.so
/usr/lib/strigi/strigita_ts.so
/usr/lib/strigi/strigita_wav.so
/usr/lib/strigi/strigita_xbm.so
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#usesNamespace' is not defined in any rdfs ontology database.
WARNING: field 'translation.total' is not defined in any rdfs ontology database.
WARNING: field 'translation.translated' is not defined in any rdfs ontology database.
WARNING: field 'translation.untranslated' is not defined in any rdfs ontology database.
WARNING: field 'translation.obsolete' is not defined in any rdfs ontology database.
WARNING: field 'diff.stats.modify_file_count' is not defined in any rdfs ontology database.
WARNING: field 'diff.first_modify_file' is not defined in any rdfs ontology database.
WARNING: field 'content.format_subtype' is not defined in any rdfs ontology database.
WARNING: field 'content.generator' is not defined in any rdfs ontology database.
WARNING: field 'diff.stats.hunk_count' is not defined in any rdfs ontology database.
WARNING: field 'diff.stats.insert_line_count' is not defined in any rdfs ontology database.
WARNING: field 'diff.stats.modify_line_count' is not defined in any rdfs ontology database.
WARNING: field 'diff.stats.delete_line_count' is not defined in any rdfs ontology database.
WARNING: field 'translation.fuzzy' is not defined in any rdfs ontology database.
WARNING: field 'translation.last_translator' is not defined in any rdfs ontology database.
WARNING: field 'translation.translation_date' is not defined in any rdfs ontology database.
WARNING: field 'translation.source_date' is not defined in any rdfs ontology database.
WARNING: field 'http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#colorCount' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#formatSubtype' is not defined in any rdfs ontology database.
WARNING: field 'http://www.semanticdesktop.org/ontologies/nfo#bitsPerSample' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#audioSampleDataType' is not defined in any rdfs ontology database.
WARNING: field 'content.mime_type' is not defined in any rdfs ontology database.
WARNING: field 'audio.title' is not defined in any rdfs ontology database.
WARNING: field 'audio.artist' is not defined in any rdfs ontology database.
WARNING: field 'todo.audio.narrator' is not defined in any rdfs ontology database.
WARNING: field 'media.codec' is not defined in any rdfs ontology database.
WARNING: field 'todo.audible.user_id' is not defined in any rdfs ontology database.
WARNING: field 'todo.audible.user_alias' is not defined in any rdfs ontology database.
WARNING: field 'audio.duration' is not defined in any rdfs ontology database.
WARNING: field 'content.description' is not defined in any rdfs ontology database.
WARNING: field 'content.copyright' is not defined in any rdfs ontology database.
WARNING: field 'content.keyword' is not defined in any rdfs ontology database.
WARNING: field 'content.creation_time' is not defined in any rdfs ontology database.
WARNING: field 'content.maintainer' is not defined in any rdfs ontology database.
WARNING: field 'content.ID' is not defined in any rdfs ontology database.
WARNING: field 'audio.channel_count' is not defined in any rdfs ontology database.
WARNING: field 'http://www.semanticdesktop.org/ontologies/nfo#colorDepth' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#colorSpace' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#compressionAlgorithm' is not defined in any rdfs ontology database.
WARNING: field 'font.family' is not defined in any rdfs ontology database.
WARNING: field 'font.weight' is not defined in any rdfs ontology database.
WARNING: field 'font.slant' is not defined in any rdfs ontology database.
WARNING: field 'font.width' is not defined in any rdfs ontology database.
WARNING: field 'font.spacing' is not defined in any rdfs ontology database.
WARNING: field 'font.foundry' is not defined in any rdfs ontology database.
WARNING: field 'content.version' is not defined in any rdfs ontology database.
WARNING: field 'content.genre' is not defined in any rdfs ontology database.
WARNING: field 'TODO_trackNumber' is not defined in any rdfs ontology database.
WARNING: field 'TODO_discNumber' is not defined in any rdfs ontology database.
WARNING: field 'content.author' is not defined in any rdfs ontology database.
WARNING: field 'content.comment' is not defined in any rdfs ontology database.
WARNING: field 'audio.album' is not defined in any rdfs ontology database.
WARNING: field 'TODO_audio.albumartist' is not defined in any rdfs ontology database.
WARNING: field 'content.links' is not defined in any rdfs ontology database.
WARNING: field 'TODO_content.purchaser' is not defined in any rdfs ontology database.
WARNING: field 'TODO_content.purchasedate' is not defined in any rdfs ontology database.
WARNING: field 'media.duration' is not defined in any rdfs ontology database.
WARNING: field 'TODO_video.duration' is not defined in any rdfs ontology database.
WARNING: field 'av.audio_codec' is not defined in any rdfs ontology database.
WARNING: field 'av.video_codec' is not defined in any rdfs ontology database.
WARNING: field 'content.thumbnail' is not defined in any rdfs ontology database.
WARNING: field 'user.rating' is not defined in any rdfs ontology database.
WARNING: field 'image.width' is not defined in any rdfs ontology database.
WARNING: field 'image.height' is not defined in any rdfs ontology database.
WARNING: field 'media.sample_rate' is not defined in any rdfs ontology database.
WARNING: field 'media.sample_format' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#artist' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#albumTrackCount' is not defined in any rdfs ontology database.
WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#musicAlbum' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#genre' is not defined in any rdfs ontology database.
WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#composer' is not defined in any rdfs ontology database.
WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#trackNumber' is not defined in any rdfs ontology database.
WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#setNumber' is not defined in any rdfs ontology database.
WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#performer' is not defined in any rdfs ontology database.
WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#internationalStandardRecordingCode' is not defined in any rdfs ontology database.
WARNING: field 'Product Id' is not defined in any rdfs ontology database.
WARNING: field 'Events' is not defined in any rdfs ontology database.
WARNING: field 'Journals' is not defined in any rdfs ontology database.
WARNING: field 'Todos' is not defined in any rdfs ontology database.
WARNING: field 'Todos Completed' is not defined in any rdfs ontology database.
WARNING: field 'Todos Overdue' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#ccdWidth' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#focusDistance' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#targetQuality' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#givenName' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#familyName' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#emailAddress' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#homepageContactURL' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#contentComment' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#cellPhoneNumber' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#homePhoneNumber' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#workPhoneNumber' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#faxPhoneNumber' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#phoneNumber' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#homePostalAddress' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#workPostalAddress' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#postalAddress' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#honorificPrefix' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#honorificSuffix' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#subject' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#title' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#author' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#description' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#copyright' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#isContentEncrypted' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#contentKeyword' is not defined in any rdfs ontology database.
WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#paragraphCount' is not defined in any rdfs ontology database.
WARNING: field 'http://rdf.openmolecules.net/0.9#moleculeCount' is not defined in any rdfs ontology database.
kDebugStream called after destruction (from void KDirWatchPrivate::removeEntry(KDirWatch*, KDirWatchPrivate::Entry*, KDirWatchPrivate::Entry*) file /home/phil/kdemod/core/kdelibs/src/kdelibs-4.3.3/kio/kio/kdirwatch.cpp line 901)
Cancelled INotify (fd 9, 1) for "/home/sebinaj/.local/share"
^C
I am using KDEmod + Arch
Last edited by absolutevoid (2009-11-05 17:23:59)
There's a large thread around about this Dolphin problem.
Disabling Nepomuk in System Settings has proved to be the
cure in many cases.
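If you'd rather do it outside System Settings, a minimal sketch (KDE 4 era; the exact file location varies by distribution, so both the path and the setting shown here are assumptions to verify on your system):
# assumed file: ~/.kde4/share/config/nepomukserverrc (some setups use ~/.kde/share/config/)
[Basic Settings]
Start Nepomuk=false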
Deej -
Report taking too much time in the portal
Hi friends,
we have developed a report on an ODS and published it on the portal.
The problem is that when many users execute the report at the same time, it takes too much time and the performance is very poor.
Is there any way to sort out this issue? For example, could we send the report to the individual users' email IDs so that they do not have to log in to the portal, or could we create the same report on a cube?
What would be the main difference between building the report on a cube versus on the ODS?
Please help me.
thanks in advance
sridath
Hi
Try this to improve the performance of the query.
Find the query run-time. Where do you find the query run-time? See:
Note 557870 - FAQ BW Query Performance
Note 130696 - Performance trace in BW
This info may be helpful.
General tips
Using aggregates and compression.
Use fewer and less complex cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid many characteristics in the rows.
Use T-codes ST03 or ST03N:
Go to transaction ST03 > switch to expert mode > in the left-side menu open the system load history and distribution for a particular day > check the query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the records transferred to the front end versus the records selected in the database.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. To check the performance of the aggregates, see the columns Valuation and Usage in aggregate maintenance.
Open the aggregates and observe the VALUATION and USAGE columns.
The string of "+" and "-" signs is the valuation of the aggregate's design and usage. "++" means the compression is good and the aggregate is accessed a lot (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the greater the number of minus signs, the worse the evaluation of the aggregate.
"-----" means the aggregate is just overhead and can potentially be deleted, while "+++++" means the aggregate is potentially very useful.
In the Valuation column, more positive signs mean the aggregate performs well and is useful to keep, while more negative signs mean we had better not use that aggregate.
In the Usage column, we can see how much the aggregate has been used by queries.
Thus we can check the performance of the aggregate.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This tells you whether any aggregates were hit while running; if it does not show any aggregates, you might want to redesign your aggregates for the query.
Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
Implement the BW Statistics Business Content: you need to install it, feed it data, and analyze it through the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 202469 - Using aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless through a process of checking the RSDDSTATAGGRDEF* tables.
Run the query in RSRT with statistics on, and afterwards you will get a STATUID... copy this and look it up in the table.
This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, it's a useless aggregate.
6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of each aggregate in the table.
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM -
The queries take too much time
Hi,
I want to compare query times between Spatial and PostGIS, but Spatial takes too much time. I created spatial indexes in both databases, so I have no idea where the problem could be :-(. I'm actually a student and I am trying this for my essay, but I started with Oracle by myself, so I'll appreciate some help.
I have a table with about 7000 polygons and I want to create a new table by dissolving these polygons by the attribute katuze.
The query in Spatial takes 8506 s:
CREATE TABLE dp_diss_p AS
SELECT KATUZE_KOD, SDO_AGGR_UNION(
MDSYS.SDOAGGRTYPE(a.geom, 0.005))GEOM
FROM dp_plochy a GROUP BY a.KATUZE_KOD;
The query in PostGIS takes 10.149 s:
CREATE TABLE plochy_diss AS
SELECT st_union(geom) AS geom
FROM plochy
GROUP BY katuze_kod;
I really don't know what to do; please can somebody give me some advice :-(. Please excuse my English.
Thx, Eva
I tried using sdo_aggr_set_union like this:
CREATE OR REPLACE FUNCTION get_geom_set (table_name VARCHAR2,
column_name VARCHAR2,
predicate VARCHAR2 := NULL)
RETURN SDO_GEOMETRY_ARRAY DETERMINISTIC AS
type cursor_type is REF CURSOR;
query_crs cursor_type ;
g SDO_GEOMETRY;
GeometryArr SDO_GEOMETRY_ARRAY;
where_clause VARCHAR2(2000);
BEGIN
IF predicate IS NULL
THEN
where_clause := NULL;
ELSE
where_clause := ' WHERE ';
END IF;
GeometryArr := SDO_GEOMETRY_ARRAY();
OPEN query_crs FOR ' SELECT ' || column_name ||
' FROM ' || table_name ||
where_clause || predicate;
LOOP
FETCH query_crs into g;
EXIT when query_crs%NOTFOUND ;
GeometryArr.extend;
GeometryArr(GeometryArr.count) := g;
END LOOP;
RETURN GeometryArr;
END;
/
CREATE TABLE dp_diss_p AS
SELECT KATUZE_KOD, sdo_aggr_set_union (get_geom_set ('dp_plochy', 'geom','CTVUK_KOD <>''1'''), .0005 ) geom
FROM dp_plochy a GROUP BY a.KATUZE_KOD;
It takes 21.721 s, but it returns a collection type, and I can't export it to *.shp because GeoRaptor exports it as 'unknown'. Does anybody have an idea what I can do with it to export it as polygons?
Eva -
SELECT query takes too much time! Why?
Please find my SELECT query below:
select w~mandt
w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
w~kwmeng w~vrkme w~matwa w~charg w~pstyv
w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
w~bedae w~cuobj w~mtvfp
x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
x~edatu
x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
x~ezeit
into table t_vbap
from vbap as w
inner join vbep as x on x~vbeln = w~vbeln and
x~posnr = w~posnr and
x~mandt = w~mandt
where
( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
( ( ( erdat > pre_dat and erdat < p_syndt ) or
( erdat = p_syndt and erzet <= p_syntm ) ) ) and
w~matnr in s_matnr and
w~pstyv in s_itmcat and
w~lfrel in s_lfrel and
w~abgru = ' ' and
w~kwmeng > 0 and
w~mtvfp in w_mtvfp and
x~ettyp in w_ettyp and
x~bdart in s_req_tp and
x~plart in s_pln_tp and
x~etart in s_etart and
x~abart in s_abart and
( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
The problem: It takes too much time while executing this statement.
Could anybody change this statement and help me reduce the DB access time?
Thx
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one internal table
Selection Criteria
1. Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
2. Select with selection list.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much better optimized as written below, which avoids CHECK and selects with a selection list:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
Select Statements Select Queries
1. Avoid nested selects
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
2. Select all the records in a single shot using the INTO TABLE clause of the SELECT statement, rather than using APPEND statements.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much better optimized as written below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
3. When a base table has multiple indices, the WHERE clause should follow the field order of an index, either the primary index or a secondary one.
To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields appear in the same order. In certain scenarios it is advisable to check whether a new index could speed up the performance of a program; this comes in handy for programs that access data from the finance tables (see the example sketched below).
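As an illustration (the table and index are assumed here, not taken from the tips above): if a secondary index on VBAK contains ERDAT followed by AUART, list those fields in the WHERE clause in the same order.
SELECT VBELN ERDAT AUART FROM VBAK INTO TABLE T_VBAK
WHERE ERDAT = SY-DATUM
AND AUART = 'TA'.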
4. For testing existence, use SELECT ... UP TO 1 ROWS instead of a SELECT-ENDSELECT loop with an EXIT.
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
5. Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
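For illustration, using the same SBOOK table as the earlier examples (the key values are invented): SBOOK's primary key is MANDT, CARRID, CONNID, FLDATE and BOOKID, and with the client supplied implicitly a fully specified key lets you write:
SELECT SINGLE * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'
AND FLDATE = '20090403'
AND BOOKID = '00001234'.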
Select Statements SQL Interface
1. Use column updates instead of single-row updates
to update your database tables.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
2. For all frequently used Select statements, try to use an index.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
3. Using buffered tables improves performance considerably.
Bypassing the buffer increases the network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements For All Entries
The FOR ALL ENTRIES addition creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause, as sketched below.
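A rough illustration (the table and values are assumed, not from the text above) of the shape the database receives when the driver table holds three distinct country codes:
SELECT * FROM ZFLIGH
WHERE ( CNTRY = 'US' ) OR ( CNTRY = 'IN' ) OR ( CNTRY = 'DE' ).
If the driver table exceeds the blocking factor, the same statement is simply repeated with the next batch of values.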
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Points that must be considered for FOR ALL ENTRIES:
Check that data is present in the driver table.
Sort the driver table.
Remove duplicates from the driver table.
Consider the following extract.
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
Select Statements Select Over more than one Internal table
1. It's better to use a view instead of nested SELECT statements.
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be better optimized by extracting all the data from the view DD01V:
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT
2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary key fields of the joined tables are available in the WHERE clause; if the primary keys are not provided in the join, the joining of the tables itself takes time.
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
3. Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search, but don't forget to sort your internal table first.
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably
Modifying selected components only makes the program faster as compared to Modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH followed by ADD.
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables with ITAB2[] = ITAB1[] is much faster than LOOP-APPEND-ENDLOOP.
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
SORT ITAB BY K. makes the program runs faster as compared to SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access, hashed tables are more optimized than sorted tables.
2. For partial sequential access, sorted tables are more optimized than hashed tables.
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP. -
R3load taking too much time when table REPOSRC is loaded
Hello,
I am installing SAP ECC 6.0 SR2 on Sun Solaris 10 with DB2 V9.1. 17 of the 19 jobs in the ABAP import phase have completed, but the SAPSSEXC package is taking too much time; it has been running for around 10 hours without any error, so I cancelled the SAP installation. After that I started it manually with this OS command:
/sapmnt/<SID>/exe/R3load -dbcodepage 4102 -i /<instdir>/SAPSSEXC.cmd -l /<instdir>/SAPSSEXC.log -stop_on_error -merge_bck
It has also been running for around 9 hours. I don't know why this is happening or when it will complete.
Can you tell me what to check to make this job faster, or help me resolve this issue?
I have checked SAP Notes 454368 and 455195.
If I change any DB2 parameter, I have to restart the DB2 database. What should I do? I cannot work out what to do now.
Please help me ASAP.
Thanks
Gautam Poddar
Hello,
running the R3load import step manually you might try to add the option
-loadprocedure fast LOAD
Pay attention to write the LOAD in capital letters!
This will invoke the LOAD-API whenever possible.
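For example, the manual call from the original post would then become:
/sapmnt/<SID>/exe/R3load -dbcodepage 4102 -i /<instdir>/SAPSSEXC.cmd -l /<instdir>/SAPSSEXC.log -stop_on_error -merge_bck -loadprocedure fast LOAD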
This should save some time on "normal" tables without LOB columns.
For your table REPOSRC, which has a BLOB column, the LOAD-API will not be used.
So I am sorry, this will not help in your special case.
(Thanks for the hint to Frank-Martin Haas)
Kind regards
Waldemar Gaida
Edited by: Waldemar Gaida on Jan 10, 2008 8:26 AM -
Report is taking too much time when run from the parameter form
Dear All
I have developed a report in Oracle Reports Builder 10g. When I run it from Reports Builder, the data comes back very fast.
But if it is run from the parameter form, it takes too much time to format the report as PDF.
Please suggest any configuration or setting if anybody has an idea.
Thanks
Hi,
The first thing to check is whether the query runs to completion in TOAD. By default TOAD selects only the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare that SQL and explain plan to the SQL you ran in TOAD.
Thirdly, check that the session context is the same in both cases: check that any custom contexts and the USER_ENV context are the same, and that any security packages or VPD policies used in the SQL have been initialised the same way.
If you still cannot determine the difference, then trace both sessions.
Rod West -
Application diagnostics taking too much time
EBS Version : 12.1.3
Application diagnostics is taking too much time. How do I investigate why it's taking so long? And also, how do I cancel this request?
>Application diagnostics is taking too much time. How do I investigate why it's taking so long? And also, how do I cancel this request?
Is this happening for all diagnostics scripts or specific one(s) only?
E-Business Suite Diagnostics Installation Guide (Doc ID 167000.1)
E-Business Suite Diagnostics References for R12 (Doc ID 421245.1)
E-Business Suite Diagnostic Tools FAQ and Troubleshooting Guide for Release 11i and R12 (Doc ID 235307.1)
You can cancel it from OAM, and you might need to kill the database session from the backend as well.
Thanks,
Hussein -
Data Dictionary query takes too much time.
Hello,
I am using Oracle Database 11g.
The query below is taking too much time to execute and return its output. I have tried a few Oracle SQL hints, but they didn't work out.
SELECT
distinct B.TABLE_NAME, 'Y'
FROM USER_IND_PARTITIONS A, USER_INDEXES B, USER_IND_SUBPARTITIONS C
WHERE A.INDEX_NAME = B.INDEX_NAME
AND A.PARTITION_NAME = C.PARTITION_NAME
AND C.STATUS = 'UNUSABLE'
OR A.STATUS = 'UNUSABLE'
OR B.STATUS = 'INVALID';
Please guide me on what to do to run this query faster.
Thanks in advance.
Your query is incorrect. It will return ALL tables if A.STATUS = 'UNUSABLE' or B.STATUS = 'INVALID'. Most likely you meant:
SELECT
distinct B.TABLE_NAME, 'Y'
FROM USER_IND_PARTITIONS A, USER_INDEXES B, USER_IND_SUBPARTITIONS C
WHERE A.INDEX_NAME = B.INDEX_NAME
AND A.PARTITION_NAME = C.PARTITION_NAME
AND (C.STATUS = 'UNUSABLE'
OR A.STATUS = 'UNUSABLE'
OR B.STATUS = 'INVALID');But the above will return subpartitioned tables with invalid/unusable indexes. It will not return non-subpartitioned partitioned tables with invalid/unusable indexes/index partitions same as non-partitioned tables with invalid/unusable indexes. If you want to get any table with invalid/unusable indexes you need to outer join which will hurt performance even more. I suggest you use UNION:
SELECT DISTINCT TABLE_NAME,
'Y'
FROM (
SELECT INDEX_NAME,'Y' FROM USER_INDEXES WHERE STATUS = 'INVALID'
UNION ALL
SELECT INDEX_NAME,'Y' FROM USER_IND_PARTITIONS WHERE STATUS = 'UNUSABLE'
UNION ALL
SELECT INDEX_NAME,'Y' FROM USER_IND_SUBPARTITIONS WHERE STATUS = 'UNUSABLE'
) A,
USER_INDEXES B
WHERE A.INDEX_NAME = B.INDEX_NAME
/
SY.
Auto Invoice program taking too much time: problem with update SQL
Hi,
Oracle DB version: 11.2.0.3
Oracle EBS version: 12.1.3
Though we have a Sev-1 SR open with Oracle, we have not had much success.
We have an auto invoice program that runs many times a day, and it has been taking too much time since the beginning. While troubleshooting we found one query that takes most of the time, and we seek suggestions on how to tune it. I am attaching the explain plan for it; it is an update query. Please guide.
Plan
UPDATE STATEMENT ALL_ROWSCost: 0 Bytes: 124 Cardinality: 1
50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
27 FILTER
26 HASH JOIN Cost: 8,937,633 Bytes: 4,261,258,760 Cardinality: 34,364,990
24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413 Bytes: 446,744,870 Cardinality: 34,364,990
23 SORT UNIQUE Cost: 8,618,413 Bytes: 4,042,339,978 Cardinality: 34,364,990
22 UNION-ALL
9 FILTER
8 SORT GROUP BY Cost: 5,643,052 Bytes: 3,164,892,625 Cardinality: 25,319,141
7 HASH JOIN Cost: 1,640,602 Bytes: 32,460,436,875 Cardinality: 259,683,495
1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
6 HASH JOIN Cost: 853,567 Bytes: 22,544,143,440 Cardinality: 214,706,128
4 HASH JOIN Cost: 536,708 Bytes: 2,357,000,550 Cardinality: 29,835,450
2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008 Bytes: 1,163,582,550 Cardinality: 29,835,450
3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314 Bytes: 1,193,526,000 Cardinality: 29,838,150
5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951 Bytes: 3,123,197,116 Cardinality: 120,122,966
21 FILTER
20 SORT GROUP BY Cost: 2,975,360 Bytes: 877,447,353 Cardinality: 9,045,849
19 HASH JOIN Cost: 998,323 Bytes: 17,548,946,769 Cardinality: 180,916,977
13 VIEW VIEW AR.index$_join$_027 Cost: 108,438 Bytes: 867,771,256 Cardinality: 78,888,296
12 HASH JOIN
10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206 Bytes: 867,771,256 Cardinality: 78,888,296
11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322 Bytes: 867,771,256 Cardinality: 78,888,296
18 HASH JOIN Cost: 748,497 Bytes: 3,281,713,302 Cardinality: 38,159,457
14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
17 HASH JOIN Cost: 519,713 Bytes: 1,969,317,900 Cardinality: 29,838,150
15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822 Bytes: 716,115,600 Cardinality: 29,838,150
16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847 Bytes: 1,253,202,300 Cardinality: 29,838,150
25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552 Bytes: 5,158,998,615 Cardinality: 46,477,465
41 SORT GROUP BY Bytes: 75 Cardinality: 1
40 FILTER
39 MERGE JOIN CARTESIAN Cost: 11 Bytes: 75 Cardinality: 1
35 NESTED LOOPS Cost: 8 Bytes: 50 Cardinality: 1
32 NESTED LOOPS Cost: 5 Bytes: 30 Cardinality: 1
29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3 Bytes: 22 Cardinality: 1
28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2 Cardinality: 1
31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2 Bytes: 133,114,520 Cardinality: 16,639,315
30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 20 Cardinality: 1
33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2 Cardinality: 1
38 BUFFER SORT Cost: 9 Bytes: 25 Cardinality: 1
37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 25 Cardinality: 1
36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
49 SORT GROUP BY Bytes: 48 Cardinality: 1
48 FILTER
47 NESTED LOOPS
45 NESTED LOOPS Cost: 7 Bytes: 48 Cardinality: 1
43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4 Bytes: 20 Cardinality: 1
42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3 Cardinality: 1
44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 28 Cardinality: 1
As per Oracle, they suggested multiple patches, but those have not helped. Please suggest how I can tune this query; I don't have much experience in query tuning.
Regards
Hi Paul, my bad. I am sorry I missed it.
The query is as below:
UPDATE RA_CUST_TRX_LINE_GL_DIST LGD
SET (AMOUNT, ACCTD_AMOUNT) =
    (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */
            NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ),
            NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2, NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) )
       FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1
      WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID
        AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID
        AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID
        AND LGD2.ACCOUNT_SET_FLAG = 'N'
        AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID
        AND REC1.ACCOUNT_CLASS = 'REC'
        AND REC1.LATEST_REC_FLAG = 'Y'
        AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') )
      GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ),
    PERCENT =
    (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */
            DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG, 'SUSPENSEN', LGD.PERCENT, 'UNBILLN', LGD.PERCENT, 'UNEARNN', LGD.PERCENT, NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) )
       FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2
      WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID
        AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID
        AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID
        AND REC2.ACCOUNT_CLASS = 'REC'
        AND REC2.LATEST_REC_FLAG = 'Y'
        AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG
        AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS
        AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') )
      GROUP BY REC2.GL_DATE, LGD.GL_DATE ),
    LAST_UPDATED_BY = :B1,
    LAST_UPDATE_DATE = SYSDATE
WHERE CUST_TRX_LINE_GL_DIST_ID IN
    (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */
            MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) )
       FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3
      WHERE T.REQUEST_ID = :B5
        AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID
        AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' ) OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL ))
        AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID
        AND LGD3.ACCOUNT_SET_FLAG = 'N'
        AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID
        AND REC3.ACCOUNT_CLASS = 'REC'
        AND REC3.LATEST_REC_FLAG = 'Y'
        AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4, 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) )
      GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE
     HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0)
           OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2, NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) )
     UNION
     SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */
            TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS||LGD5.ACCOUNT_SET_FLAG, 'REVN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID, NULL ) ) )
       FROM RA_CUST_TRX_LINE_GL_DIST LGD5, RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T
      WHERE T.REQUEST_ID = :B5
        AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID
        AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID
        AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID
        AND REC5.ACCOUNT_CLASS = 'REC'
        AND REC5.LATEST_REC_FLAG = 'Y'
        AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE') OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' )))
      GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS)
     HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
I understand that this is a seeded query, but my attempt is to tune it.
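Not something Oracle suggested in the SR, just a minimal sketch for comparing the optimizer's estimates in the plan above with the actual row counts; the &sql_id substitution variable and the LIKE pattern below are placeholders:

-- Locate the cursor for the statement first:
SELECT sql_id, child_number
  FROM v$sql
 WHERE sql_text LIKE 'UPDATE RA_CUST_TRX_LINE_GL_DIST%';

-- Estimated vs. actual rows per plan step; requires STATISTICS_LEVEL=ALL
-- (or the gather_plan_statistics hint) for the execution being examined:
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));

-- On 11.2, if the Tuning Pack is licensed, SQL Monitor shows the same
-- information while the statement is still running:
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '&sql_id') FROM dual;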
Regards -
Hi all
I'm quite new to database administration. My problem is that I'm trying to import a dump file, but one of the tables is taking too much time to import.
Description:
1. The export was taken from a source database on Oracle 8i; its character set is WE8ISO8859P1.
2. I am importing into 10g with character set UTF8; the national character set is also the same.
3. The dump file is about 1.5 GB.
4. I got an error like "value too large for column", so in the target DB (which is UTF8) I converted all columns from VARCHAR2 to CHAR.
5. While importing, some tables import very fast, but one particular table imports very slowly.
Please help me. Thanks in advance.
Hello,
4. I got an error like "value too large for column", so in the target DB (which is UTF8) I converted all columns from VARCHAR2 to CHAR.
5. While importing, some tables import very fast, but one particular table imports very slowly.
For point *4*: this is typically due to the CHARACTER SET conversion.
You exported the data in WE8ISO8859P1 and are importing into UTF8. In WE8ISO8859P1, characters are encoded in *1 Byte*, so *1 CHAR = 1 BYTE*. In UTF8 (Unicode), characters are encoded in up to *4 Bytes*, so *1 CHAR > 1 BYTE*.
For this reason you'll have to increase the length of your CHAR or VARCHAR2 columns, or add the CHAR option (the default is BYTE) in the column datatype definition of the tables. For instance:
VARCHAR2(100 CHAR)
The NLS_LENGTH_SEMANTICS parameter may also be used, but it's not very well handled by export/import.
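To make the difference concrete, a small illustrative sketch (the table names are made up for this example, not from the thread):

CREATE TABLE t_byte (c VARCHAR2(100 BYTE));  -- capacity: 100 bytes
CREATE TABLE t_char (c VARCHAR2(100 CHAR));  -- capacity: 100 characters

-- In a UTF8 database the character 'é' occupies 2 bytes, so a value of
-- 100 such characters fits the CHAR column but overflows the BYTE column
-- with ORA-12899 ("value too large for column"):
INSERT INTO t_char VALUES (RPAD('é', 100, 'é'));  -- succeeds
INSERT INTO t_byte VALUES (RPAD('é', 100, 'é'));  -- raises ORA-12899

-- Session-level default for columns created afterwards:
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;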
So, I suggest you this:
1. Set NLS_LENGTH_SEMANTICS=CHAR on your target database and restart the database.
2. Create all your tables (empty) from a script on the target database (without the indexes and constraints).
3. Import the data into the tables.
4. Import the indexes and constraints.
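As a rough illustration of steps 3 and 4 with the classic imp utility (the file, schema, and log names are placeholders, not from the thread):

Step 3 - load only the rows into the pre-created tables:
imp system/password FILE=expdat.dmp FROMUSER=src_user TOUSER=tgt_user ROWS=Y INDEXES=N CONSTRAINTS=N IGNORE=Y LOG=imp_rows.log

Step 4 - bring in the indexes and constraints afterwards:
imp system/password FILE=expdat.dmp FROMUSER=src_user TOUSER=tgt_user ROWS=N INDEXES=Y CONSTRAINTS=Y IGNORE=Y LOG=imp_indexes.log

IGNORE=Y tells imp to load into the tables that already exist instead of failing on the CREATE TABLE step.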
You'll find more information in the following MOS note:
Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
For point *5*: it may be due to the conversion problem you are experiencing; it may also be due to a special datatype such as LONG.
Also, I have a question: why did you choose UTF8 for your target database and not AL32UTF8?
AL32UTF8 is the character set recommended for Unicode.
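To verify what the target database is actually using, you can query the standard dictionary view (nothing thread-specific here):

SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');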
Hope this helps.
Best regards,
Jean-Valentin