Enhancing performance of BLOB loading

Hi,
I am loading one row of BLOB data using the DBMS_LOB package. Loading 5 GB of BLOB data takes 8:53 minutes. Due to a restriction in the target database, SQL*Loader can't be used.
Is there any other option or method to speed up this BLOB load? Any option is welcome except SQL*Loader. Thank you.. :)
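For context, a minimal sketch of the kind of single-row DBMS_LOB load described above, assuming the source file is visible to the database server through a directory object; the names DATA_DIR, payload.bin and blob_target are placeholders, not from the original post:

DECLARE
  l_bfile BFILE := BFILENAME('DATA_DIR', 'payload.bin');
  l_blob  BLOB;
  l_dest  INTEGER := 1;
  l_src   INTEGER := 1;
BEGIN
  -- Create the target row with an empty LOB and get a locator for it.
  INSERT INTO blob_target (id, payload)
  VALUES (1, EMPTY_BLOB())
  RETURNING payload INTO l_blob;
  -- Stream the whole file into the LOB in one server-side call.
  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADBLOBFROMFILE(l_blob, l_bfile, DBMS_LOB.LOBMAXSIZE, l_dest, l_src);
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/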

user637568 wrote:
Is there any other option or method to speed up this BLOB load? Any option is welcome except SQL*Loader. Thank you.. :)

It does not matter what you use; there are a bunch of static layers here, and these mostly determine the performance.
On the client side - the data of the LOB needs to be read from where? Disk? Mapped drive? The speed the client can read this data is largely dependent on how fast that storage can deliver the data to it.
The client needs to ship that data across to the Oracle server process. What is the network latency between client and server? How many routes and hops does it take for a client packet to reach the server?
The client's Oracle driver packages the LOB payload into TCP packets. How effective is this? Does it stuff a TCP packet as full as possible? Ideally for network performance, one wants large packets carrying the LOB data to the server, and not a gazillion small packets each with only a couple of bytes of LOB data.
On the server side, Oracle needs to write that LOB data stream it receives to disk and commit it. Just how effective is I/O on the storage used by Oracle? How many other processes are hitting that same I/O layer?
You need to look at all these layers and ensure that each one of these is configured and used optimally. The elapsed time of the end-to-end process will be determined by the slowest layer. In such a case, one can consider using parallel processing (assuming that the layers have the capacity).
For example, the client reads the source data, sees it is 8GB in size and decides to use 16 processes to each read and transfer 512MB concurrently to the server - where the PL/SQL code on the server side has the intelligence to re-assemble these 16 chunks (in 16 different client sessions) into a single 8GB chunk for final storage.
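As a rough sketch of that server-side reassembly idea (the staging table blob_chunks and target table blob_target are invented names here; each client session is assumed to have already inserted its chunk with an ordering number):

DECLARE
  l_target BLOB;
BEGIN
  -- Create the final row with an empty LOB and get a locator for it.
  INSERT INTO blob_target (id, payload)
  VALUES (1, EMPTY_BLOB())
  RETURNING payload INTO l_target;
  -- Append the chunks in their original upload order.
  FOR c IN (SELECT chunk
            FROM blob_chunks
            WHERE target_id = 1
            ORDER BY chunk_no) LOOP
    DBMS_LOB.APPEND(l_target, c.chunk);
  END LOOP;
  COMMIT;
END;
/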

Similar Messages

  • Performance problem in loading the Master data attributes 0Equipment_attr

    Hi Experts,
    We have a performance problem in loading the master data attributes 0Equipment_attr. It runs as a pseudo delta (full update), and the same InfoPackage runs with different selections. The problem we are facing is that the load runs 2 to 4 hours in the US morning, but in the US night times it runs for 12-22 hours before finishing successfully, even though it pulls fewer records (which are OK).
    When I checked the R/3-side job log (SM37), the job is running late there too. It shows the first and second IDocs arriving quickly, but the third and fourth IDocs reach BW only after a 5-7 hour gap, before being saved into the PSA and then going to the InfoObject.
    We have user exits for the DataSource, and ABAP routines, but they run fine in little time and the code is not very complex.
    Can you please explain and suggest the steps on the R/3 side and the BW side? How can I fix this performance issue?
    Thanks,
    dp

    Hi,
    check this link for data load performance. Under "Extraction Performance" you will find many useful hints.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    Regards
    Andreas

  • Enhancing performance of system

    I am developing a CCP CAN-based application. However, I am running into performance difficulties (overflow) when only trying to read (multiread) a maximum of 30 frames every 10 ms.
    I am using queues to buffer the CAN data and processing it on a trigger (occurrence). However, the processing needed to decode the data is quite intensive.
    I have taken basic steps to enhance performance, such as reducing processing inside loops, reducing polling time in loops, etc.
    How can I improve further, and are there specific Windows settings I should consider to increase processor time for the LabVIEW application?

    Hello Elmo,
    Thank you for contacting National Instruments.
    There are a number of causes that could lead to an overflow error. Please refer to the following KnowledgeBase articles concerning NI-CAN queues.
    http://digital.ni.com/public.nsf/websearch/FB1B03046CE09C53862568FE0052EAFB?OpenDocument
    http://digital.natinst.com/public.nsf/websearch/CC36BA1DD421EC23862569060054F6E2
    Some ways around the overflow error are to increase buffer size, decrease scan rate, or get a faster processor. Your buffer size should be 2 - 4 times bigger than your scan rate.
    I hope this helps.
    Sean C.
    Applications Engineer
    National Instruments

  • Assigning buttons on Lenovo USB Enhanced Performance Keyboard

    I have recently made Mozilla Firefox my default browser (it's much quicker to start up than Explorer), however I would still like the Internet button on my USB Enhanced Performance Keyboard to open Explorer, which my son prefers (it currently opens Firefox). When I try to reconfigure the button I can't find Internet Explorer amongst the choices I am given.
    I have reconfigured the Lock desktop button to open Firefox. I am sure I had the keyboard set up to do what I want it to do before I reformatted the hard drive yesterday.

    I'm glad to report that I've been using this new driver for a month now with no issues... 

  • Keyboard - Enhanced Performance vs Preferred Pro - reliability, tactility, etc.

    I'm comparing...
    73P2620  Lenovo Enhanced Performance USB Keyboard - US English  39.00
    73P5220  Lenovo Preferred Pro USB Keyboard - US English  29.00
    I don't need the extra USB hub capability, so judging from the specs I would choose the Preferred Pro model. However, can anyone with experience using either of them chime in and comment on everything else that matters: reliability, tactility and feel, lag, joy of use, etc.?
    Thank you.
    ~Paul
    Lenovo ThinkPad T410s - 2901CTO
    IBM ThinkPad T41p - 2373GGU
    Solved!
    Go to Solution.

    Apart from the extra features, the feel and tactility of the two keyboards are the same.
    The keyboards are okay, but the feel and tactility are not the industry's best. As for reliability, they are quite reliable. I rather prefer the Microsoft keyboards.
    Regards,
    Jin Li
    May this year, be the year of 'DO'!
    I am a volunteer, and not a paid staff of Lenovo or Microsoft

  • Deploying USB Enhanced Performance [Wired] Keyboard settings

    Hi. Is there any way to deploy the settings of the USB Enhanced Performance Keyboard so that all computers with the keyboard get the default settings? We have 500+ computers to deploy to.
    Solved!
    Go to Solution.

    Hi,
    there is no easy solution, but this might be worth trying since I see it as feasible; maybe you can even use Migration Assistant if you do not have any other tool (I'm not an expert on Migration Assistant).
    1. Manually export the registry settings from the current user.
    2. In the exported registry file, change “HKEY_CURRENT_USER” to “HKEY_LOCAL_MACHINE”.
    3. Remove any reference to “\users\username”.
    4. Then import the modified registry file on another machine.
    Please see the attached file.
    Let me know if this is sufficient and it helps.
    Jan Solar
    Product Marketing
    (not a technical support)
    http://blog.lenovo.com/author/jsolar/
    Attachments:
    reg_settings.png ‏223 KB

  • Router can perform static route load balance

    Dear All
    I have a question I am not sure about, and I need your ideas and help. The question is whether a router can perform static-route load balancing. I tested it, and the result showed no. If you have any experience with it, could you share it with me? I have also posted my results here. Thank you.

    Normally they can, but you generally need different next hops. How did you "test"?

  • Enhanced Performance Wireless Keyboard

    I recently got an Enhanced Performance Wireless Keyboard and Mouse, planning to use them with my new ThinkPad W500. The mouse works without problems.
    However, the keyboard only works properly when the language for non-Unicode programs is set to US English. When the language is changed to Arabic, the keyboard loses characters, freezes for several seconds, etc. I tried it on the W500 with Vista Business SP1, on a T41p with XP SP3, and, more interestingly, on a Dell Latitude D420 with XP SP3. On all those laptops I got identical results. With the keyboard language set to Arabic, the Arabic characters do show when typing in Word or Notepad.
    However, when connected to my desktop (DELL Optiplex GX520, XP SP3), the keyboard works correctly with both US English and Arabic selected as the non-Unicode locale.
    The keyboard P/N is 41A5219, S/N 00012663.
    I tried another wireless keyboard/mouse set (Genius TwinTouch LuxeMate Pro); it works OK on the T41p regardless of language.
    Thinkpad W500 4058-CTO Win 7 x64 Professional

    We purchased 10 ThinkPads with the same keyboard and mouse. The keyboard is a bit slow, but the mouse is absolutely horrible; the choppy motion and constant falling asleep are killing productivity. Has anyone had any success with a fix for this? Need help ASAP.

  • Lenovo Enhanced Performance USB Keyboard - quality problem or .....?

    Last week I bought an SK-8815 (Lenovo Enhanced Performance USB, PN: 41A4975) with a German keyboard layout, as new and originally packaged goods, from a dealer on a well-known online auction site. When the delivery arrived here the day before yesterday, I found that the quality of the keyboard leaves a lot to be desired: the keys for controlling the music player at the top left sit extremely loose. What stands out even more is the lettering on the keyboard: it is printed rather crookedly and also varies in brightness. I tried to capture the positioning of the printing with my smartphone's camera. It is most noticeable with the 'Ö' and the other umlauts, which fall out of line both horizontally and vertically. I also checked the keyboard's serial number on the Lenovo warranty status page, where I was told there was nothing to report ('No result - please check and try again'). Is the quality simply rather poor, or have I possibly fallen for a counterfeit product? If it is a counterfeit, it is remarkable that it works flawlessly with the configuration software for the Enhanced Performance Keyboard from the Lenovo homepage.

    I have 2 long .LOG files.
    If needed, I will upload them.

  • Performing an HRMS Load

    Hi friends,
    I'm new to the Informatica/OBIA/DAC world and I'm still learning it. I am about to perform an ETL load for HR Analytics.
    I have an Oracle R12 source instance on an Oracle 11g database, and my Oracle target database is 10.2.0.1.0. I need to perform an HRMS load from source to target using Informatica. So, as a first step, how do I connect to the R12 source instance and import its HRMS data into DAC to run the ETL and load my target database?
    Hope you understand.
    Thanks in Advance.
    Regards,
    Saro

    Dear Svee,
    Thanks for the reply again. Yes, as you said, I checked the custom properties of my Integration Service.
    They are as below:
    Name: value
    SiebelUnicodeDB: apps@test biapps@obia
    overrideMpltVarWithMapVar: yes
    ServerPort: 4006
    SiebleUnicodeDBFlag: No
    As you see, it is already set to 'Yes'.
    For one of my failed workflows, "SDE_ORA_Flx_EBSValidationTableDataTmpLoad", I right-clicked it in the Workflow Monitor and selected Get Workflow Log, and in it I got the following details:
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36435 : Starting execution of workflow [SDE_ORA_Flx_EBSSegDataTmpLoad] in folder [SDE_ORA11510_Adaptor] last saved by user [Administrator].
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_44206 : Workflow SDE_ORA_Flx_EBSSegDataTmpLoad started with run id [463], run instance name [], run type [Concurrent Run Disabled].
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_44195 : Workflow [SDE_ORA_Flx_EBSSegDataTmpLoad] service level [SLPriority:5,SLDispatchWaitTime:1800].
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_44253 : Workflow started. Clients will be notified
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36330 : Start task instance [Start]: Execution started.
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36318 : Start task instance [Start]: Execution succeeded.
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36505 : Link [Start --> SDE_ORA_Flx_EBSSegDataTmpLoad]: empty expression string, evaluated to TRUE.
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36388 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] is waiting to be started.
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36682 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: started a process with pid [4732] on node [node01_BIAPPS].
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36330 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: Execution started.
    2012-07-23 10:19:02 : ERROR : (1164 | 1380) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : VAR_27086 : Cannot find specified parameter file [D:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\SDE_ORA11510_Adaptor.SDE_ORA_Flx_EBSSegDataTmpLoad.txt] for [session [SDE_ORA_Flx_EBSSegDataTmpLoad.SDE_ORA_Flx_EBSSegDataTmpLoad]].
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6793 Fetching initialization properties from the Integration Service. : (Mon Jul 23 10:19:01 2012)]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [DISP_20305 The [Preparer] DTM with process id [4732] is running on node [node01_BIAPPS].
    : (Mon Jul 23 10:19:01 2012)]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [PETL_24036 Beginning the prepare phase for the session.]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6721 Started [Connect to Repository].]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6722 Finished [Connect to Repository].  It took [0.21875] seconds.]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6794 Connected to repository [Oracle_BI_DW_Base] in domain [Domain_BIAPPS] as user [Administrator] in security domain [Native].]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6721 Started [Fetch Session from Repository].]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6722 Finished [Fetch Session from Repository].  It took [0.140625] seconds.]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6793 Fetching initialization properties from the Integration Service. : (Mon Jul 23 10:19:02 2012)]
    2012-07-23 10:19:02 : ERROR : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [CMN_1761 Timestamp Event: [Mon Jul 23 10:19:02 2012]]
    2012-07-23 10:19:02 : ERROR : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [PETL_24049 Failed to get the initialization properties from the master service process for the prepare phase [Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: Unable to read variable definition from parameter file [D:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\SDE_ORA11510_Adaptor.SDE_ORA_Flx_EBSSegDataTmpLoad.txt].] with error code [32694552].]
    2012-07-23 10:19:04 : ERROR : (1164 | 364) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36320 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: Execution failed.
    2012-07-23 10:19:04 : WARNING : (1164 | 364) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36331 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] failed and its "fail parent if this task fails" setting is turned on.  So, Workflow [SDE_ORA_Flx_EBSSegDataTmpLoad] will be failed.
    2012-07-23 10:19:04 : ERROR : (1164 | 364) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36320 : Workflow [SDE_ORA_Flx_EBSSegDataTmpLoad]: Execution failed.
    Is this log pointing to the correct reason for the task failure? If so, from the above log, what is the reason the workflow failed?
    Kindly help me with this, svee.
    Thanks for your help.
    Regards,
    Saro

  • Blob loaded but unable to be viewed.

    Hello friends
    I have a peculiar problem, maybe you can help me find an answer
    I have an application that uses BLOBs on my local machine. I have a hosted service provider who has a large database and offers a workspace that I can use. I am trying to load my application data to this hosting site.
    We used the exp utility to send the data over to the hosting machine. The data is imported correctly at the other end, and it looks like the BLOBs are too.
    For some reason the application is not able to show the documents that were stored in the BLOBs.
    My friend on the apps-hosting side checked our database and found that the BLOBs did come over in the last import; he is not sure why the application is not picking them up.
    The listing below, from the hosting database, shows ECM_POLICY to prove that the documents came over. But the application does not show the policy document when you click on the link to it.
    SQL> select * from user_lobs where segment_name = 'ECM_POLICY';
    TABLE_NAME   COLUMN_NAME           SEGMENT_NAME                TABLESPACE_NAME   INDEX_NAME                  CHUNK   PCTVERSION   RETENTION
    ECM_POLICY   UPLOADED_POLICY_DOC   SYS_LOB0000263416C00018$$   GRACE_IM          SYS_IL0000263416C00018$$    8192    10           900
    ECM_POLICY   UPLD_SUPP_DOC_1       SYS_LOB0000263416C00023$$   GRACE_IM          SYS_IL0000263416C00023$$    8192    10           900
    ECM_POLICY   UPLD_SUPP_DOC_2       SYS_LOB0000263416C00028$$   GRACE_IM          SYS_IL0000263416C00028$$    8192    10           900
    (For all three rows: CACHE NO, LOGGING YES, IN_ROW YES, FORMAT NOT APPLICABLE, PARTITIONED NO; FREEPOOLS is null.)
    SQL> select * from user_segments where segment_name in ('SYS_LOB0000263416C00018$$', 'SYS_IL0000263416C00018$$');
    SEGMENT_NAME                SEGMENT_TYPE   TABLESPACE_NAME   BYTES     BLOCKS   EXTENTS   INITIAL_EXTENT
    SYS_IL0000263416C00018$$    LOBINDEX       GRACE_IM          65536     8        1         65536
    SYS_LOB0000263416C00018$$   LOBSEGMENT     GRACE_IM          6291456   768      21        65536
    (For both rows: MIN_EXTENTS 1, MAX_EXTENTS 2147483645, FREELISTS 1, FREELIST_GROUPS 1, BUFFER_POOL DEFAULT.)
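    (A quick sanity check, not from the original post: compare the stored LOB lengths on the source and the hosted database; the table and column names are taken from the listing above.)
    SELECT COUNT(*) AS docs_with_data
    FROM ecm_policy
    WHERE DBMS_LOB.GETLENGTH(uploaded_policy_doc) > 0;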
    Would anyone be able to tell us why this could be happening?
    Thanks
    Laxmi

    A blank page is usually solved by performing a power reset of the router. You may also have to reboot all PCs after the power reset. Since this is a problem on all PCs, it's likely a router problem, unless you haven't installed network adapter drivers on any PC after the Windows 7 upgrade.
    If a power reset doesn't work, you may need to perform a factory reset using the reset button on the back of the router.

  • Poor performance of BLOB queries using ODBC

    I'm getting very poor performance when querying a BLOB column using ODBC. I'm using an Oracle 10g database and the Oracle 10g ODBC driver on Windows XP.
    I create two tables:
    create table t1 ( x int primary key, y raw(2000) );
    create table t2 ( x int primary key, y blob );
    Then I load both tables with the same data. Then I run the following queries using ODBC:
    SELECT x, y FROM t1;
    SELECT x, y FROM t2;
    I find that the BLOB query takes about 10 times longer than the RAW query to execute.
    However, if I execute the same queries in SQL*Plus, the BLOB query is roughly as fast as the RAW query. So the problem seems to be ODBC-related.
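    For reference, a minimal sketch of how both tables could be filled with identical payloads so the comparison stays apples-to-apples; the row count and filler byte are arbitrary assumptions:
    DECLARE
      -- Same 2000-byte payload for both the RAW and the BLOB column.
      l_payload RAW(2000) := UTL_RAW.CAST_TO_RAW(RPAD('x', 2000, 'x'));
    BEGIN
      FOR i IN 1 .. 1000 LOOP
        INSERT INTO t1 (x, y) VALUES (i, l_payload);
        INSERT INTO t2 (x, y) VALUES (i, TO_BLOB(l_payload));
      END LOOP;
      COMMIT;
    END;
    /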
    Has anyone else come across this problem?
    Thanks.

    Hi Biren,
    By GUID, are you referring to the Oracle Portal product?

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the T-codes. It's urgent.
    What data-loading performance issues do we need to take care of? Please explain and let me know the T-codes. It's urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables (see the SQL sketch after this list).
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage, with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
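    To make tips 6 and 9 concrete: the ABAP Dictionary ultimately generates an ordinary database index, whose generic SQL equivalent looks roughly like this (the table and column names are invented for illustration):
    -- Hypothetical secondary index on the selection fields of a DataSource table,
    -- so extraction selections avoid full-table scans.
    CREATE INDEX zequip_sel_idx ON zequipment_attr (fiscal_year, plant);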
    Hope it Helps
    Chetan
    @CP..

  • Performance issue of loading a report

    Post Author: satish_nair31
    CA Forum: General
    Hi,
    I am facing a performance-related problem in some of our reports, where we have to fetch some 1-2 lakh records for display. Initially we passed the dataset as the data source for the report; it takes a hell of a long time to load using this technique. To improve performance, we then passed only the filter condition through the report viewer object. This improved things a lot, but the reports still take some time to load, which is not acceptable. Is there any way to improve the performance?

    Post Author: synapsevampire
    CA Forum: General
    How could you possibly know if you're in the same situation? The original poster didn't include the software or the version being used, whether it's the RDC, etc. That is very likely why they received no responses.
    They also referenced two different methods for retrieving the data; which are you using?
    The trick is to make sure that you are passing all of the WHERE conditions in the SQL to the database.
    You can probably check this using a database tool. But again, there is nothing technical in your post about the database either, so you shouldn't expect quality help.
    -k

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
    Having successfully built my outline by member loading through Essbase Studio, I tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
    In the Studio properties file I typed in oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it had worked, I am also not sure streaming mode is going to be a much faster alternative to non-streaming mode. What I'd like to know is which Essbase settings I can change (either in Essbase or the Studio server) in order to speed up my data load. I am loading into a Block Storage database with 3 dense, 8 sparse, and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took me 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24B. Assuming that in my real application the number of blocks created will be at least 1000 times this, I need to make some changes to the settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size, or multiple threads help to increase the performance? Or what would you suggest I do?
    Thank you very much.

    Hello user13695196,
    (sorry, I no longer remember my system number here)
    Before attempting any optimisation in the Essbase (also Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. Create in your DB source schema a view from your SQL statement (the one behind your data load rule).
    2. Query against this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete. Also count the affected (returned) number of rows, for your information and for future comparison of results.
    If your query runs longer than you think is acceptable, then:
    a) check DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, kindly ask your DBA for help. Usually they can help you very fast.
    (Don't be shy, a DBA is a human being like you and me :-) )
    Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
    One hint in addition:
    We have often had problems when using views for data loads (not only performance, but other strange behaviour too). That's the reason why I prefer to build directly on (persistent) tables.
    Just keep in mind: if nothing else helps, create a table from your view and then query your data from this table for your Essbase data load. Normally, however, this should be your last option. A sketch of both steps follows below.
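    A minimal SQL sketch of the two steps above; the view, table, and column names are placeholders standing in for the SQL behind your load rule:
    -- 1. Wrap the load SQL in a view and time a full fetch against it.
    CREATE OR REPLACE VIEW v_essbase_load AS
      SELECT f.period, f.account, f.amount
      FROM fact_table f
      JOIN dim_period p ON p.period_id = f.period_id;
    SELECT COUNT(*) FROM v_essbase_load;  -- note the row count for later comparison
    -- 2. Last resort: persist the view as a table and load Essbase from it.
    CREATE TABLE essbase_load_stage AS SELECT * FROM v_essbase_load;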
    Best Regards
    (also to you Torben :-) )
    Andre

Maybe you are looking for

  • How do I get Encoder Activation for Premiere Elements 3.0?

    I upgraded to Windows 7 on the same computer and now Adobe wants to rob me for an upgrade in order to activate the same software I've been using.

  • Employee interaction Center issue

    Hi all, if an employee wants to add some documents, for example an offer letter, salary verification letter, etc., with the help of the EIC, how can we achieve/configure this? Any inputs highly appreciated. Thanks, Rahul

  • Envy 14 Beats edition crackling sound

    I have the Envy 14 Beats Edition (dm4t-3000) that keeps making an intermittent crackling or fuzzy sound coming from the back right corner of the computer (around the wireless card). I have looked everywhere on this forum for answers but I h

  • Runbook for Mac OS X

    Hi. I am new to Orchestrator (SCORCH) and interested in finding out whether anyone has created a runbook to run commands on a remote Mac. I am looking for a way to copy and execute a command from a runbook, to keep it simple. Thanks,

  • Weblogic Server 12c Start up issue

    Hi, while I am trying to start the WebLogic 12c server on Windows XP Professional, it is giving the error below. I am trying to learn the Oracle 11g SOA Suite. I have configured SOA Suite 11g Release 1 (11.1.1.6.0). Let me know. I try to work with weblogic 10