Load performance - how to read 0BWTC_C05

Hello everybody,
Right now I'm trying to gather statistics (for the last two months) regarding upload (extraction) times on our BW system (SAP BW 3.5).
What I need is the total runtime of an InfoPackage run; it must correspond to the monitor entries.
In addition to this I need the number of records.
I would like to compare the number of records and the runtime per InfoPackage on a daily basis.
I have tried the Technical Business Content: 0BWTC_C05.
Report: Utilizing WHM per InfoSource (0BWTC_C10_Q314). But I cannot find the link between the monitor entries and the report results.
For the number of records it is fine; it matches one to one.
But for, e.g., InfoSource 2lis_11_vaitm, I have no idea how to read the runtimes.
Any idea how to read this report?
TIA
pawel

Hi Parag!
You could put the RData file into a zip package, upload it, and pass it to the right input port (Script Bundle). The zip will then get unpacked, and you can load it from within your R script; the local folder where it is unpacked is called .\Script Bundle
-Roope 

Similar Messages

  • How to improve query & loading performance.

    Hi All,
    How to improve query & loading performance.
    Thanks in advance.
    Regards
    shoba

    Hi Shoba
    There are a lot of things you can do to improve query and loading performance.
    Please refer to OSS Note 557870: Frequently asked questions on query performance.
    also refer to
    weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    Performance docs on queries:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    Here is the content of the OSS note FAQ on query performance:
    1. What kind of tools are available to monitor the overall Query Performance?
    1. BW Statistics
    2. BW Workload Analysis in ST03N (Use Export Mode!)
    3. Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
    RSA1, choose Tools -> BW statistics for InfoCubes
    (Choose OLAP and WHM for your relevant Cubes)
    3. What kind of tools are available to analyze a specific query in detail?
    1. Transaction RSRT
    2. Transaction RSRTRACE
    4. Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number for all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values.
    5. What can I do if the database proportion is high for all queries?
    Check:
    1. If the database statistics strategy is set up properly for your DB platform (above all for the BW-specific tables)
    2. If the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
    3. If buffers, I/O, CPU, or memory on the database server are exhausted
    4. If Cube compression is used regularly
    5. If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1. If the CPUs on the application server are exhausted
    2. If the SAP R/3 memory setup is done properly (use TX ST02 to find bottlenecks)
    3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1. Again you can use ST03N -> BW System Load
    2. Depending on the time frame you select, you get historical data or current data.
    3. To get to a specific query you need to drill down using the InfoCube name
    4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N values for a specific query?
    (Use Details to get the runtime segments)
    1. High Database Runtime
    2. High OLAP Runtime
    3. High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1. Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would be an indicator for query performance improvement using an aggregate)
    2. Check if the database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
    3. Check if the read mode of the query is unfavourable - Recommended (H)
    11. What can I do if a query has a high OLAP runtime?
    1. Check if a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells")
    2. Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before Aggregation, Virtual Char. Key Figures, Attributes in Calculated Key Figs, Time-dependent Currency Translation) together with a high number of records transferred.
    3. Check if user exit usage is involved in the OLAP runtime.
    4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    5. Check if a proper index on the inclusion table exists
    12. What can I do if a query has a high frontend runtime?
    1. Check if a very high number of cells and formats is transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
    2. Check if the frontend PCs are within the recommendations (RAM, CPU)
    3. Check if the bandwidth for WAN connection is sufficient
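    To make tool no. 3 from question 1 above concrete, here is a minimal ABAP sketch that sums DB and OLAP time per InfoCube from the statistics table. The table name RSDDSTAT and the field names QTIMEDB/QTIMEOLAP are recalled from the BW 3.x statistics content and should be verified in SE11 for your release (the statistics tables were restructured in later releases):
    REPORT zquery_time_split.

    DATA: BEGIN OF ls_stat,
            infocube  TYPE c LENGTH 30,   " character key field for COLLECT
            qtimedb   TYPE p DECIMALS 3,  " DB time of one query execution
            qtimeolap TYPE p DECIMALS 3,  " OLAP time of one query execution
          END OF ls_stat,
          lt_raw LIKE STANDARD TABLE OF ls_stat,
          lt_sum LIKE STANDARD TABLE OF ls_stat.

    * One array fetch of only the needed fields
    SELECT infocube qtimedb qtimeolap
      FROM rsddstat
      INTO TABLE lt_raw.

    * COLLECT sums the numeric fields per character key, i.e. per InfoCube
    LOOP AT lt_raw INTO ls_stat.
      COLLECT ls_stat INTO lt_sum.
    ENDLOOP.

    LOOP AT lt_sum INTO ls_stat.
      WRITE: / ls_stat-infocube, ls_stat-qtimedb, ls_stat-qtimeolap.
    ENDLOOP.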
    Also see these threads:
    How can I increase query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    may be helpful
    Regards
    C.S.Ramesh
    [email protected]

  • How does compression of an InfoCube increase the load performance?

    Hi all
    I see that compression of an InfoCube is mentioned as one of the parameters to improve/increase load performance. If I am not wrong, can someone please explain how compressing a cube improves the load performance?
    Thanks in advance
    Rishi

    Hi,
    As per my information, compression improves query performance, not loading performance.
    When you do compression, records that have the same characteristic values are merged and moved to the E fact table.
    Example:
      Cust No   Mat No   Qty   Value
      C101      M101     10    100
      C101      M101     20    200
    When you do compression, the records are compressed as below:
      C101      M101     30    300
    At query execution, instead of reading two records and aggregating them while producing the output at report level, the query fetches one record directly from the E table. That is how query performance is improved, not loading performance.
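    As an aside, the same merge behaviour can be mimicked in ABAP with the COLLECT statement, which sums the numeric fields of rows whose character-type key fields match, just as compression merges records with identical characteristic combinations (field names below are illustrative):
    DATA: BEGIN OF ls_rec,
            custno TYPE c LENGTH 4,   " character fields form the key
            matno  TYPE c LENGTH 4,
            qty    TYPE i,            " numeric fields are summed
            value  TYPE i,
          END OF ls_rec,
          lt_e LIKE STANDARD TABLE OF ls_rec.

    ls_rec-custno = 'C101'. ls_rec-matno = 'M101'.
    ls_rec-qty = 10. ls_rec-value = 100.
    COLLECT ls_rec INTO lt_e.

    ls_rec-qty = 20. ls_rec-value = 200.
    COLLECT ls_rec INTO lt_e.
    * lt_e now contains one row: C101 M101 30 300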
    Thanks & regards,
    Sathish

  • How to improve class-loading performance for missing classes

    Hi,
    do you have any ideas how we can improve the class-loading performance on a WebLogic 12c running on JRockit? We spend about 20-30 ms per request on class-loading due to the way the Hibernate Criteria API works (it tries to load quite a lot of missing classes). The only thing I came up with was to invert the class-loading using the WebLogic descriptor for those missing classes (which are actually not real classes but parts of a generated JPQL), which does improve performance a bit.
    Thanks
    Dimo


  • How to improve the load performance while using Datasources for the Invoice

    Hi All,
    How can I improve the load performance while using DataSources for the invoice? My invoice load (approx. 0.4 M records) is taking a very long time, nearly 16 to 18 hrs, to update data from R/3 to 0ASA_DS01.
    If I load through a flat file, the same amount of data loads within ~20 min.
    Please suggest how to improve the load performance.
    PS: I have done the InfoPackage settings as per the OSS note.
    Regards,
    Srinivasarao Namburi

    Hi Srinivas,
    Please refer to my blog posting /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction, which gives the details about the package size setting for extractors. I am sure that will be helpful in your case.
    Thanks,
    Divyesh

  • How do we improve master data load performance

    Hi Experts,
    Could you please tell me how we can identify a master data load performance problem and what can be done to improve master data load performance.
    Thanks in Advance.
    Nitya

    Hi,
    - ALPHA conversion is defined at the InfoObject level for objects with data type CHAR.
    A characteristic in SAP NetWeaver BI can use a conversion routine like the routine called ALPHA. A conversion routine converts data that a user enters (in so-called external format) to an internal format before it is stored on the database.
    The most important conversion routine, due to its common use, is the ALPHA routine, which converts purely numeric user input like '4711' into '004711' (assuming that the characteristic value is 6 characters long). If a value is not purely numeric, like '4711A', it is left unchanged.
    We have found that in customers' systems there are quite often characteristics using a conversion routine like ALPHA that have values on the database which are not in internal format; e.g. one might find '4711' instead of '004711' on the database. It could even happen that there is also a value '04711', or ' 4711' (leading space).
    This possibly results in data inconsistencies, also for query selection; i.e. if you select '4711', this is converted into '004711', so '04711' won't be selected.
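    For illustration, here is a minimal ABAP sketch of the ALPHA routine using the standard conversion function modules (the 6-character length matches the example above):
    DATA: lv_value TYPE c LENGTH 6 VALUE '4711'.

    * External -> internal format: '4711' becomes '004711'
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = lv_value
      IMPORTING
        output = lv_value.

    * Internal -> external format: '004711' becomes '4711' again
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
      EXPORTING
        input  = lv_value
      IMPORTING
        output = lv_value.
    Note that '04711' run through CONVERSION_EXIT_ALPHA_INPUT also yields '004711', which is why values stored in the wrong format break selections.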
    - The check for referential integrity occurs for transaction data and master data if they are flexibly updated. You determine the valid InfoObject values.
    - SID generation is a must when loading transaction data with respect to master data, so that the master data can be read at BEx level.
    Regards,
    rvc

  • How to improve loading performance

    Hi,
    How can I improve loading performance?

    Hi Prasanth,
    You have to take a few measures to optimize load performance. A few are listed below.
    -> Consider the packet sizing
    -> Delete indexes
    -> Table Partitioning
    -> Data Model
    -> Load Sequencing
    -> Parallel Processing
    Go through the link
    Business Intelligence Performance Tuning [original link is broken]
    http://help.sap.com/saphelp_nw2004s/helpdata/en/06/b5f8926ba22b45bc9eaa589f1c835b/frameset.htm
    Hope it helps and suffices.
    Cheers
    SRS

  • How to Improve DSO loading performance

    Hello,
    I have a DSO with 3 InfoSources. This DSO is generic, meaning it is based on generic DataSources. Daily we have a full upload (the last 2 months of data). Initially it took around 55 min to load the data, but nowadays it takes 2.5 hrs daily.
    Can you please tell me how I can improve the performance, in other words how I can reduce the time?
    Please give some solution or document to resolve this.
    amit

    Hi,
    General tips you can try to improve the data load performance:
    1. If they are full loads, then see if you can make them delta loads.
    2. Check if there are complex routines/transformations being performed in any layer. In that case, see if you can optimize that code with the help of an ABAPer.
    3. Ensure that you are following the standard procedures in the chain, like deleting indexes/secondary indexes before loading, etc.
    4. Check whether the system processes are free when this load is running.
    5. Try making the load as parallel as possible if the load is happening serially. Remove the PSA if not needed.
    6. Go to Manage ODS -> Activate -> Activate in parallel and increase the number of processes from there. For direct access, try transaction RSODSO_SETTINGS.
    7. Uncheck the BEx Reporting checkbox in the ODS if not required.
    Check the data packet sizing and also the number range buffering, PSA partition size, and the upload sequence, i.e. always load master data first, perform the change run, and then run the transaction data loads.
    Use InfoPackages with disjoint selection criteria to parallelize the data export.
    Complex database selections can be split into several less complex requests.
    Check this doc on BW data load performance optimization:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    BI Performance Tuning
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    SAP Business Intelligence Accelerator: A High-Performance Analytic Engine for SAP NetWeaver Business Intelligence
    http://www.sap.com/platform/netweaver/pdf/BWP_AR_IDC_BI_Accelerator.pdf
    BI Performance Audit
    http://www.xtivia.com/downloads/Xtivia_BIT_Performance%20Audit.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10564d5c-cf00-2a10-7b87-c94e38267742
    ODS Query Performance  
    Thanks,
    JituK

  • What is S.M.A.R.T. and how to read S.M.A.R.T. attributes

    Read the bottom for updated information/amendments
    Hi everyone,
    S.M.A.R.T. is Self-Monitoring, Analysis and Reporting Technology.
    1. S.M.A.R.T. info websites
    OK, now we all know what S.M.A.R.T. stands for. For more details about S.M.A.R.T., here is a comprehensive non-technical explanation about S.M.A.R.T. by pcguide.com. Maxtor has published a white paper on S.M.A.R.T. too. And this is from Seagate. Anyhow, I am not going to discuss whether S.M.A.R.T will protect your harddisk from failure, etc etc here. Let us focus more on S.M.A.R.T. itself.
    2. S.M.A.R.T. monitoring tools
    S.M.A.R.T. data is stored as tabulated data in registers somewhere in the harddisk. The tools below support reading these register values from a S.M.A.R.T.-enabled harddisk.
    1. SpeedFan (sign-up is required for download, but it's FREE)
    2. Active SMART
    3. SiSoftware SANDRA 2002.6.8.97 SP1 (older versions don't support it)
    3. S.M.A.R.T. Tabulated Information
    Here is a screenshot captured with SpeedFan.
    Let us go through the screenshot. As can be seen, there are 5 columns - Attribute, Value, Worst, Warn and Raw.
    Attribute
    describes the meaning of the values. As mentioned above, the S.M.A.R.T. data is stored as tabulated data in registers. Each attribute represents one register ID, e.g. Raw Read Error Rate is register ID 01, register ID 03 is Spin-up Time, etc.
    Note that the register IDs are not displayed by SpeedFan, but they are displayed by SANDRA and Active SMART. I have no idea how many S.M.A.R.T. attributes there are; Active SMART states it can detect more than 35 attributes!
    However, some attributes are manufacturer specific, so some attribute names shown in these SMART tools might not represent the true meaning of the values!
    Value
    is the current relative value of the attribute read from the registers.
    Worst
    is the worst value ever achieved.
    Warn
    is the critical threshold value of the attribute. If the Value has reached or gone over this threshold, the harddisk is very probably in trouble. If SMART is enabled in the BIOS, SMART will alert the user on the POST screen with some manufacturer-specific error code. You may then need to refer to your manufacturer.
    Note: Value, Worst and Warn are all relative values, such as percentages, not actual counts. I have no idea how to calculate these values. This is the very S.M.A.R.T. algorithm, isn't it? ?(
    Raw
    is, in fact, the most understandable value here! It represents the actual count of the attribute. SpeedFan displays this raw value as a hexadecimal number. For example, a Power On Hours Count of "AA8" is 2728 in decimal, meaning that the harddisk has been powered on for 2728 hours!! Some Raw values represent an average rate instead, such as "CRC Error Rate".
    4. S.M.A.R.T. Attributes In Detail
    Hopefully by now you have the basic idea of how to read the tabulated data. I will try my best to go through the attributes one by one, which should help you understand S.M.A.R.T. better and, of course, start to appreciate it.
    Raw Read Error Rate represents the condition of the physical disk, based on raw (physical) read errors, and physical surface defects.
    Spin Up Time is the time taken for the hdd to spin-up. more info
    Start/Stop Count is the number of start/stop cycles of the hdd. more info
    Reallocated Sector Count - the number of sectors that have been reallocated. A surface scan that finds a bad sector will increase this count. This drive already has 7 bad sectors!
    Seek Error Rate - how often the drive failed to locate the data (seeking).
    Power On Hours Count - the number of hours the HDD has been powered on.
    Spin Retry Count - how many times your drive needed to retry to get the drive platter spinning. If this value is more than 1, your drive is in seriously bad condition!
    Calibration Retry Count - the number of times your drive performed a calibration retry. I am not sure if a low-level format would increase this value.
    Power Cycle Count - the number of times the hdd has been powered on. On and off = 1 cycle.
    Read Soft Error Rate - should be Soft Read Error Rate. Similar to Raw Read Error Rate, but this one is at the logical level, such as errors occurring in the hdd buffer, etc.
    Temperature - the drive temperature. Forget the relative values; read it from the Raw value -- "2D" in this example, which is 45C. But my SpeedFan displayed 47C at that time!! SpeedFan seems to produce a +-2C error from the actual reading once in a while. :P
    Hardware ECC Recovered is the number of times ECC correction was performed on the data.
    Reallocated Event Count - similar to Reallocated Sector Count, but this one is on the data.
    Current Pending Sector - the number of sectors currently pending. But what is a pending sector?? ?( ?(
    Offline Correctable - which is Off-line Scan Uncorrectable Sector Count. Again, what is an off-line scan?
    UltraATA CRC Error Rate - which is, in fact, a CRC error count instead of a rate. As shown, there have already been "3C3", i.e. 963, errors!!!
    There are more S.M.A.R.T. attributes, such as
    Throughput Performance - again, this relative value surely comes from some weird algorithm.
    Seek Time Performance - some algorithm has been used to calculate this performance value.
    Power Off Retract and Load Cycle Count are IBM HDD-specific features -- unloading the head off the platters when powering off. More info on head load/unload cycles.
    5. SpeedFan S.M.A.R.T. Fitness and Performance bars
    I should not comment on this too much because this is Almico's work. I am not sure how he calculates the "fitness" and "performance", though. It is probably based on a mathematical relation between the current Values and the threshold Warns.
    For most hdds, like Maxtor's and Seagate's, the fitness bar already sits around 50% out of the box. But for IBM hdds, the fitness bar is always around 100%. This is because IBM hdds have their threshold/Warn values set so unrealistically high that they are quite unreachable even after long-term use, while for other brands the threshold values are more realistic. Thus the fitness bar in particular does not tell the true fitness of the HDD. Take it as a reference only, and always refer back to the Raw values to determine the fitness.
    6. Conclusion: Judging HDD fitness by our own selves!
    Now we all know what the attributes are, so everyone should have a basic idea of how to judge HDD health on their own, depending on the attributes they are reading. S.M.A.R.T. itself, however, defines "fitness" and "performance" based on its own algorithm.
    We can categorize the attributes into:
    Error Related Attributes
    "UltraATA CRC Error Rate", "Raw Read Error Rate", "Raw Soft Error Rate", "Hardware ECC Recovered Count", "Reallocated Sector Count", "Reallocated Event/Data Count", "Offline Correctable"
    -- these tell how often those errors occurred. For the example above, this Maxtor harddisk should be RMA-ed for its high CRC Error Rate count.
    Drive Fitness Attributes
    Spin Up Time, Start/Stop Count, Seek Error Rate, Power On Hours Count, Spin Retry Count, Calibration Retry Count, Power Cycle Count, Power Off Retract Count, Load Cycle Count
    -- check "spin retry count" and "Seek Error Rate", any value other than zero is really bad.
    Other Attributes
    I have no idea what they are for except Temperature.
    In conclusion, understanding the S.M.A.R.T. attributes helps you judge your HDD's fitness yourself, rather than waiting for S.M.A.R.T. to alert you of a severe error. That might already be too late. 8o
    Thanks for reading.
    Edited:
    1. for better reading pleasure
    2. added Conclusion
    3. added explanations about SpeedFan SMART Fitness and Performance bar.

    Quote
    Originally posted by WarLord
    I like this HD tool. I use it every day now. The temperature readings are great: HD temp, CPU temp and even the system temp, a nice added feature to the monitoring. And this is an alternative to enabling SMART in the BIOS? That's the way I'm understanding it. Is that right, Maesus? Because I have it disabled in the BIOS. She went through a lot of trouble here. Thank you
    Well from my observation, whether SMART is disabled or enabled in the BIOS, SMART is always working within the HDD itself.
    Basically SMART is acting like a black box, monitoring and tabulating the HDD condition from time to time, and its attributes are only fully revealable by the manufacturer. SpeedFan's SMART status only displays the partial information that is displayable. Some attributes are hidden, ~OR~ the attributes' locations differ from one HDD brand to another, such that some values don't correspond to the attribute meaning at all.
    It is very doubtful that enabling SMART in the BIOS will drag down performance. Just like a transport bus (yeah, a real bus that fetches passengers :P ), with or without the black box installed, it can't be helped if the driver wants to speed. :P

  • What are the better load/performance testing tools available for Flex Application with BlazeDS RO?

    My application is designed with Flex 3, ActionScript 3, and BlazeDS Remote Objects.
    I tried OpenSTA, but I can't do dynamic parameterization in its generated scripts because the responses of the calls are binary values, and we also can't get the response using the SCL language.
    While testing OpenSTA with HTTPService, I can do dynamic parameterization and get the response.
    Can anyone give information on the questions below:
    Can we do dynamic parameterization with OpenSTA for Flex Remote Objects?
    And what are the better load/performance tools available for Flex Remote Objects?

    Your approach is fine, depending on how many and what type of CFCs you are talking about. If they are "singletons" - that is, only one instance of each CFC is needed to be in memory and can be reused/shared from multiple parts of your application - caching them in the application scope is common.  Just make sure they are thread safe ("var" or local.* all your method variables).
    You might consider taking advantage of a dependency injection framework, such as DI/1 (part of the FW/1 MVC framework), ColdSpring, or WireBox (a module of the ColdBox platform that can be used independently).  They have mechanisms for handling and caching singletons.  Then you wouldn't have to go to the application scope to get your CFC instances.
    -Carl V.

  • Socket BEA-000438 Unable to load performance pack.....

    We are very familiar with this message and in general know how to resolve this.
    We are seeing this on WLS 10.3.4 64 bit, on AIX 7.1, on Java 1.6 64 bit SR9_FP1.
    The weird thing is that if we switch to Java 1.6 SR9 (not FP1), things are fine. First level of Oracle support was flabbergasted that someone is running WLS 10.3.4 on AIX 7.1! Anyway, that confusion is all taken care of now; technically, this is a supported version. But still no good answer, other than a suggestion to contact IBM (the Java vendor). That is going to be a mountain to climb.
    Anyone has any thoughts? Why would this version of Java find it difficult to load the .so files?
    Thanks

              Can you post more details?
              Sergi
              Jiffy <[email protected]> wrote:
              >error:
              > <2004-3-12 15:48:54 CST> <Error> <Socket>
              > <BEA-000438> <Unable to load performance pack. Using Java I/O instead.
              > Please ensure that wlntio.dll is in: 'D:D:/bea/weblogic81/server/bin'>

  • Unable to load performance pack. Using Java I/O instead. Please ensure...

    Hi there.
    I am running the WL 1033 development version using the 64-bit 1.6 JDK (jdk-6u21-windows-x64) and the performance is terrible. I have upped my mem settings to this:
    set MEM_ARGS=-Xms1024m -Xmx1408m -XX:PermSize=1024m -XX:MaxPermSize=1024m
    and after a couple of deploys, I still get out of memory errors.
    I notice this message on startup: <BEA-000438> <Unable to load performance pack. Using Java I/O instead. Please ensure that wlntio.dll is in:
    Could this be part of my performance issues? If so, how do I fix it?
    I followed the install directions in the README.txt file from this zip file: wls1033_dev.zip
    Mike

    Hi Mike,
    If you are using a 64-bit Windows operating system, then make sure that you add the following directory to the java.library.path:
    Directory: E:\bea1033\wlserver_10.3\server\native\win\64
    Example: In your server's start script, please add the following to MEM_ARGS or JAVA_OPTIONS:
    set MEM_ARGS= -Xms1024m -Xmx1408m -XX:PermSize=1024m -XX:MaxPermSize=1024m -Djava.library.path=E:\bea1033\wlserver_10.3\server\native\win\64
    Thanks
    Jay SenSharma
    http://middlewaremagic.com/weblogic (Middleware Magic Is Here)

  • Increase no. of BGP while data load and how to bypass the DTP in a process chain

    Hello  All,
    We want to improve the performance of the loads. Currently we are loading the data from an external database through a DB link. Just to mention, we are on a BI 7 system. We are bypassing the PSA to load the data as quickly as possible. Unfortunately we cannot use the PSA, because load times are longer when we use it. So we are directly accessing views on the external database. The external database is also indexed as per our requirements.
    Currently our DTP is set to run on 10 parallel processes (in the DTP settings for the batch manager, with job class A). Even though we set it to 10, we can see the loads are running on only 3 or 4 parallel background processes. Not sure why. Does anyone know why it behaves like that and how to increase them?
    I want to split the load into three (different DTPs with different selections), and all three will load the data into the same InfoProvider in parallel. We have a routine in the selection that looks at a table to get the respective selection conditions, and all three DTPs will kick off in parallel as part of the process chain.
    But in some cases we only get data for two DTPs or one DTP (depending on the selection conditions). In this case, is there any way in a routine or in the process chain to say that if there is no selection for a DTP, then that DTP is ignored or set to success, and the process chain continues?
    Really appreciate your help.

    Hi
    Sounds like a nice problem…
    Here is a response to your questions:
    Before I start, I just want to mention that I do not understand how you are bypassing the PSA if you are using a DTP. Be that as it may, I will respond regardless.
    When looking at performance, you need to identify where your problem is.
    First, execute your view directly on the database. Ask the DBA if you do not have access. If possible, perform a database explain on the view (this can also be done from within SAP… I think). This step is required to ensure that the view is not the cause of your performance problem. If it is, we need to implement steps to resolve that.
    If the view performs well, consider the following SAP BI ETL design changes:
    1. Are you loading deltas or full loads? When you have performance problems, the first thing to consider is making use of the delta queue (or changing the extraction to send only deltas to BI).
    2. Drop indexes before the load and re-create them after the load.
    3. Make use of the BI 7.0 write-optimized DSO. This allows for much faster loads.
    4. Check if you do ABAP lookups during the load. If you do, consider loading the DSO table that you are selecting from into memory and changing the lookup to read the table in memory instead (see the sketch after this list). This will save tremendous time in terms of DB I/O.
    5. This will have cost implications but the BI Accelerator will allow for much faster loads
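    To make point 4 concrete, here is a minimal sketch of a BI 7 transformation start routine. The DSO table /BIC/AZMYDSO00, the field names and the lookup key are all illustrative; only the SOURCE_PACKAGE structure comes from the generated routine frame:
    * Buffer the lookup DSO once per data package, then read it with a
    * binary search instead of issuing one SELECT per record.
    TYPES: BEGIN OF ty_lookup,
             doc_number TYPE c LENGTH 10,
             status     TYPE c LENGTH 1,
           END OF ty_lookup.

    DATA: lt_lookup TYPE STANDARD TABLE OF ty_lookup,
          ls_lookup TYPE ty_lookup.

    FIELD-SYMBOLS: <source_fields> LIKE LINE OF SOURCE_PACKAGE.

    IF SOURCE_PACKAGE IS NOT INITIAL.
    * One DB access for the whole package
      SELECT doc_number status
        FROM /bic/azmydso00
        INTO TABLE lt_lookup
        FOR ALL ENTRIES IN SOURCE_PACKAGE
        WHERE doc_number = SOURCE_PACKAGE-doc_number.

      SORT lt_lookup BY doc_number.

      LOOP AT SOURCE_PACKAGE ASSIGNING <source_fields>.
        READ TABLE lt_lookup INTO ls_lookup
             WITH KEY doc_number = <source_fields>-doc_number
             BINARY SEARCH.
        IF sy-subrc = 0.
          <source_fields>-status = ls_lookup-status.
        ENDIF.
      ENDLOOP.
    ENDIF.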
    Good luck!

  • I have an iPhone with iOS 6.0.2. My iPhone is not able to check for updates; it just shows "checking for updates" with a loading sign. How can I upgrade to iOS 7? Help pls

    I have an iPhone with iOS 6.0.2.
    My iPhone is not able to check for updates… it just shows "checking for updates" with a loading sign.
    How can I upgrade to iOS 7?
    Help pls

    I agree. I apologize if I sounded skeptical and attacking; in no way did I mean it to sound that way.
    I completely agree on the software-hardware combo issue; this is why Apple killed off 1G and 2G devices, because newer software revisions would never run well, if at all, on such ancient devices. Can you imagine running iOS 6 on an iPhone/iPod touch 1G? That would be terrible. iOS 3 already runs terribly on them, and they are now generally rendered useless due to newer App Store requirements (most apps require 4.3+ now, allowing Apple to kill off the older devices). Forcing newer software on older hardware (Apple, Dell, HP... anything electronic this definitely applies to) will almost always yield less-than-par results. The other part is Apple forcing you to upgrade by intentionally making things obsolete. All companies do that.
    All of this factual info aside, the issue at hand with the 5 is not a software-hardware combo problem; those are most widely seen with the major revisions. Then again, this isn't the first time Apple has gotten caught up in battery drain snafus (iOS 5 was plagued with this as well, and now there are some people desperate to roll back but can't). It basically comes down to the point where keeping the stock software will always yield the best results, even though newer versions provide better features; it all depends on whether the person deems the added features worth a performance hit.
    PS. I'm still at 100%, and I've been using it periodically throughout the day. Your combo worked for you but didn't for me, and it may or may not work for others. As the tech world would say, mileage may vary.
    P.P.S. What model/carrier do you have? Just wondering.

  • Improve data load performance using ABAP code

    Hi all,
    I want to improve my load performance using ABAP code. How do I do this? If I write ABAP code in SE38, how can I call it on the BW side? Sample code to improve load performance would be useful. Please guide me.

    There are several points that can improve the performance of your ABAP code:
    1. Avoid using the SELECT...ENDSELECT construct; use SELECT ... INTO TABLE instead.
    2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
    3. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
    4. Avoid using nested SELECTs and SELECT statements within LOOPs.
    5. Avoid using INTO CORRESPONDING FIELDS OF. Instead use INTO TABLE.
    6. Avoid using SELECT * and select only the required fields from the table.
    7. Avoid Executing a SELECT multiple times in the program.
    8. Avoid nested loops when working with large internal tables.
    9. Whenever using READ TABLE, use the BINARY SEARCH addition to speed up the search.
    10. Use FIELD-SYMBOLS instead of a work area when there are more than 200 entries in an internal table where some fields are being manipulated.
    11. Use MOVE with individual variable/field moves instead of MOVE-CORRESPONDING.
    12. Use CASE instead of IF/ENDIF whenever possible.
    13. Transaction SE30 (runtime analysis) can be used to measure application performance.
    14. Transaction ST05 can be used to analyze the SQL trace and measure the performance of the SELECT statements in the program.
    15. Start routines can be used when transformation is needed in the data package level. Field/individual routines can be used for a simple formula or calculation. End routines are used when you wish to populate data not present in the source but present in the target.
    16. Always use a WHERE clause for the DELETE statement. To delete records for multiple values, use SELECT-OPTIONS.
    17. Always use 'IS INITIAL' instead of comparing to '', because the initial value is '' for a character but 0 for an integer.
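    A small, self-contained sketch of tips 1, 2, 6 and 9 side by side (VBAK is just an example table; adapt the fields and selection to your own lookup):
    REPORT ztips_demo.

    TYPES: BEGIN OF ty_vbak,
             vbeln TYPE vbak-vbeln,   " sales document number
             erdat TYPE vbak-erdat,   " creation date
           END OF ty_vbak.

    DATA: lt_vbak TYPE STANDARD TABLE OF ty_vbak,
          ls_vbak TYPE ty_vbak,
          lv_date TYPE vbak-erdat VALUE '20080101'.

    * Tips 1, 2 and 6: one array fetch, only the needed fields,
    * volume restricted by a WHERE clause (no SELECT...ENDSELECT, no SELECT *)
    SELECT vbeln erdat
      FROM vbak
      INTO TABLE lt_vbak
      WHERE erdat >= lv_date.

    * Tip 9: sort once, then use a binary search instead of a linear read
    SORT lt_vbak BY vbeln.
    READ TABLE lt_vbak INTO ls_vbak
         WITH KEY vbeln = '0000012345'
         BINARY SEARCH.
    IF sy-subrc = 0.
      WRITE: / ls_vbak-vbeln, ls_vbak-erdat.
    ENDIF.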
    Hope it helps.
