Data retrieval performance

Hi,
I have an attribute dimension in the outline, and it is attached to the sparse dimension "Organization".
I have formulas attached to some members in the "Time" and "Version" dimensions, and they are tagged as dynamic calc members.
If I retrieve data into an Excel sheet without using the attribute dimension, it does not take much time.
But if I use the attribute dimension, the retrieval takes a lot of time.
Any ways of optimizing this problem?
Regards
Jami

As far as I know, retrieving using attribute dimensions always takes a long time. I don't know of any way to optimize their retrieval time. Anyone else?

Similar Messages

  • Multi server data retrieval performance

    Hi experts,
    I have a question regarding data retrieval performance (EVDRE) in a multi-server installation environment on Microsoft SQL Server 2008.
    We have successfully migrated Outlooksoft 4.2 SP03 to SAP BPC 7.0 SP07 for a customer. During this project we have also set up a completely new server environment consisting of:
    Development server: dedicated single server, Windows 2003 Standard SP2 32 bit, SQL Server 2008 SP1 with cumulative update package 6, SAP BPC 7.0 SP07, 2 quad core processors, 4 GB RAM
    QA server: dedicated multi servers - 1 database server (SQL/OLAP), Windows 2003 Standard SP2 64 bit, SQL Server 2008 SP1 with cumulative update package 6, 2 quad core processors, 32 GB RAM - 1 dedicated application/web server, Windows 2003 Standard SP2 32 bit, SQL Server 2008 SP1 with cumulative update package 6 (shared components / reporting services), SAP BPC 7.0 SP07, 2 quad core processors, 4 GB RAM
    Production server: dedicated multi servers - 1 database server (SQL/OLAP/Reporting services), Windows 2003 Standard SP2 64 bit, SQL Server 2008 SP1 with cumulative update package 6, 2 quad core processors, 32 GB RAM - 2 dedicated application/web server, Windows 2003 Standard SP2 32 bit, SQL Server 2008 SP1 with cumulative update package 6 (shared components), SAP BPC 7.0 SP07, 2 quad core processors, 4 GB RAM
    Furthermore, two terminal servers with the SAP BPC client.
    All servers have good performance and we have great times on cube processing and SQL processing. However, to our great surprise we find that the single development server is much faster with a single user retrieving data using EVDRE than the multi-server environment. About 2x as fast. A reporting book with more than 10 sheets and about 25 EVDREs takes about 42 seconds on the development server and 93 seconds on the multi server.
    It seems that EVDRE is taking up a lot of time to communicate between the application server and the database server in a multi-server environment while being much faster on a single server. This is not what we want :-). The network speed in the domain consists of all 1 GB lines, so that should not be the issue.
    Do you have any experience with this? How can we improve the speed of the multi-server environment; are there specific settings?
    Hope to get some useful answers. Thanks in advance.
    Damien
    Edited by: DWiegman on Feb 20, 2010 4:15 PM

    Hi,
       You also have to activate the EVDRE logs at the client and server level, just to understand where the problem is coming from (app server-DB communication or client-app server communication). You also have to check whether there is any proxy or firewall between the client and the application server.
        In case you are using NLB, please verify that affinity is set to true.
        The performance problems can also come from the DB level. Did you verify how many records you have in the WB table for the specific application? Are you keeping the DB in full recovery mode? How big is the log of the database?
        There are a lot of things that can have an impact on this, but it looks to be a setup problem.
    Hope this can help you,
    Mihaela

  • Serious Data retrieval performance problems

    I'm an experienced .NET developer, but new to Oracle. I'm running into an issue with performance that I can't seem to figure out. I've created tables in SQL Server and Oracle and am running against both for testing purposes only. I've got two tables I'm selecting from, both holding nearly identical values. Table names are SIG_INFO and ApprovedScoutRule.
    I'm running the following code to select all records from each table. Note that I'm simply selecting * from both tables, no ordering, filtering, etc...
    The schema for the ApprovedScoutRule table is as follows:
    CREATE TABLE APPROVEDSCOUTRULE (
        "SID" NUMBER NOT NULL,
        "RULETEXT" VARCHAR2(2000 BYTE) NOT NULL,
        "DEFAULTPRIORITY" NUMBER NOT NULL,
        "ISSORULE" CHAR(1 BYTE) DEFAULT 0 NOT NULL,
        "ENABLED" CHAR(1 BYTE) DEFAULT 1 NOT NULL)
            public static int GetAllRecordsTest(string tableName) {
                var retVal = 0;
                var connString = DatabaseUtility.GetOracleConnectionString();
                using (var conn = new OracleConnection(connString)) {
                    conn.Open();
                    var cmd = new OracleCommand {Connection = conn, CommandType = CommandType.Text, CommandText = "SELECT * FROM " + tableName};
                    using (var reader = cmd.ExecuteReader()) {
                        // Count the rows as they stream back from the server.
                        while (reader.Read()) {
                            retVal++;
                        }
                    }
                }
                return retVal;
            }
    When I run the data fetch for table SIG_INFO, I get 20,744 records and it takes on average 31 seconds to return and iterate through the data reader.
    When pulling from the table ApprovedScoutRule I get 9,588 records (less than half) and yet the average time for that execution takes 43 seconds.
    I can run these same tests against a SQL Server database and it takes less than a second for each one of these calls. However, my SQL Server is local and my Oracle db is hosted, so there is the lag associated with the remote internet call.
    So, is this performance good? Should I expect 10k rows of data to take over 40 seconds to return?
    I'm using VS .NET 2010, .NET Framework 4.0, Oracle.DataAccess Runtime Version 4.0.30319 (version 4.112.1.1)

    Apparently this was simply a network latency problem. When I executed the performance test on a machine on the same local network where the Oracle database is hosted, times were under 1 second each. Marking this as resolved.
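    A follow-up note for anyone who cannot move the client closer to the database: ODP.NET fetches rows in batches sized by OracleDataReader.FetchSize, so enlarging it reduces the number of network round trips a large result set needs over a slow link. A minimal sketch adapted from the test method above (the 5,000-row multiplier is an arbitrary illustration, not a recommendation):
                    var cmd = new OracleCommand {Connection = conn, CommandType = CommandType.Text, CommandText = "SELECT * FROM " + tableName};
                    using (var reader = cmd.ExecuteReader()) {
                        // RowSize is ODP.NET's estimate of the size of one row for
                        // this command; buffer roughly 5,000 rows per round trip.
                        reader.FetchSize = cmd.RowSize * 5000;
                        while (reader.Read()) {
                            retVal++;
                        }
                    }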

  • Data Retrieval Speed in Oracle Spatial vs. ESRI ArcSDE

    I would appreciate any opinions regarding data retrieval
    performance between Oracle Spatial and ESRI ArcSDE. Would an end-
    user (using ESRI software) experience significant differences in
    data retrieval speed depending on how the data were stored in
    Oracle (MDSYS.SDO_GEOMETRY versus ESRI Binary/Blob formats).
    Knowing that the ESRI binary formats are tailored to their
    software front-end apps (ArcGIS, ArcMap, ArcCatalog, and
    ArcInfo), wouldn't this be a "non-issue" until the spatial
    dataset gets "large", and even then, wouldn't performance be
    (almost) equal if the spatial indexes were created properly?
    Thanks for your inputs,
    Bruce

    John,
    You can't do that type of query in sql from sql*plus using
    SDEBINARY. However, you can perform spatial queries in ArcMap
    if you are using SDEBINARY.
    You can use the query builder to perform point-in-polygon type
    queries.
    Hope that helps.
    For my two cents, I think SDO_GEOMETRY gives you a more robust
    database to work with, because you have the added power of
    Oracle Spatial functions. If you are using SDEBINARY you are
    limited to only what you can do thru ArcGIS.
    If you are concerned more about performance than accessibility,
    especially with a large number of users, then SDEBINARY might
    be the better choice.
    I love Oracle Spatial and am hoping that the performance issue
    will not be a serious one when we start putting ArcIMS developed
    apps into production.
    Dave

  • Data retrieval buffers - buffer size and sort buffer size

    Is there any difference in tuning data retrieval buffers between BSO and ASO?
    From the Oracle documentation, the buffer size setting is per database per Essbase user, i.e. more physical memory will be used if there is a lot of concurrent data access from users.
    However, even for 100 concurrent users, the default buffer size of 10 KB (BSO) or 20 KB (ASO) seems very small compared to other cache settings (total buffer cache is 100 * 20 KB = 2 MB). Should we increase the value to 1000 KB to improve data retrieval performance for users? Is the improvement the same for an online application (e.g. Hyperion Planning) and a reporting application (e.g. Financial Reporting)?
    Assume 3 Essbase plan types, each with 100 concurrent accesses:
    PLAN1 - 1000 KB * 100 = 100 MB (total retrieval buffer size); 1000 KB * 100 = 100 MB (total sort buffer size)
    PLAN2 - 1000 KB * 100 = 100 MB (total retrieval buffer size); 1000 KB * 100 = 100 MB (total sort buffer size)
    PLAN3 - 1000 KB * 100 = 100 MB (total retrieval buffer size); 1000 KB * 100 = 100 MB (total sort buffer size)
    Total physical memory required is 600MB.
    Thanks in advance!
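    For reference, both buffers are per-database settings that can be raised with MaxL. A minimal sketch, assuming the stock Sample.Basic demo database and that your release takes these sizes in KB (verify the exact grammar against the MaxL reference for your Essbase version):
        alter database Sample.Basic set retrieve_buffer_size 1000;
        alter database Sample.Basic set retrieve_sort_buffer_size 1000;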

    A 256-sample buffer size will always give you a noticeable amount of latency. If you use software monitoring you should try setting your buffer to 64 samples. With the recording delay slider in Preferences -> Audio you can compensate for the latency (of course not in real time) so that the audio will be placed exactly where it should have been recorded. In your case, set it to a negative value. A loopback test (check the link below) will clarify the exact amount of latency occurring on your system.
    http://discussions.apple.com/thread.jspa?threadID=1883662&tstart=0

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines (see the sketch after this list). Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
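    To make no. 11 concrete, here is a minimal ABAP sketch of the buffered pattern. The lookup table KNA1 is real, but the driving table lt_docs (assumed to carry kunnr and name1 fields) is purely illustrative:
      TYPES: BEGIN OF ty_cust,
               kunnr TYPE kunnr,
               name1 TYPE name1,
             END OF ty_cust.
      DATA: lt_cust TYPE SORTED TABLE OF ty_cust WITH UNIQUE KEY kunnr,
            ls_cust TYPE ty_cust.
      FIELD-SYMBOLS: <fs_doc> LIKE LINE OF lt_docs.
      IF lt_docs IS NOT INITIAL.
        " One array fetch replaces one SELECT SINGLE per loop pass.
        SELECT kunnr name1 FROM kna1
          INTO TABLE lt_cust
          FOR ALL ENTRIES IN lt_docs
          WHERE kunnr = lt_docs-kunnr.
      ENDIF.
      LOOP AT lt_docs ASSIGNING <fs_doc>.
        " A key read on a sorted table is an in-memory binary search.
        READ TABLE lt_cust INTO ls_cust
          WITH TABLE KEY kunnr = <fs_doc>-kunnr.
        IF sy-subrc = 0.
          <fs_doc>-name1 = ls_cust-name1.
        ENDIF.
      ENDLOOP.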
    Hope it Helps
    Chetan
    @CP..

  • WAD : Result set is too large; data retrieval restricted by configuration

    Hi All,
    When trying to execute the web template with fewer restrictions, we are getting the below error:
    Result set is too large; data retrieval restricted by configuration
    Result set too large (758992 cells); data retrieval restricted by configuration (maximum = 500000 cells)
    But when we try to increase the number of restrictions, it gives output. For example, if we give fiscal period, company code and brand we are able to get output. But if we give fiscal period alone it is throwing the above error.
    Note: We are in SP18.
    Do we need to change some setting in configuration? If yes, where do we need to change it, or what else do we need to do to remove this error?
    Regards
    Karthik

    Hi Karthik,
    the standard setting for web templates is to display a maximum amount of 50,000 cells. The less you restrict your query, the more data will be displayed in the report. If you want to display more than 50,000 cells the template will not be executed correctly.
    In general it is advisable to restrict the query as much as possible. The more data you display, the worse your performance will be. If you have to display more data and you execute the query from Query Designer, or if you use the standard template, you can individually set the maximum amount of cells. This is described in the thread "Bex Web 7.0 cells overflow".
    However, I do not know if (and how) you can set the maximum amount of cells differently as a default setting for your template. This should be possible somehow, I think; if you find a solution for this, please let us know.
    Brgds,
    Marcel

  • Report Developed in Webi Rich Client Consuming more time in Data Retrieval

    Dear All,
    I am a BO Consultant. Recently in my project I have developed one report in Webi Rich Client; at the time of development and on subsequent days the report was working fine (Data Retrieval time less than 1 minute), but after some days it started taking much more time (increasing day by day, and now it takes more than 11 minutes).
    Can anybody point out what could be the reason?
    We are using,
    1. SAP BI 7.0
    2. SAP BO XI 3.1 Edge
    3. Webi Rich Client Version 12.3.0, Build 601
    This report is made on a Multiprovider (Sales).
    What are the important points that should be considered so that we can improve the performance of Webi reports?
    Waiting for a suitable solution.
    Regards,
    Arun Krishnan.G
    SAP BO Consultant
    Edited by: ArunKG on Oct 11, 2011 3:50 PM

    Hi,
    Please come back here with a copy/paste of the 2 MDX statements from the MDA.log to compare the good/bad runtimes.
    and the 2 equivalent DPCOMMANDS clauses (good and bad) from the WebI trace logs.
    Can you explain what you really mean in the bold text above? Actually, I didn't get you.
    Pardon, I have only 3 months of experience in BO.
    Regards,
    Arun
    Edited by: ArunKG on Oct 11, 2011 4:28 PM

  • Retrieval performance becomes poor with dynamic calc members with formulas

    We are facing a retrieval performance issue on our partition cube.
    It was fine before applying the member formulas to 4 of the measures and making them dynamic calc.
    The retrieval time has increased from 1sec to 5 sec.
    Here is the main formula on a member; all these members are dynamic calc (having member formulas):
    IF (@ISCHILD ("YTD"))
    IF (@ISMBR("JAN_YTD") AND @ISMBR ("Normalised"))
    "Run Rate" =
    (@AVG(SKIPNONE, @LIST (@CURRMBR ("Year")->"JAN_MTD",
    @RANGE (@SHIFT(@CURRMBR ("Year"),-1, @LEVMBRS ("Year", 0)), @LIST("NOV_MTD","DEC_MTD")))) *
    @COUNT(SKIPNONE,@RSIBLINGS(@CURRMBR ("Period")))) + "04";
    ELSE
    IF (@ISMBR("FEB_YTD") AND @ISMBR ("Normalised"))
    "Run Rate" =
    (@AVG (SKIPNONE, @RANGE (@SHIFT(@CURRMBR ("Year"),-1, @LEVMBRS ("Year", 0)),"DEC_MTD"),
    @RANGE (@CURRMBR ("Year"), @LIST ("JAN_MTD", "FEB_MTD"))) *
    @COUNT(SKIPNONE,@RSIBLINGS(@CURRMBR ("Period")))) + "04";
    ELSE
    "Run Rate"
    =(@AVGRANGE(SKIPNONE,"Normalised Amount",@CURRMBRRANGE("Period",LEV,0,-14,-12))*
    @COUNT(SKIPNONE,@RSIBLINGS(@CURRMBR ("Period"))))
    + "Normalised"->"04";
    ENDIF;
    ENDIF;
    ELSE 0;
    ENDIF
    Period is dense
    Year is dense
    Measures (Normalised) is dense
    remaining all sparse
    block size 112 KB
    index cache 10 MB
    retrieval buffer 70 KB
    dynamic calculator cache max set to 200 MB
    Please note that this is a partition cube, retrieving data from 2 ASO and 1 BSO underlying cubes.

    I received the following from Hyperion. I had the customer add the following line to their essbase.cfg file and it increased their performance of Analyzer retrieval from 30 seconds to 0.4 seconds:
    CalcReuseDynCalcBlocks FALSE
    This is an undocumented setting (it will be documented in Essbase v6.2.3). Here is a brief explanation of this setting from development: it turns off a method of reusing dynamically calculated values during retrievals. The method is turned on by default and can speed up retrievals when a large number of dynamically calculated blocks are involved, each of which is required to compute several other blocks. This may happen when there is a big hierarchy of sparse dynamic calc members. However, a large dynamic calculator cache size or a large value of CALCLOCKBLOCK may adversely affect retrieval performance when this method is used. In such cases, the method should be turned off by setting CalcReuseDynCalcBlocks to FALSE in the essbase.cfg file. Only retrievals are affected by this setting.
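    For clarity, that is a one-line addition to the server's essbase.cfg file (the Essbase server must be restarted for configuration changes to take effect):
        CalcReuseDynCalcBlocks FALSE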

  • Improving retrieval performance of essbase server in unix environment

    Hi,
    Our production environment is a Unix system. Can anyone suggest settings which impact retrieval performance, and how to make these settings in a Unix environment?

    Naveen,
    1. For retrieval performance, increase the retrieval buffer size.
    The default is 10 KB for 32-bit platforms and 20 KB for 64-bit;
    make it 100 KB.
    2. If the data block size is large and you are retrieving cells across several blocks,
    set VLBREPORT TRUE in the essbase.cfg configuration file.
    NOTE: this will speed up the retrieval process, but it is applicable only to outlines which do not include dynamic calcs.
    3. If the format of your report is not of much importance, group dense dimensions in columns and group sparse dimensions in rows; this would be faster.
    4. An application/database does have a limit on its memory consumption,
    so RAM is the key for speed.
    The best part is that, as you have a Unix operating system, addressable memory in your case is 3.9 GB per application (which is very good), unlike 2 GB in the case of Windows.
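    For point 2, the setting is a single line in the server's essbase.cfg, sketched here as it would appear in the file (restart the Essbase server afterwards):
        VLBREPORT TRUE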
    Sandeep Reddy Enti
    HCC

  • Slow data retrieval on 8i

    I am using an Oracle 8i database on Windows 2000 Server on a Compaq
    ProLiant 350 server machine. The problem is that sometimes
    connecting from a client is very slow, and after connecting, data
    retrieval is also slow. I am using TCP/IP and Net8. I came to
    know from the internet that there is a patch which is actually a
    workaround to solve this problem. Can anybody help me to locate
    this patch?
    Thanks in advance.
    G. Rajan.

    You can probably prove that this is the issue by creating a retrieval using the Excel add-in or Smart View to replicate the form and seeing how long it takes to retrieve.
    You will also see in the Essbase application logs how long it is taking to perform the retrieval.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Run time error data retrieval

    hi all
    I am creating a normal ALV report using
    perform data_retrieval.
    perform build_fieldcatalog.
    perform build_layout.
    perform display_alv_report.
    but I am getting an error in the form data_retrieval. The form is as below:
    form data_retrieval.
      select FBUDA VBELN WERKS LGORT NETWR
        UP TO 10 ROWS
        from vbrp
        into table IT_VBRP.
    endform.                    " DATA_RETRIEVAL
    please help me out with this.
    with regards
    vijay

    Here the order of the fields you are selecting is different from the structure of the table...
    try this:
    form data_retrieval.
      select FBUDA VBELN WERKS LGORT NETWR
        UP TO 10 ROWS
        from vbrp
        into corresponding fields of table IT_VBRP.
    endform.                    " DATA_RETRIEVAL
    reward if it helps you.
    sai ramesh
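    An alternative sketch that avoids the CORRESPONDING FIELDS overhead: declare the internal table with its fields in exactly the order of the SELECT list, so a plain INTO TABLE works. The local type below is illustrative; the field names are the VBRP fields from the question:
      TYPES: BEGIN OF ty_vbrp,
               fbuda TYPE vbrp-fbuda,
               vbeln TYPE vbrp-vbeln,
               werks TYPE vbrp-werks,
               lgort TYPE vbrp-lgort,
               netwr TYPE vbrp-netwr,
             END OF ty_vbrp.
      DATA: it_vbrp TYPE STANDARD TABLE OF ty_vbrp.
      " Field order matches the target structure, so no mapping is needed.
      SELECT fbuda vbeln werks lgort netwr
        UP TO 10 ROWS
        FROM vbrp
        INTO TABLE it_vbrp.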

  • BPC 10 - EPM data retrieval very slow!

    Hi BPCers,
    We are using an Excel EPM Input Schedules as a Resource Management tool - using VBA to provide the functionality we need.
    Performance is generally good, but quickly deteriorates when handling larger data sets - even 500-600 rows of transactional data is enough to slow data retrieval from our BPC cube to EPM to an unusable speed. This is, in relative terms, pretty small, so there should be some option for optimisation.
    Does anybody have any experience with this? All suggestions welcome. We are operating on EPM Service Pack 7 Patch 1, but I'm not sure that EPM is necessarily the problem here.
    Thanks,
    Tom

    Thanks Gersh,
    Had a look through Fiddler and have identified the job that is causing the delay - some rooting around in the ABAP debugger produced the answer as to why adding more data slows processing speed so dramatically.
    When we take data from the back end, we select a couple of parameters which limit the range of data that we are pulling through - a certain set of people, and a certain range of days. Once this is pulled through, allocations are made to any combination of person and day within this range, which generates an extra two properties - a project ID and a work status.
    This makes 4 properties, and when BPC pulls data it attempts to find every combination of every one of the properties that exists within this range - so the more allocations are made, the more this slows down, as it dramatically increases the number of combinations.
    The result is that BPC runs through a couple of hundred thousand generated tables, most of which are nonsense.
    Not sure what to do from here. This is how BPC reads data so approaching a fix could be difficult.
    Tom

  • Improve data load performance using ABAP code

    Hi all,
             I want to improve my load performance using ABAP code; how do I do this? If I write ABAP code in SE38, how can I call it
    on the BW side? If you could give sample code to improve load performance, it would be useful. Please guide me.

    There are several points that can improve performance of your ABAP code:
    1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
    2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
    3. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
    4. Avoid using nested SELECTs and SELECT statements within LOOPs.
    5. Avoid using INTO CORRESPONDING FIELDS OF. Instead use INTO TABLE.
    6. Avoid using SELECT * and select only the required fields from the table.
    7. Avoid Executing a SELECT multiple times in the program.
    8. Avoid nested loops when working with large internal tables.
    9. Whenever using READ TABLE, use the BINARY SEARCH addition to speed up the search (see the sketch after this list).
    10. Use FIELD-SYMBOLS instead of a work area when there are more than 200 entries in an internal table where some fields are being manipulated.
    11. Use MOVE with individual variable/field moves instead of MOVE-CORRESPONDING.
    12. Use CASE instead of IF/ENDIF whenever possible.
    13. The runtime analysis transaction code SE30 can be used to measure the application performance.
    14. Transaction code ST05 can be used to analyse the SQL trace and measure the performance of the select statements of the program.
    15. Start routines can be used when transformation is needed in the data package level. Field/individual routines can be used for a simple formula or calculation. End routines are used when you wish to populate data not present in the source but present in the target.
    16. Always use a WHERE clause for DELETE statement. To delete records for multiple values, use SELECT-OPTIONS.
    17. Always use IS INITIAL instead of comparing to '', because the initial value is '' for a character field but 0 for an integer.
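    A small ABAP sketch pulling tips 1, 6 and 9 together (the table MARA is real; the literal material number and material type are just for illustration):
      TYPES: BEGIN OF ty_mat,
               matnr TYPE mara-matnr,
               mtart TYPE mara-mtart,
             END OF ty_mat.
      DATA: lt_mat TYPE STANDARD TABLE OF ty_mat,
            ls_mat TYPE ty_mat.
      " Tips 1 and 6: one array fetch of only the required fields,
      " instead of SELECT * inside SELECT ... ENDSELECT.
      SELECT matnr mtart FROM mara
        INTO TABLE lt_mat
        WHERE mtart = 'FERT'.
      " Tip 9: sort once, then binary-search reads.
      SORT lt_mat BY matnr.
      READ TABLE lt_mat INTO ls_mat
        WITH KEY matnr = 'MATERIAL-0001' BINARY SEARCH.
      IF sy-subrc = 0.
        " Row found; ls_mat-mtart holds the material type.
      ENDIF.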
    Hope it helps.

  • Query Error Information: Result set is too large; data retrieval ......

    Hi Experts,
    I got one problem with my query information. When I'm executing my report and drilling in my navigation panel, instead of a table with values the message "Result set is too large; data retrieval restricted by configuration" appears. I already applied Note 1127156 - "Safety belt: Result set is too large". I imported Support Package 13 for SAP NetWeaver 7.0 BI Java (BIIBC13_0.SCA / BIBASES13_0.SCA / BIWEBAPP13_0.SCA) and executed the program SAP_RSADMIN_MAINTAIN (in transaction SE38), with the object and the value as Note 1127156 says... but the problem still appears.
    What could I be missing? How can I fix this issue?
    Thank you very much for helping me out. (Any help would be rewarded.)
    David Corté

    You may ask your Basis guy to increase the ESM buffer (rsdb/esm/buffersize_kb). Did you check the system's memory?
    Did you try to check the error dump using ST22 - runtime error analysis?
    Edited by: ashok saha on Feb 27, 2008 10:27 PM
