Performance Issue when setting connection information

I am writing a WinForms application in VB.NET.
I have developed a method that sets the Crystal Reports connection.
This method first reads the connection string from the config file and creates a Crystal Reports ConnectionInfo object.
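(For illustration, a minimal sketch of such a method, assuming a standard SQL Server connection string in app.config; the BuildConnectionInfo name and the "ReportsDb" key are hypothetical, not from the original post:)

    ' Sketch only: read a connection string from app.config and copy its parts
    ' into a Crystal Reports ConnectionInfo ("ReportsDb" is a hypothetical key).
    Private Function BuildConnectionInfo() As CrystalDecisions.Shared.ConnectionInfo
        Dim connStr As String = System.Configuration.ConfigurationManager.ConnectionStrings("ReportsDb").ConnectionString
        Dim builder As New System.Data.SqlClient.SqlConnectionStringBuilder(connStr)
        Dim info As New CrystalDecisions.Shared.ConnectionInfo()
        info.ServerName = builder.DataSource
        info.DatabaseName = builder.InitialCatalog
        info.UserID = builder.UserID
        info.Password = builder.Password
        Return info
    End Function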
The following code then takes over 5 seconds to run:
Dim myTables As Tables = report.Database.Tables
' One TableLogOnInfo carries the connection details for every table
Dim myTableLogonInfo As New TableLogOnInfo()
myTableLogonInfo.ConnectionInfo = myConnectionInfo
Then this code takes over 6 seconds to run:
For Each myTable As CrystalDecisions.CrystalReports.Engine.Table In myTables
    myTable.ApplyLogOnInfo(myTableLogonInfo)
    ' Copy each property explicitly as well, in case ApplyLogOnInfo alone
    ' does not propagate them to the table
    myTable.LogOnInfo.ConnectionInfo.DatabaseName = myTableLogonInfo.ConnectionInfo.DatabaseName
    myTable.LogOnInfo.ConnectionInfo.ServerName = myTableLogonInfo.ConnectionInfo.ServerName
    myTable.LogOnInfo.ConnectionInfo.UserID = myTableLogonInfo.ConnectionInfo.UserID
    myTable.LogOnInfo.ConnectionInfo.Password = myTableLogonInfo.ConnectionInfo.Password
Next
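(For comparison, ReportDocument also offers a one-call way to set the same logon details for every table; a minimal sketch, assuming all tables share the same credentials. It does not avoid the first-load delay either:)

    ' Sketch: apply one set of credentials to all tables in the report.
    report.SetDatabaseLogon(myConnectionInfo.UserID, myConnectionInfo.Password, _
                            myConnectionInfo.ServerName, myConnectionInfo.DatabaseName)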
This only occurs the first time the form is loaded; on subsequent loads the times drop to
335ms (compared to 5349ms) and 52ms (compared to 6228ms).
However, when the application is restarted, the same slow times recur.
My reports generally contain few tables, three or fewer; this one has only one.
This is currently in test, with VS2008 and SQL Server 2005 both running locally. The same issue occurs in the QA environment, where the application runs on the client and the database sits on a server on the same LAN.
So my questions are: can I improve the speed of this portion of code? Why does it take so long to set the report connection information? Am I setting the report connections incorrectly?
Any ideas?
Thanks,

This is expected behavior. The first time the code runs, the application loads the entire Crystal Reports runtime: assemblies, DLLs, COM DLLs, and so on. Once these are in memory, performance improves significantly. For more details and possible workarounds, see [this|https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/8029cc96-6ff3-2b10-47a2-b30ea790ea5b] article.
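A sketch of one commonly suggested workaround (illustrative names, not taken from the linked article) is to warm up the Crystal runtime on a background thread at application startup, so the one-time loading cost is paid before the user opens the first report:

    ' Sketch: pre-load the Crystal Reports runtime at startup so the one-time
    ' assembly/COM loading happens before the first real report is opened.
    ' "Warmup.rpt" is a hypothetical small report shipped with the application.
    Private Sub WarmUpCrystalRuntime()
        Dim worker As New System.Threading.Thread(AddressOf LoadDummyReport)
        worker.IsBackground = True
        worker.Start()
    End Sub

    Private Sub LoadDummyReport()
        Try
            Dim dummy As New CrystalDecisions.CrystalReports.Engine.ReportDocument()
            dummy.Load("Warmup.rpt") ' loading any report forces the runtime to load
            dummy.Close()
            dummy.Dispose()
        Catch ex As Exception
            ' Warm-up is best effort; ignore failures here.
        End Try
    End Sub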
Ludek

Similar Messages

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    I hope you can help; this involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition restricting Batch Status to ‘Posted’, we cancelled the query after it reached 20 minutes, which is far too long. If I remove the condition, the query time goes back to less than 5 seconds.
    Please see the SQL Inspector plans below:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administrator to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I’ve been doing some more digging and have found the database view that is linked to the Journal Batches folder; see below.
    I think the problem is the column that uses DECODE. When I query the underlying column in TOAD, the raw value ‘P’ is returned, but in Discoverer the condition is applied to the decoded value ‘Posted’. I’m not too sure how DECODE works, but because the condition sits on the DECODEd expression rather than the raw column, I suspect it prevents index use and forces the full table scan of GL_JE_BATCHES that appears in the second plan. How do we get around this?
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),

  • Performance issue when a Direct I/O option is selected

    Hello Experts,
    One of my customers has a performance issue when the Direct I/O option is selected, and reports an increase in memory usage with the Direct I/O storage option compared to Buffered I/O.
    There are two BSO applications on the server. With Buffered I/O they experienced a high level of read and write I/Os; Direct I/O reduces the reads and writes but dramatically increases memory usage.
    Other Information -
    a) Environment Details
    HSS - 9.3.1.0.45, AAS - 9.3.1.0.0.135, Essbase - 9.3.1.2.00 (64-bit)
    OS: Microsoft Windows x64 (64-bit) 2003 R2
    b) What is the memory usage when Buffered I/O and Direct I/O is used? How about running calculations, database restructures, and database queries? Do these processes take much time for execution?
    Application 1: Buffered 700MB, Direct 5GB
    Application 2: Buffered 600MB to 1.5GB, Direct 2GB
    Calculation times may increase from 15 minutes to 4 hours. Same with restructure.
    c) What are the current database data cache, data file cache, and index cache values?
    Application 1: Buffered (Index 80MB, Data 400MB), Direct (Index 120MB, Data File 4GB, Data 480MB).
    Application 2: Buffered (Index 100MB, Data 300MB), Direct (Index 700MB, Data File 1.5GB, Data 300MB).
    d) What is the total size of the ess0000x.pag files and ess0000x.ind files?
    Application 1: Page File 20GB, Index 1.7GB.
    Application 2: Page 3GB, index 700MB.
    Any suggestions on how to improve performance when Direct I/O is selected? Any performance documents relating to the above scenario would be of great help.
    Thanks in advance.
    Regards,
    Sudhir

    Sudhir,
    Do you work at a help desk, or are you a consultant? You ask such a varied range of questions that I suspect the former. If you work at a help desk, don't you have next-level support that could help you? If you are a consultant, I suggest getting together with another consultant who knows more. You might also want to close some of your questions; you have 24 open, and perhaps give points to those who helped you.

  • Performance Issues when editing large PDFs

    We are using Acrobat 9 and X Professional and are experiencing performance issues when attempting to edit large PDF files (Windows 7 OS). When editing PDFs of 200+ pages, we see pregnant pauses (that feel like lockups), slow open times, and slow printing.
    Are there any tips or tricks with regard to working with these large documents that would improve performance?

    You said "edit." If you are talking about actual editing, that should be done in the original and a new PDF created. Acrobat is not a very good editing tool and should only be used for minor, critical edits.
    If you are talking about simply using the PDF, a lot depends on the structure of the PDF. If it is full of graphics, it will be slow. You can improve this performance by using the PDF Optimize to reduce graphic resolution and such. You may very likely have a bloated PDF that is causing the problem and optimizing the structure should help.
    Be sure to work on a copy.

  • Oracle Retail 13 - Performance issues when open, save, approving worksheets

    Hi Guys,
    Recently we started facing performance issues when working with Oracle Retail 13 worksheets from within the Java GUI on client desktops.
    We run Oracle Retail 13.1 on Oracle Database 11g R1 and AS 10g, both at the latest release.
    Issues:
    - Opening, saving, or approving a worksheet with approximately 9,000 items takes up to 15 minutes.
    - Smaller worksheets still take around 10 minutes just to open.
    - Opening multiple worksheets likewise takes "ages", up to 10-15 minutes.
    Questions:
    - Is this the expected performance for such worksheets?
    - What is your experience with Oracle Retail 13 worksheet performance; how much time does it normally take to open, edit, and save a worksheet?
    - What are the average expected times for such operations?
    Any feedback and hints would be much appreciated.
    Cheers!!

    Hi,
    I guess you mean Order/Buyer worksheets?
    This is not normal; it should be quicker, a matter of seconds to at most a minute.
    Database-side tuning is where I would look for clues.
    And the obvious question: do you remember any change to anything that may have caused the issue? Are the table and index statistics freshly gathered?
    Best regards, Erik Ykema

  • iOS - Performance issues when touching the screen

    Hello,
    I am having performance issues when the user keeps moving one finger on the screen.
    I am testing with:
    - Flash Builder 4.7 beta 2
    - AIR 3.5 (a beta too, I guess)
    - an iPhone 3GS running iOS 6.0.1 (also tested on an iPad 3; even though it's less horrible there, the impact is still huge compared to what you would expect)
    - GPU render mode
    - always published as an ad hoc release build (the best case)
    Test 1-A):
    - Start from a completely empty project and add an FPS counter (with mouseEnabled and mouseChildren false).
    - Busy-wait each frame so you force the FPS under 60 (otherwise you won't see it drop from, say, a theoretical 100 to a theoretical 70, since the displayed maximum is 60),
    like this, for example:
    protected function framingMouseTest(e : Event) : void {
        var t : int = getTimer(); // requires import flash.utils.getTimer
        while (getTimer() - t < 30) {} // busy-wait ~30ms to keep Flash busy each frame
    }
    Notice that when you keep your finger on the screen and move it, you lose approximately 7ms per frame (for example, from 26 to 22 fps, or from 48 to 36 fps, depending on the time spent in your while loop).
    Test 1-B) Do the same with stage mouseEnabled false as well (you could even set it, plus mouseChildren, off recursively from the stage): same result (-7ms). Why? Since everything is off, it shouldn't take more than 1ms, no?
    Test 1-C) Add Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT; same result (I found some threads saying it would help, but not that much...).
    Test 1-D) Multitouch.mapTouchToMouse = false; a bit better, but still around 5ms lost (didn't write this one down).
    Test 1-E) Multitouch.mapTouchToMouse = false; Multitouch.inputMode = MultitouchInputMode.NONE; a bit better still, around 4ms lost (didn't write this one down either), and now there is nothing left to disable, I think.
    Now the horrible part:
    Test 2-A)
    Add some clips and subclips containing bitmaps to your scene. Results may vary depending on how many (let's say 30; I don't have the exact count of subcontainers), but what matters is the difference between no finger and one finger moving on the screen.
    Under normal conditions (keep mouseEnabled for the clips that need interaction and give those a listener, say 10 total; default values for the rest) you can lose about 40ms per frame (for example, from 36 to 14 fps) just by touching the screen.
    Test 2-B) Recursively set mouseEnabled and mouseChildren false on everything after it is added: you still lose about 30ms to the finger, even though nothing below the main container (which already has mouseChildren false) should be hit-tested.
    Test 2-C) Multitouch.mapTouchToMouse = false; Multitouch.inputMode = MultitouchInputMode.NONE; you still lose about 20ms, even though you are in a mode where touch and mouse events shouldn't even exist!
    I am really confused. My conclusions are:
    Performance is heavily impacted whenever a user touches the screen, whatever you do, and it gets worse the more clips/depth you have, even when everything is set up to disable touch handling.
    I think anyone could test this with pretty much any AIR app on iOS: as long as the framerate is already below the maximum (below 60, for example), touch and move your finger and tell me what happens to the framerate.
    What I would love:
    1) Tell me what I am missing if you think this is due to some mistake on my part; if not:
    2) Better performance when containers have mouseChildren false. It seems obvious it still takes longer to process.
    3) A mode where touch/mouse is REALLY disabled, including the stage, etc. It is quite obvious AIR takes longer to handle touch as the scene gets more complex, even when you disable everything (disabling only seems to recover about 20% of the cost...).
    4) A mode where touch/mouse is REALLY disabled on everything but the stage, because you might want to handle everything from stage coordinates alone, if your project allows it, like mine. Unfortunately, right now that wouldn't give you enough of a performance boost (well, a bit, but still maybe 20ms lost).
    PS: you don't even need to add mouse listeners or touch listeners (I tested both), but if you do, it is worse.
    Compared to this, I find rendering performance pretty decent, even though it's tough for a game on a 3GS, as long as you do it properly.

    Could you open a bug report on this at bugbase.adobe.com?  Please include any sample code or applications so we can quickly reproduce the problem.  If you'd like to keep this private, feel free to send them to me directly ([email protected]).  Also, please note in the bug if this is new behavior with the latest AIR sdk or if you've seen this in past versions too.  Once added, let me know the bug number and I'll follow up internally.  I've alerted the iOS team to expect this bug.
    Thanks,
    Chris

  • How to improve a performance issue when using the BRM LDB

    Hi All,
    I am facing a performance issue when retrieving data from BKPF and the corresponding BSEG table. For the fiscal period there are around 60 lakh (6 million) records, and populating the final internal table with values from these tables takes a very long time.
    When I tried to use the BRM LDB with SAP Query/QuickViewer, I hit the same issue.
    Please suggest how I can improve the performance.
    Thanks in advance
    Chakradhar

    Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting - post locked
    Rob

  • Performance issue when using select count on large tables

    Hello Experts,
    I have a requirement to get a count of records from a database table and later display the count in ALV format.
    As per my requirement, I have to run this SELECT COUNT inside nested loops.
    Below is the snippet:
    LOOP AT systems ASSIGNING <fs_sc_systems>.
      LOOP AT date ASSIGNING <fs_sc_date>.
        SELECT COUNT( DISTINCT crmd_orderadm_i~header )
          FROM crmd_orderadm_i
          INNER JOIN bbp_pdigp
            ON crmd_orderadm_i~client EQ bbp_pdigp~client   " MANDT is named CLIENT in these tables
           AND crmd_orderadm_i~guid   EQ bbp_pdigp~guid
          INTO w_sc_count
          WHERE crmd_orderadm_i~created_at BETWEEN <fs_sc_date>-start_timestamp
                                               AND <fs_sc_date>-end_timestamp
            AND bbp_pdigp~zz_scsys EQ <fs_sc_systems>-sys_name.
      ENDLOOP.
    ENDLOOP.
    In the above snippet, <fs_sc_systems>-sys_name holds the system name, <fs_sc_date>-start_timestamp the start of the month, and <fs_sc_date>-end_timestamp the end of the month.
    The data volumes in crmd_orderadm_i and bbp_pdigp are very large and grow every day.
    The SELECT above therefore takes a long time to return the count, which is causing the performance problem.
    Can anyone please help me optimize this code?
    Thanks,
    Suman

    Hi Choudhary Suman,
    Try this:
    SELECT crmd_orderadm_i~header
      INTO TABLE it_header                    " internal table
      FROM crmd_orderadm_i
      INNER JOIN bbp_pdigp
        ON crmd_orderadm_i~client EQ bbp_pdigp~client
       AND crmd_orderadm_i~guid   EQ bbp_pdigp~guid
      FOR ALL ENTRIES IN date
      WHERE crmd_orderadm_i~created_at GE date-start_timestamp
        AND crmd_orderadm_i~created_at LE date-end_timestamp
        AND bbp_pdigp~zz_scsys EQ date-sys_name.
    " BETWEEN is not allowed on FOR ALL ENTRIES fields, hence GE/LE above.
    SORT it_header BY header.
    DELETE ADJACENT DUPLICATES FROM it_header COMPARING header.
    DESCRIBE TABLE it_header LINES v_lines.
    Hope this information helps you.
    Regards,
    José

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really; I'm not sure what I'm going to do about my issue, but I have some options, and I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report: the users want to see all agreements and all conditions related to the updating of rebates, plus the affected invoices. From a technical perspective, ENT6038-KONV-KONP-KONA-KNA1 are the tables I have to hit. The problem is that when they retroactively update rebate conditions they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
    I've tried everything around the code. If you have a better way to get price conditions and agreement numbers off thousands of invoices, please let me know what it is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report. This would work, but I won't know what data needs loading until report run time; they put in a date, and I simply can't preload everything. I don't like this option much.
    2) Write a function module to do the work. When the user clicks the button for this particular data, the FM launches in the background and e-mails them the results. As you know, a background job won't time out. So far this is my favored option.
    Any other ideas?
    Oh... nope, BI is not an option; we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents: firstly, I totally agree with Derick that it's probably a good idea to go back to the business and challenge the requirement, asking "whether any user can meaningfully process all those results in aggregate". But having dealt with customers across industries over a long period, it would probably be a bit fanciful to expect them to change their requirements much; in my experience they neither understand (much) technology nor want to hear about a system's technical limitations. They want what they want, if possible yesterday!
    As for dealing with performance issues within ABAP, I'm sure you are already using efficient programming techniques such as hashed internal tables with unique keys and accessing rows via field symbols, but what I was going to suggest is to look at [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to deal with this a couple of times in the past when handling massive amounts of data, and I found extracts very efficient performance-wise. A good point to remember when using extracts, quoting SAP Help: "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.

  • Having issues when reconnecting to a session.

    When a user is logged on to MPS 2012 through a thin client and needs to reconnect (say the power failed or the network connection for that particular client dropped), the attempt to reconnect bounces back and the re-login fails; logging in again still bounces back, while other clients connected to the same VM stay connected and running.
    Also, in the morning my own connection sometimes fails; when I ping that VM it times out ("Request timed out") until I restart it, after which it replies and the client systems can connect again.
    This concerns me because users are complaining, and this is a school environment. I need to resolve this so that, even when I am not there, users can connect easily and reconnect after any disconnection from a thin client.
    I would really appreciate it if anyone can help me resolve this issue or identify its cause.
    Thanks


  • 11.2.0.0 upgrade - performance issue with commits

    We have upgraded our data warehouse system from 10.2.0.4 to 11.2.0.2. Since the upgrade, the ETL system has had a performance issue. Extraction is fine, because we compared the explain plans before and after the upgrade; however, loading is taking very long. The load commits after every 1,000 records, which is why it is slow. Even when we import data with commit=y it is very slow. Is there any known issue with this?
    Any help, please?
    Thanks

    We are on Solaris 10. The ETL job on 10.2.0.3 took 2 hours and now takes 11 hours.
    I believe the commit frequency is the issue because:
    1. The source (extraction) query has the same explain plan.
    2. The source database is still on 10.2.0.3.
    3. The servers and network are the same for 10g and 11g.
    4. I have already experienced the COMMIT=y performance issue during import.
    Thanks

  • Performance issue when moving from 4.6c to 6.0

    Looks like we took a performance hit when moving from 4.6C to 6.0 on the same hardware.
    For example, PGI on our outbound deliveries now takes several seconds longer in 6.0 than it did in 4.6C, and we've noticed other SAP areas taking longer as well. With an ST05 SQL trace on the PGI in 6.0, we see more tables being accessed, and the same tables being accessed differently and taking longer, all adding up to more time per PGI in 6.0.
    Has anyone else noticed this increase in response time when upgrading to 6.0?
    Thanks,
    David

    Hello.
    Yes, we noticed the same thing in our system at first. General performance seemed to get worse after upgrading; the database became bigger and more memory was consumed. However, our system administrator tuned it and it now seems OK.
    Update the optimizer statistics and see if it gets better.
    Regards.
    Valter Oliveira.

  • Lumia 800, performance issue when listening to mus...

    I am using Spotify on a Lumia 800. If I use any network-using app (IE, FB, Twitter), the music pauses (actually, playback stutters) while the app is fetching data from the internet. It happens even when I am on WLAN and listening to an offline playlist, so it is definitely a phone performance issue rather than a bandwidth issue. Are there any steps I could try to work around this?

    It's an issue with the Spotify app; it needs a major update to correct issues like this. It behaves this way on non-Nokia Windows Phones too.
    The workaround I use is to make tracks available for offline listening.

  • Performance Issue when turning on Retention Policy

    Hello,
    I am currently in the process of enabling retention policies for my organization. We have 18,000+ mailboxes. How much of a performance impact (if any) will end users see if we enable retention policies for that many mailboxes at once?
    We have not had retention enabled for about 3 years, and the goal is basically to delete/purge anything older than 15 months. I tried doing some research but can't find any topics pertaining to this particular question.
    Thanks,
    Emmanuel
    Emmanuel Fumero Exchange Administrator

    The user impact of enabling retention on this many mailboxes at once is not your primary issue; your primary issue will be how long it takes the process to complete on all of your mailboxes so that it can "just keep up" from that point onward.
    I'll give you some information about our own results and let you decide how you wish to implement retention for your users.
    We have 14,000+ mailboxes running on Exchange 2010, fully updated. We migrated three years ago from "another messaging platform" well known for allowing users to have huge mailboxes with little or no capability to restrict how large the mailboxes grew - and they grew. When I started here, our average mailbox size had just popped over 2GB, and our corporate target mailbox size limit was 2GB. We had mailboxes over 30GB, and messages older than 10 years (our corporate retention policy states no emails older than 7 years are allowed). We were tasked with implementing retention across the board, with the following settings:
    Daily emails are kept for only 60 days, and can be recovered (if necessary) for another 60 days
    Working emails (things needed for projects, etc.) can be kept for 2 years, and can be recovered (if necessary) for another 60 days
    Messages required for legal reasons can be kept for up to 7 years, and can't be recovered if they are deleted
    Messages saved for legal reasons do not count against the 2GB mailbox size limit
    From my previous experience implementing retention (when I was a consultant focused on Microsoft Exchange), I knew that deploying to all users at once would cause a hit on our servers (your current fear, and well founded), but I had only deployed to smaller organizations, and never in so radical a manner - previously, size limits had been in effect, so retention merely "maintained the status quo" rather than hacking huge amounts of stale data out of the system. However, we did have huge servers (72GB RAM and 24 CPUs, but multirole) that, early on, were not highly taxed (under 15% CPU average at peak, and we are in only two time zones, so all 13,000 users connect during the 7:00-10:30 Eastern timeframe). So we decided to ramp retention across the organization in the same manner in which we migrated: 1,000 users had retention enabled each week, in a phased manner.
    When we started with our first group, the IT group (naturally), most users kept their mailboxes relatively small - only 1.8GB on average - so retention ran from Thursday (the day we kicked off the implementation) through the weekend. However, as each subsequent group was placed into retention, we found the retention task taking longer and longer. Toward the end, we had to slip some retention groups by a day or so because retention for the previous mailboxes hadn't completed. We attribute this to two factors: 1) the mailboxes were larger on average, and 2) more and more mailboxes were already in retention and also had to be handled.
    In the end, it took 22 weeks to fully implement retention across the organization, and another two weeks for the process to reach the point of "running the status quo" rather than yanking huge volumes of stale email out of mailboxes. During implementation, our servers' performance numbers were good - still under 30% CPU - and users weren't complaining about poor performance (cached mode is a godsend), but getting the process to complete was our sticking point. If you turn on retention for all 18,000 of your mailboxes at once, you will see some server performance degradation (especially if your servers aren't as beefy as ours), but the process may never get the chance to complete across the board. Our 22 weeks may well take over a year for your mailboxes.
    HTH - you are going to need all the help you can get. (-;

  • Weird Issues when Internet Connection not working

    I hope I'm posting this in the correct forum.
    This morning, my internet connection was not working (a common issue, whatever it is that requires me to restart my router.  Tubes clogged, etc etc.  I could ping it, but not get out).  Anyways, before I restarted the router and restored internet connectivity, my machine took forever to boot and start X.  During boot, it would hang at starting the CUPS daemon for a good 20-30 seconds.  Then, it would eventually start and the machine would boot to agetty.  I could log in fine, but when I typed 'startx' the machine would do nothing for about 5 seconds, and then X would start as per usual.  Note that after issuing the startx command I got no output at all to stdout until after the delay.
    Question:  Why did this happen?  My best guess is that on these occasions the machine was trying to connect to the outside world, and it was only after a network timeout that things went ahead as usual.  But, that hardly makes any sense to me, for a couple of reasons.  Why would CUPS and startx try to connect to the outside? The lag in starting CUPS was considerably longer than when issuing startx.  The lag for startx did not seem long enough for a timeout.  Weird.  Any clarification would be helpful.
    Last edited by hbarnwheeler (2009-04-13 14:26:25)

    There might be something wrong with the IP address you used when you set the configuration statically, or traffic from that IP address is being denied.
    As you already have an IP configuration that works (run ipconfig /all to see it), you can simply proceed in one of these ways:
    On your DHCP server, create a reservation for the IP address so that it is always allocated to your server,
    or exclude the IP address at the DHCP level and set it manually on the server.
    For more information, contact your network engineer for assistance.
    Ahmed MALEK
