MatchCode for 100.000 rows = dump

Hi!
I use the function module F4IF_INT_TABLE_VALUE_REQUEST to build a custom matchcode for a dynpro field.
But when the internal table has 100,000 rows, the system dumps.
How can I display the matchcode without the dump?
Thanks very much!

A matchcode where you have more than 100,000 rows is not a good matchcode!
You should provide at least one criterion to restrict the list. The maximum number of hits is only a 4-digit field in SAP, and you should always restrict your selection to this maximum.
You do this by adding the following to your SELECT statement:
UP TO callcontrol-maxrecords ROWS
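A minimal sketch of the idea (the table ZFLIGHTS and its fields, the select-option S_CARR and the screen field P_CARR are invented names; the point is to cap the SELECT before passing the table to the function module, and inside a search-help exit you would use callcontrol-maxrecords instead of the literal):

DATA: lt_values TYPE TABLE OF zflights,
      lt_return TYPE TABLE OF ddshretval.

* Read only as many rows as the hit list should show, and let user criteria narrow it further
SELECT carrid connid fldate
  FROM zflights
  INTO CORRESPONDING FIELDS OF TABLE lt_values
  UP TO 500 ROWS
  WHERE carrid IN s_carr.

CALL FUNCTION 'F4IF_INT_TABLE_VALUE_REQUEST'
  EXPORTING
    retfield    = 'CARRID'
    dynpprog    = sy-repid
    dynpnr      = sy-dynnr
    dynprofield = 'P_CARR'
    value_org   = 'S'
  TABLES
    value_tab   = lt_values
    return_tab  = lt_return
  EXCEPTIONS
    OTHERS      = 1.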

Similar Messages

  • Display 100,000 rows in Table View

    Hi,
    I am in receipt of a strange requirement from a customer, who wants a report which returns about 100,000 rows which is based on a Direct Database Request.
    I understand that OBIEE is not an extraction tool, and that any report which has more than 100-200 rows is not very useful. However, the customer is insistent that such a report be generated.
    The report returns about 97,000 rows and has about 12 columns and is displayed as a Table View.
To try to generate the report, I have set the ResultRowLimit in the instanceconfig.xml file to 150,000 and restarted the services. I have also set the query limits in the RPD to 150,000, so this should not be the issue either.
    When running the report, the session log shows the record count as 97,452 showing that all the records are available in the BI Server.
However, when I click on the "display all rows" button at the end of the report, the browser hangs after about 10 minutes with nothing being displayed.
    I have gone through similar posts, but there was nothing conclusive mentioned in them. Any input to fix the above issue will be highly appreciated.
    Thanks,
    Ab

    Hi Saichand,
The client wants the data to be downloaded as CSV, so the row limit of the Excel template that OBIEE uses is not an issue.
The 100,000 rows are retrieved after using a dashboard prompt with 3 parameters.
The large number of rows is because these are month-end reports, which are more like extractions.
The customer wants to implement this even though OBIEE does not work well with large numbers of rows, as there are only a couple of reports like this and it would be an expensive proposition to use a different reporting system for only 3-4 reports.
Hence, I am on the lookout for a way to implement this in OBIEE.
The other option is to download the report directly to CSV, without having to load all the records into the browser first. I read a couple of blog entries on this, but the steps mentioned were not clear, so any help on that front would also be great.
    Thanks,
    Ab

  • Pk index design for 500,000 row table

I have 3 tables with the following relations:
    http://www.visionfly.com/images/dgm.png
Itinerary - (1:N) FlightItem - (1:N) CabinPrice
DepCity, ArrCity, and DepDate together identify one row in Itinerary. To reduce space in FlightItem and CabinPrice, I add a field FlightId (auto-increment) as the PK of Itinerary, and I also add an index on (DepCity, ArrCity, DepDate) in Itinerary.
FlightId and FlightNo form the PK of FlightItem, and FlightId is its FK. FlightId, FlightNo, Cabin, and PriceType form the PK of CabinPrice, and (FlightId, FlightNo) is its FK. (A DDL sketch of these keys follows this thread.)
Itinerary will hold about 10,000 rows.
FlightItem will hold about 50,000 rows.
CabinPrice will hold about 500,000 rows.
These 3 tables can be regarded as a whole. There are 2 kinds of operations on them. One is:
    select * from itinerary a, flightitem f, cabinprice c where a.flightId=f.flightId and f.flightId=c.flightId and f.flightNo=c.flightNo
    and a.depcity='zuh' and a.arrcity='sha' and a.depdate='2004-7-1'.
It uses the index on Itinerary.
At peak there are about 100 such selects per second.
The other operation is to delete rows and add new ones. I use cascading deletes, and the DELETE's WHERE clause is the same as the SELECT's. The peak is about 50 such operations per minute.
I intend to use EJB CMP to manage them. Is this a good design for the above performance requirements? Any suggestion will be appreciated.
    Stephen

This is the current design, based on MS SQL Server. We are planning to move to Oracle, so ignore the data type details.
    Stephen
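For reference, a hedged DDL sketch of the keys described above; the data types are placeholders (the poster says to ignore them), and only the key structure follows the post:

-- Sketch only: types are guesses, the keys follow the description in the thread
CREATE TABLE Itinerary (
    FlightId INT IDENTITY(1,1) PRIMARY KEY,      -- surrogate key (auto-increment)
    DepCity  CHAR(3)  NOT NULL,
    ArrCity  CHAR(3)  NOT NULL,
    DepDate  DATETIME NOT NULL
);
-- Secondary index serving the SELECT on (DepCity, ArrCity, DepDate)
CREATE INDEX IX_Itinerary_Route ON Itinerary (DepCity, ArrCity, DepDate);

CREATE TABLE FlightItem (
    FlightId INT         NOT NULL REFERENCES Itinerary (FlightId),
    FlightNo VARCHAR(10) NOT NULL,
    PRIMARY KEY (FlightId, FlightNo)
);

CREATE TABLE CabinPrice (
    FlightId  INT         NOT NULL,
    FlightNo  VARCHAR(10) NOT NULL,
    Cabin     VARCHAR(2)  NOT NULL,
    PriceType VARCHAR(10) NOT NULL,
    PRIMARY KEY (FlightId, FlightNo, Cabin, PriceType),
    FOREIGN KEY (FlightId, FlightNo) REFERENCES FlightItem (FlightId, FlightNo)
);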

  • Backup tips for 100,000+ images/yr

    I shoot roughly 100,000 images a year and I am on the road quite a bit. I am currently using a Macbook Pro w/256GB SSD so space is limited.  I pretty much just keep the images from the current week before I move them to a backup drive. 
    Any suggestions to access the archive on the road?  I currently have a few Travel HD's that are roughly 2TB each. I often get requests where I need to access a specific RAW file when I am out of town.  Any Cloud based solutions? Should I explore a home server?

    Cloud Storage & Unlimited Online Backup | Livedrive

  • Costs for 1.000 rows of PL/SQL code?

    Hi to all of you!
    What do you think how much time is necessary to develop one PL/SQL package (from first analysis to end-production) with 1.000 lines of code?
    Averaged complexity, specification, .... of the requirements.
    All estimations are welcome...

Hi to all of you!
What do you think how much time is necessary to develop one PL/SQL package (from analysis to production) with 1.000 lines of code?
Depends on the task, I'd say. What is that PL/SQL package supposed to do?
Averaged complexity, specification, .... of the requirements.
Please define averaged complexity, specification, etc.
All answers are welcome...
Try this:
    SELECT TRUNC(ABS(dbms_random.NORMAL) * 10) no_of_dev_days_for_package
      FROM dual
It may be a wild guess, but is that some kind of in-house quiz?
    C.

  • Installing Lightroom 5 for first time. Chose standard previews of the 100,000+ photos on removable hard drive. Lightroom stopped creating previews after the first 10,000 or so pictures. Don't see how to start it moving forward again. Thanks!

    I think I did everything correctly. Moved pictures to the external drive as per Microsoft's instructions for Windows 8. All worked fine. Everything else in Lightroom seems to work fine. However, it just stopped creating standard size previews.

    Glad you had success. You can check to see how many preview files have been built by checking the Lightroom 5 Catalog Previews.lrdata folder with File Explorer:
    /Users/[user name]/Pictures/Lightroom/Lightroom 5 Catalog Previews.lrdata
    Right-click on the folder and select 'Properties.' Next to 'Contains' will be the file count representing the number of built previews. It should be the same as the number of pictures (100,000) or slightly more.
    Creating previews for 100,000 image files will take a long time! My Windows 7 i7-860 processor system with 21 mp Canon 5D MKII raw files takes about 3 seconds to build one standard preview. Using this number for 100,000 previews:
    100,000 x 3 seconds = 300,000 sec. = 5,000 min. = 83 hours = 3.5 days!

  • If you can figure this out, I will pay $100,000,000!!!

    If anyone can figure out how to stream videos between itunes using the "shared library", I WILL PAY YOU 100,000,000!!!!! Any videos, even video podcasts. Stream between two itunes from mac to windows

    aapl.up wrote:
    If anyone can figure out how to stream videos between itunes using the "shared library", I WILL PAY YOU 100,000,000!!!!! Any videos, even video podcasts. Stream between two itunes from mac to windows
    The answer is both easy and frustrating.
    First the easy part.
    1. Add the videos to iTunes on the Mac (you did say from Mac to PC)
    2. If the video is a Music video, set the flag appropriately
    3. Tell iTunes on the Mac to share its iTunes library over the network
    4. Make sure either you share the entire library, or a playlist that includes the video(s)
    5. Set the PC to look for shared libraries on the network
    The above is probably obvious and you have probably done this. The next bit is to address the cause of the problem.
    First, I have various music videos acquired from different sources. Some from the iTunes Store before they started charging for them, some from other sites on the Internet. All are in QuickTime compatible format and hence all play locally in iTunes. However most of them do not work when I try and access them via iTunes Sharing.
Now I had a strong idea what the cause was, but here are the possible causes that could be considered:
1. It's the wrong format (even if it can be played locally), i.e. not MPEG4 or H.264
2. It's the wrong pixel size or bit rate
3. It did not have the "Music Video" flag set
4. It's too long in duration or file size
5. It has not been prepared for streaming
    To put you out of your misery it appears to be number 5. Now I had already suspected this because I had previously seen reports that FrontRow (on a Mac) had problems playing videos from another Mac if the video had not been prepared for streaming.
    I have found two reasonably easy ways to convert videos so they are prepared for streaming.
    1. If you have QuickTime Pro (for Mac or Windows) you can export the video using the QuickTime Player and in options in the export dialog box, set it to enable the streaming option.
2. If you select the video in iTunes itself, you can tell iTunes to convert it to iPod or AppleTV format; this will also set the streaming option at the same time.
    I tested both methods using a video that previously would not work between Mac and Windows iTunes sharing and both these solutions worked. This was tested using iTunes 7.4.2 on Mac OS X 10.4.10 and iTunes 7.4.2 on Windows XP Pro.
    I look forward to receiving your cheque for $100,000,000

  • Best practice for making a report of 10,000 to 20,000 rows(OBIEE 10.3.4.1)

My scenario is like this:
Hi, I have 2 fact tables, Fact1 and Fact2, and four dimension tables D1, D2, D3, D4, plus D1.1 and D1.2. The relations in the data model are like this:
NOTE: D1.1 and D1.2 are derived from D1, so D1 might be a snowflake.
[(D1 ..1:M..> Fact1, D1 ..1:M..> Fact2), (D2 ..1:M..> Fact1, D2 ..1:M..> Fact2), (D3 ..1:M..> Fact1, D3 ..1:M..> Fact2), (D4 ..1:M..> Fact1, D4 ..1:M..> Fact2)]
Now from D1 there is a child chain like this: [D1 ..1:M..> D1.1, D1.1 ..1:M..> D1.2, D1.2 ..1:M..> D4]
Please help me model this for a report of 10,000 rows, and also let me know for which tables I need to enable caching.
PS: There shouldn't be any performance issues, so please help me with the modeling.
    Thanks in Advance for the Experts who are helping me for a while.

Shouldn't be much of a problem with just this many rows...
    Model something like this only Re: URGENT MODELING SNOW FLAKE SCHEMA
    There are various ways of handling performance issues if any in OBIEE.
Go for a caching strategy for the complete warehouse, and make sure to purge the cache after every data load. If you have aggregate calculations at a higher level, you can also go for aggregated tables in OBIEE for better performance.
    http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
Hope this is clear. Go ahead with the actual implementation and let us know in case you encounter any major issues.
    Cheers

  • 100 Phones for 185,000 People in Boise ID????

    100 Phones for 185,000 People in Boise ID????
Are they kidding? No Apple Store, only 2 AT&T stores.
    Thanks guys!
    http://boise.areaconnect.com/statistics.htm

    You must use a credit card  registered with your name and address.
    There is no other option.

  • Processing a cursor of 11,000 rows and Query completed with errors

So I have 3rd-party data that I have loaded into a SQL Server table. I am trying to determine whether the 3rd-party members reside in our database by using a cursor, going through all 11,000 rows and substituting the #Parameter values in a LIKE statement, trying to keep it pretty broad. I tried running this in SQL Server Management Studio and it churned for about 5 minutes and then just quit. I figured I was pushing the buffer limits within SQL Server Management Studio, so instead I created it as a stored procedure and checked "Discard results after execution" under Query Options/Results. This time it churned away for 38 minutes and then stopped, saying "Query completed with errors". I did throw a COMMIT in there, thinking that the COMMIT would free up resources and I'd see the table being loaded in chunks, but that didn't seem to work.
I'm kind of at a loss here in terms of trying to tie this data back.
Can anyone suggest anything?
Thanks for your review; I am hopeful for a reply.
WHILE (@@FETCH_STATUS = 0)
BEGIN
    -- Build the dynamic INSERT ... SELECT statement for the current roster row
    SET @SQLString = 'INSERT INTO [dbo].[FBMCNameMatch]' + @NewLineChar;
    SET @SQLString = @SQLString + ' (' + @NewLineChar;
    SET @SQLString = @SQLString + ' [FBMCMemberKey],' + @NewLineChar;
    SET @SQLString = @SQLString + ' [HFHPMemberNbr]' + @NewLineChar;
    SET @SQLString = @SQLString + ' )' + @NewLineChar;
    SET @SQLString = @SQLString + 'SELECT ';
    SET @SQLString = @SQLString + CAST(@FBMCMemberKey AS VARCHAR) + ',' + @NewLineChar;
    SET @SQLString = @SQLString + ' [member].[MEMBER_NBR]' + @NewLineChar;
    SET @SQLString = @SQLString + 'FROM [Report].[dbo].[member] ' + @NewLineChar;
    SET @SQLString = @SQLString + 'WHERE [member].[NAME_FIRST] LIKE ' + '''' + '%' + @FirstName + '%' + '''' + ' ' + @NewLineChar;
    SET @SQLString = @SQLString + 'AND [member].[NAME_LAST] LIKE ' + '''' + '%' + @LastName + '%' + '''' + ' ' + @NewLineChar;
    EXEC (@SQLString)
    --SELECT @SQLReturnValue
    SET @CountFBMCNameMatchINSERT = @CountFBMCNameMatchINSERT + 1
    IF @CountFBMCNameMatchINSERT = 100
    BEGIN
        COMMIT;
        SET @CountFBMCNameMatchINSERT = 0;
    END
    FETCH NEXT
    FROM FBMC_Member_Roster_Cursor
    INTO @MemberIdentity,
         @FBMCMemberKey,
         @ClientName,
         @MemberSSN,
         @FirstName,
         @MiddleInitial,
         @LastName,
         @AddressLine1,
         @AddressLine2,
         @City,
         @State,
         @Zipcode,
         @TelephoneNumber,
         @BirthDate,
         @Gender,
         @EmailAddress,
         @Relation
END
--SELECT *
--FROM [#TempTable_FBMC_Name_Match]
CLOSE FBMC_Member_Roster_Cursor;
DEALLOCATE FBMC_Member_Roster_Cursor;
GO

    Hi ITBobbyP,
As Erland suggested, you can compare all rows at once. Based on my understanding of your code, the code below should lead to the same output as yours, but with better performance than the cursor, I believe.
CREATE TABLE [MemberRoster]
(
    MemberKey INT,
    FirstName VARCHAR(99),
    LastName  VARCHAR(99)
);
INSERT INTO [MemberRoster]
VALUES
    (1, 'Eric', 'Zhang'),
    (2, 'Jackie', 'Cheng'),
    (3, 'Bruce', 'Lin');

CREATE TABLE [yourCursorTable]
(
    MemberNbr INT,
    FirstName VARCHAR(99),
    LastName  VARCHAR(99)
);
INSERT INTO [yourCursorTable]
VALUES
    (1, 'Bruce', 'Li'),
    (2, 'Jack', 'Chen');

SELECT * FROM [MemberRoster];
SELECT * FROM [yourCursorTable];

--INSERT INTO [dbo].[NameMatch]
--    ([MemberNbr],
--     [MemberKey])
SELECT y.MemberNbr,
       n.[MemberKey]
FROM   [dbo].[MemberRoster] n
JOIN   [yourCursorTable] y
       ON  n.[FirstName] LIKE '%' + y.FirstName + '%'
       AND n.[LastName]  LIKE '%' + y.LastName + '%';

DROP TABLE [MemberRoster], [yourCursorTable];
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • Displaying a report with 250 000 rows in BI Publisher 11.1.1.6 == very slow

    Hi,
I am trying to display a report with 250,000 rows in BI Publisher 11.1.1.6.
Running the SQL request in TOAD takes 20 s.
From BI Publisher 11.1.1.6 this operation takes more than 2 hours without producing a result.
The temp area shows an XML file that keeps growing (53 MB to 70 MB to 100 MB).
I configured the JVM (1.6.0_29) with the following parameters: -Xms512m -Xmx2048m -XX:MaxPermSize=3072m
    My configuration is the following :
    REHL5 64bits
    8G RAM
100 GB file system and 50 GB temp space for BI Publisher
    4 CPU
    Jdk Parameters:
-Xms512m -Xmx2048m -XX:MaxPermSize=3072m -XX:+UseParallelGC
    Total CPU usage : 25%
    Live Threads : 85 threads
Used : 665 MB
Committed : 908 MB
    GC time :
    8.047 s on PS MarkSweep (3 collections)
    8.625 s on PS Scavenge (242 collections)
Any ideas to improve performance, or any other suggestions, will be appreciated.
    Thank you
    Mams

    If you are generating a PDF output, select "PDF Compression" option in the properties. Ensure you reduce all the log levels to "Low". Ensure there are no (or minimal) calculations/formulas in the report template.

  • I bought a new computer (Windows 8) and plugged in my 2TB drive containing 100,000 songs and iTunes won't recognize 5,000 of them no matter what I do other than re-encode them.  Any ideas?

All I can figure out to do is to sort the "unknown artist" and "unknown album" tracks by album name (ironically, Windows Explorer does show the artist and album names), move each album's worth of songs to the library, where they show up as Unknown, and then re-encode the tracks so they go where they're supposed to be. But all track number info is also gone, so I have to edit each track to add track numbers. This is a nightmare of gigantic proportions that will take months to fix. It's as if these 5,000 tracks (out of 100,000) were not encoded properly to begin with, although I can't imagine how that happened since I encoded them from CDs originally. What a mess! Any ideas? Please join in.
P.S. Why do they never include iTunes itself as the option that you're having a problem with?

    See Repair security permissions for iTunes for Windows.
    tt2

  • Logical Systems for client 000 and 001

I've just installed Solution Manager 7.0 EHP1 SR1 and am going to create a new client from client 001.
Do I need to create logical systems for clients 000 and 001?
Do I need to create RFCs for these?
    Thanks,
    Daniel

No, there is no need.
You will create a logical system name for the copied client (such as 100).

  • Oracle 10g - To find the corresponding record for a certain row

    Hi all,
The scenario is like this: suppose I've got a table with 100+ columns. For a certain row in it, I need to find its corresponding record in the same table. The way I define "corresponding" here is that the two rows should be identical in all attributes and differ only in one column, say "id" (the primary key).
So how could I achieve this? What I can think of is to fetch all columns of the first row into some pre-defined variables, then use a cursor to loop over the table and match the column values of each row against those variables. But given that we've got 100+ columns in the table, this solution doesn't look practical.
Any advice is greatly appreciated. Thanks.

Something to play with, as Solomon suggested (use some other string aggregation technique if you're not on 11g yet).
You'll have to adjust the column_list accordingly; an illustration of the adjusted result follows below.
select 'select ' || column_list ||
       ' from ' || :table_name ||
       ' group by ' || column_list ||
       ' having count(*) > 1' the_sql
  from (select listagg(column_name,',') within group (order by column_id) column_list
          from user_tab_cols
         where table_name = :table_name
       )
Regards
    Etbin
    Edited by: Etbin on 25.12.2011 16:53
    Sorry, I'd better leave the forum: the title says you're 10g :(
    Providing a link for replacing listagg: http://www.sqlsnippets.com/en/topic-11787.html
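For illustration only, assuming a hypothetical table T with primary key ID and attribute columns A, B and C, and with ID removed from column_list so that rows differing only in ID fall into the same group, the generated statement would look like:

select A, B, C
  from T
 group by A, B, C
having count(*) > 1

Each group returned holds attribute values shared by two or more rows; joining back to T on those columns then gives the matching IDs.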

  • How to append a new entry in a list of 100,000 names without iterating the list each time?

    I have a list of 100,000 + names and I need to append to that list another 100,000 names. Each name must be unique. Currently I iterate through the entire list to be sure the name does not exist. As you can imagine this is very slow. I am new to Java and I am maintaining a 15+ year old product. Is there a better way to check for an existing name?

We are using a Java list because that is how the original developers coded it. I don't think they planned for that many entries. I know I need to refactor this, which is why I am asking for opinions on how to make it more efficient. Currently we don't use a database for anything in the product, so I would like to stay away from that if possible.
    Ok - but it still raises the question in my mind as to how that data is being used.
I gave you a couple of options that will take care of the UNIQUE requirement (Hashtable, HashMap; see the sketch below), but the BEST solution depends on:
    1. How often new entries are made
    2. How often entries are deleted
    3. How often entries are changed
    4. How often entries are accessed
    5. How the data is actually used
    If you just have a one time requirement to merge the two lists then just do it and get it over with - it won't really matter how you do it.
    But Hash classes will present their own performance issues if the typical access is for many, most or all of that 200k+ entries.
    Without knowing the full set of requirements we can't really know just what part of the process needs to be optimized.
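To illustrate the hash-based membership check suggested above, here is a minimal sketch; the class NameMerge, its merge method and the choice of LinkedHashSet are made up for the example, not the product's actual code:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class NameMerge {

    // Merge newNames into existing, keeping each name only once.
    // A LinkedHashSet gives O(1) average membership checks while preserving insertion order,
    // so there is no need to scan the whole list for every new entry.
    static List<String> merge(List<String> existing, List<String> newNames) {
        Set<String> seen = new LinkedHashSet<>(existing);  // one pass over the old list
        for (String name : newNames) {
            seen.add(name);                                // add() is a no-op if the name is already present
        }
        return new ArrayList<>(seen);
    }

    public static void main(String[] args) {
        List<String> oldList = Arrays.asList("alice", "bob");
        List<String> additions = Arrays.asList("bob", "carol");
        System.out.println(merge(oldList, additions));     // prints [alice, bob, carol]
    }
}

Whether a set should replace the list outright depends on the access patterns listed above.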
