VO: Large Number of Data

Hi,
I have an issue with a large data set. One of my requirements is to select all rows in the table, and Ctrl+A will quickly select them all, but if I then want to perform an operation on the selected records the server crashes, because the table has around 200,000+ rows and looping with while(rowSet.hasNext()) takes a huge amount of memory.
Can anyone please help me with this situation?
Thanks
jdev: 11.1.1.3.0

Thanks, Timo, for the reply. Almost all tables in our application have multi-select, and Ctrl+A works for all of them. The operations vary: the user can select all rows and delete them, export them programmatically, or loop through each selected row to find something, so I am wondering how we can do that.
You suggested "I normally end up doing a virtual select all, meaning that I set a flag which gets interpreted in a method (in the application module or the VO) and the method acts on all rows", but I am not clear on this. Could you please give an example?
I am also not clear on how we can select all visible rows.
Thank you so much.

Similar Messages

  • Fastest Way to Delete a large number of data

    I'm trying to delete a large amount of data.
    This proc will run every day without disturbing the other processes.
    SET @sql = 'DELETE TOP (100) FROM ' + @TableName + ' WHERE Id IN (SELECT DISTINCT Id FROM [state] WHERE ToDel = 1)';
    SET @Deleted_Rows = 1;
    SET @Rows = 0;
    WHILE (@Deleted_Rows > 0)
    BEGIN
    BEGIN TRAN;
    EXEC sp_executesql @sql;
    SET @Deleted_Rows = (SELECT @@ROWCOUNT);
    SET @Rows = @Rows + @Deleted_Rows;
    COMMIT TRAN;
    WaitFor DELAY '00:00:00:01';
    END
    Do you have any idea how I can optimize my query?
    Thank you very much

    Hi, the following are a few thoughts.
    1. If you can manage without dynamic SQL, it may yield better performance; apart from the table name, your query does not actually need it.
    2. Ensure indexes are in place.
    3. If the table in question does not participate in replication, you could go for a slightly higher batch size; again, it depends on the volume.
    Please take a look at the tweaked code.
    declare @Deleted_Rows int, @Rows int;
    SET @Deleted_Rows = 1;
    SET @Rows = 0;
    WHILE (@Deleted_Rows > 0)
    BEGIN
        BEGIN TRAN;
        -- Delete in small batches so locks and log growth stay manageable.
        DELETE TOP (100) t1
        FROM tabName as t1
        WHERE exists (Select * FROM [state] as t2 WHERE t2.ToDel = 1 and t2.Id = t1.id);
        SET @Deleted_Rows = @@ROWCOUNT;
        SET @Rows = @Rows + @Deleted_Rows;
        COMMIT TRAN;
        -- Short pause between batches to give other processes a chance.
        WaitFor DELAY '00:00:00:01';
    END
    Print 'Total rows deleted : ' + convert(varchar(50), @Rows);

  • How to optimize DELETE for large number of data?

    Hi Guys,
    I am creating a DELETE SQL statement for deleting a large amount of data, approximately a million records. When I tested it, it took a very long time to execute. Is there a way I can optimize this query?
    I highly appreciate any idea from you guys!
    Thanks,
    Jay

    JayPavia wrote:
    I am creating a DELETE SQL statement for deleting a large amount of data, approximately a million records. When I tested it, it took a very long time to execute. Is there a way I can optimize this query?
    I highly appreciate any ideas from you guys!
    Jay,
    one quite nice approach is described in the AskTom thread: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2345591157689#2356959585102
    In a nutshell: You can create a partitioned table with the same layout (using e.g. a RANGE partitioning with a single default partition), insert the remaining data into that table and swap the two segments using ALTER TABLE EXCHANGE PARTITION.
    This allows you to load the data into the partitioned table using all means available to you, e.g. a direct-path parallel insert with the NOLOGGING attribute set on the table to make it as fast as possible, and rebuilding/creating any indexes in bulk after the load has completed.
    Furthermore, at least seamless read-access to the table is possible while you're performing this operation, and you don't have to deal with renaming/granting etc. which you would need to do with a traditional CTAS approach.
    Of course, if the table needs to be online for DML while performing the DELETE you can't use these options.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
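    Below is a minimal sketch of the partition-exchange approach Randolf describes. All table, column, and flag names here are hypothetical (big_table, keep_me, etc.); it only illustrates the sequence of steps, not a production-ready script.
    -- 1. Partitioned copy of the table with a single catch-all partition and the same layout.
    CREATE TABLE big_table_part
    NOLOGGING
    PARTITION BY RANGE (id)
    (PARTITION p_all VALUES LESS THAN (MAXVALUE))
    AS SELECT * FROM big_table WHERE 1 = 0;
    -- 2. Direct-path load of only the rows that should survive the "delete".
    INSERT /*+ APPEND */ INTO big_table_part
    SELECT * FROM big_table WHERE keep_me = 'Y';
    COMMIT;
    -- 3. (Re)create any indexes on big_table_part here, then swap the two segments.
    ALTER TABLE big_table_part
    EXCHANGE PARTITION p_all WITH TABLE big_table
    WITHOUT VALIDATION;
    -- big_table now contains only the kept rows; big_table_part holds the old data and can be dropped.
    DROP TABLE big_table_part;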

  • XML file for large number of data records

    I have to create an XML file using data from multiple tables. The problem I am facing is that the data is huge (millions of records), so I was wondering whether there is an effective way of creating such an XML file.
    It would be great if you can suggest some approach to achieve my requirement.
    Thanks,
    -Vinod

    You'd probably be better off asking over in the XML DB forum, as that forum is dedicated to dealing with the XML side of things.
    The FAQ in that forum is here: {thread:id=410714}
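    The thread itself does not settle on an approach, but one common pattern for very large extracts is to stream one XML fragment per row to a file instead of building the whole document in memory. A rough PL/SQL sketch; the directory object, table, and column names are all made up:
    DECLARE
      l_file UTL_FILE.FILE_TYPE;
    BEGIN
      -- EXPORT_DIR is a hypothetical Oracle directory object the DBA would have to create.
      l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'big_export.xml', 'w', 32767);
      UTL_FILE.PUT_LINE(l_file, '<rows>');
      -- One <row> element per record, so memory use stays flat regardless of row count.
      FOR r IN (SELECT XMLELEMENT("row",
                         XMLFOREST(t.id AS "id", t.name AS "name")).getStringVal() AS x
                FROM   some_big_table t) LOOP
        UTL_FILE.PUT_LINE(l_file, r.x);
      END LOOP;
      UTL_FILE.PUT_LINE(l_file, '</rows>');
      UTL_FILE.FCLOSE(l_file);
    END;
    /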

  • Plotting an array with a large number of data points

    Hello All,
    I am collecting data every second for 24 hours. How do I plot this on an XY plot? I have the data collection set up in a while loop, and my initial attempt was to use a shift register and a Build Array to make the array, which works OK. The problem is I do not know whether this is a wise option in terms of memory usage (I end up making 3 arrays, i.e. 3*86400 = 259,200 data points in 3 shift registers).
    What is a better way to plot 86,400 data points in an XY plot? I am also writing the data to a file every second.
    Thanks,
    KB

    Allan,
    Thanks for the message. I am attaching to this message a copy of my VI, where you can see that I am collecting data from 3 counters, writing them to a file, and displaying them. Can you comment on this VI with regard to memory management?
    Yes, I would like to see your examples. Thanks once again.
    KB
    Attachments:
    PC21.vi (418 KB)

  • Efficient method to insert large number of data into table

    Hi,
    I have a procedure that accepts an input parameter containing comma-separated values.
    Something like G12-UHG,THA-90HJ,NS-98039,........ There can be more than 90,000 values in that comma-separated input parameter.
    What is the most efficient way to do an insert in this case?
    3 methods I have in mind are :
    1) Get individual tokens from CSV and use a plain old loop and do an insert.
    2) Use BULK COLLECT & FORALL. However, I don't know how to do this, since the input is not from a cursor but from a parameter.
    3) Use table collections. Again, this involves plain old looping through the collection, the same as the 1st method.
    Please do suggest the most efficient method.
    Thanks

    90,000 values?? What's the data type of the input parameter?
    you can use the string to row conversion trick if you want and do a single insert
    SQL> with t as (select 'ABC,DEF GHI,JKL' str from dual)
      2  select regexp_substr(str,'[^,]+', 1, level) list
      3  from t connect by level <= NVL( LENGTH( REGEXP_REPLACE( str, '[^,]+', NULL ) ), 0 ) + 1
      4  /
    LIST
    ABC
    DEF GHI
    JKL
    Edited by: Karthick_Arp on Feb 13, 2009 2:18 AM
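    To tie this back to the original question about BULK COLLECT / FORALL: with the string-to-row trick the whole load can be done as a single INSERT ... SELECT, so no explicit loop is needed at all. A rough sketch with made-up procedure, table, and column names (and note that for 90,000+ values the parameter would realistically need to be a CLOB or a collection rather than a plain VARCHAR2):
    CREATE OR REPLACE PROCEDURE load_codes (p_csv IN VARCHAR2) IS
    BEGIN
      -- Split the comma-separated parameter into rows and insert them in one statement.
      INSERT INTO code_table (code_value)
      SELECT REGEXP_SUBSTR(p_csv, '[^,]+', 1, LEVEL)
      FROM   dual
      CONNECT BY LEVEL <= NVL(LENGTH(REGEXP_REPLACE(p_csv, '[^,]+', NULL)), 0) + 1;
    END load_codes;
    /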

  • I have a large number of photos imported into iPhoto with the dates wrong.  How can I adjust multiple photos (with varying dates) to the same, correct, date?

    I have a large number of photos imported into iPhoto with the dates wrong.  How can I adjust multiple photos (with varying dates) to the same, correct, date?

    If I understand you correctly, when you enter a date in the Adjust Date and Time window, the picture does not update with the date you enter. If that is the case, then something is wrong with iPhoto or perhaps with your library.
    How large a date change are you putting in? iPhoto currently has an issue with date changes beyond about 60 years at a time. If the difference between the current date on the image and the date you are entering is beyond that range, that may explain why this is not working.
    If that is not the case:
    Move the following to the Trash, restart the computer, and try again:
    Home > Library > Caches > com.apple.iphoto
    Home > Library > Preferences > com.apple.iPhoto (There may be more than one. Remove them all.)
    ---NOTE: to get to Home > Library, hold down the Option key and click Go > Library in the Finder.
    Let me know the results.

  • How to show data from a table having large number of columns

    Hi ,
    I have a report with a single row having a large number of columns. I have to use a scroll bar to see all the columns.
    Is it possible to design the report in the format below (half of the columns on one side of the page, half on the other side)?
    Column1   Data      Column11   Data
    Column2   Data      Column12   Data
    Column3   Data      Column13   Data
    Column4   Data      Column14   Data
    Column5   Data      Column15   Data
    Column6   Data      Column16   Data
    Column7   Data      Column17   Data
    Column8   Data      Column18   Data
    Column9   Data      Column19   Data
    Column10  Data      Column20   Data
    I am using APEX 4.2.3 on Oracle 11g XE.

    user2602680 wrote:
    Please update your forum profile with a real handle instead of "user2602680".
    I have a report with a single row having a large number of columns. I have to use a scroll bar to see all the columns.
    Is it possible to design the report in the format below (half of the columns on one side of the page, half on the other side)?
    [column layout quoted above snipped]
    I am using APEX 4.2.3 on Oracle 11g XE.
    Yes, this can be achieved using a custom named column report template.

  • Best practice for handling data for a large number of indicators

    I'm looking for suggestions or recommendations for how to best handle a UI with a "large" number of indicators. By large I mean enough to make the block diagram quite large and ugly after the data processing for each indicator is added. The data must be "unpacked" and then decoded, e.g., booleans, offset-binary bit fields, etc. The indicators are updated once per second. I am leaning towards a method that worked well for me previously, that is, binding network shared variables to each indicator, then using several subVIs to process the particular piece of data and write to the appropriate variables.
    I was curious what others have done in similar circumstances.
    Bill
    “A child of five could understand this. Send someone to fetch a child of five.”
    ― Groucho Marx

    I can certainly feel your pain.
    Note that's really what is going on in that PNG. You can see the Action Engine responsible for updating the display to the far right.
    In my own defence: the FP concept was presented to the client's customer before they had a person familiar with LabVIEW identified, so I worked it this way through no choice of mine. I knew it would get ugly before I walked in the door and chose to meet the challenge head on anyway. Defer Panel Updates was my very good friend. The sensors these objects represent were constrained to pass info via a single ZigBee network, so I had the benefit of fairly low data rates as well, but even changing view (yes, there is a display mode that swaps what information is displayed for each sensor) updated fast enough that the user still got a responsive GUI.
    (The GUI did scale poorly, though! That is a lot of wires! I was grateful to Jack for the idea to make Align and Distribute work on wires.)
    Jeff

  • Large number of attributes reduces data display in Query (PLS HELP).

    Dear Guru's
    We have recently upgraded our system to 7.0. Currently I have a query based on a master data InfoObject (0EQUIPMENT) which contains several attributes. Due to the huge number of attributes, I cannot see data for a few of them (e.g. 0service descrip), although data exists for these attributes when I check in RSA1. My client wants to use a large number of 0EQUIPMENT numbers (nearly 10,000) in the selection screen; that is the reason the data is not shown.
    We are also on SP 16 (Note 562162, "Additional selection of attribute reduces query result set", already exists) and did not find any other relevant note.
    So, gurus, can anyone please tell me how I can get the data in the query for these attributes while keeping the same selection option for equipment numbers (10,000)?
    Dudes, please give me some good suggestions... the client has been waiting for a week.
    Thanks in adv
    Dev
    Edited by: Srinivas dev on Jun 10, 2008 11:52 AM

    Dear Yogesh,
    Thanks a lot, dude...
    I assigned points for your kind help.
    Do you have any idea about my initial post, "Large number of attributes reduces data display in Query"?
    Thanks and Regards
    Dev

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case.
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
    CASE_ID VARCHAR2(13)
    COL001 VARCHAR2(10)
    ...
    COL250 VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris
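    A sketch of how the "load to target" step could be expressed as a single set-based MERGE, with the special-value substitution folded into the USING query. This is only illustrative: the table names follow the layout above, the DECODE-based substitution and the row-level STATUS filter stand in for the real per-variable rules and per-case validity check, and Chris notes that in their tests an explicit INSERT/UPDATE actually beat MERGE:
    MERGE INTO target_table t
    USING (
      SELECT i.case_num_id,
             i.variable_id,
             -- Illustrative substitution rule for "special values"; the real rules are per variable.
             DECODE(i.variable_value, 'N/A', NULL, i.variable_value) AS variable_value
      FROM   intermediate_table i
      WHERE  i.status = 'VALID'  -- simplification: the real filter is "all rows of the case are valid"
    ) s
    ON (t.case_num_id = s.case_num_id AND t.variable_id = s.variable_id)
    WHEN MATCHED THEN UPDATE SET t.variable_value = s.variable_value
    WHEN NOT MATCHED THEN INSERT (case_num_id, variable_id, variable_value)
    VALUES (s.case_num_id, s.variable_id, s.variable_value);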

  • Open Large amount of data

    Hi
    I have a file on the application server in .dat format; it contains a large amount of data, maybe 2 million records or more. I need to open the file to check the record count. Is there any software or option for opening the file? I have tried opening it with Notepad, Excel, etc., and it gives an error.
    please let me know
    Thanks

    Hi,
    Try this..
    Go to AL11..
    Go to the file's directory. For the file there will be a field called Length, which is the total length of the file in characters.
    If you know the length of a single line, divide the length of the file by the length of a single line; I believe that will give you the number of records.
    Thanks,
    Naren
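    A quick worked example of this calculation, with made-up numbers: if AL11 reports a file length of 200,000,000 characters and each record is 100 characters long, the file holds roughly 200,000,000 / 100 = 2,000,000 records.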

  • How to handle a large number of query parameters for a Browse screen

    I need to implement an advanced search functionality in a browse screen for a large table. The table has 80+ columns and therefore will have a large number of possible query parameters. The screen will be built on a modeled query with all of the parameters marked as optional. Given the large number of parameters, I am thinking that it would be better to use a separate screen to receive the parameter input from the user, rather than a popup. Is it possible, for example, to have a search button on the browse screen (screen A) open a new screen (screen B) that contains all of the search parameters, have the user enter the parameters they want, then click a button to send all of the parameters back to screen A, where the query is executed and the search results are returned to the table control? This would effectively make screen B an advanced modal window for screen A. In addition, if the user were to execute the query, then want to change a parameter, they would need to be able to re-open screen B and have all of their original parameters still set. How would you implement this, or otherwise deal with a large number of optional query parameters in the HTML client? My initial thinking is to store all of the parameters in an object and use beforeShown/afterClosed to pass them between the screens, but I'm not quite sure how to make that work. TIA

    Wow Josh, thanks. I have a lot of reading to do. What I ultimately plan to do with this (my other posts relate to this too) is have a separate screen for advanced filtering that also allows the user to save their queries if desired.
    There is an excellent way to get at all of the query information in the Query_Executed() method. I just put an extra Boolean parameter in the query called "SaveQuery", and when it is true, the Query_Executed event triggers an entry into a table with the query name, user name, and parameter value pairs that the user entered. Upon revisiting the screen, I want the user to be able to select from their saved queries and load all the screen parameters (screen properties) from their selected query.
    I almost have it working. It may be as easy as marking all of the screen properties that are query parameters as screen parameters (not required), then passing them in from the saved query data (filtered by username, query name, and selected item). I'll post an update once I get it. Probably will have some more questions as I go through it. Thanks again!

  • How can I stabilise large transfers of data & retain full smart playlists?

    Hi,
    I hope somebody can help. Now that I have more than 20,000 songs in my iTunes, all of which I want to sync, my ipod classic 160gb goes into the 'infinite reset loop', or, when occasionally appearing to sync correctly, ends up with 0 songs, 0 video, etc. on the iPod.
    This is despite supposedly having some 22GB of unused space.
    I assumed it was my 2-and-a-half-year-old iPod succumbing to old age and bought a replacement, but the problem occurs on the new machine too.
    I have tried various solutions from the Discussion forum, e.g.:
    1) procedures for checking for hard-drive errors
    2) restoring to factory settings and then re-syncing
    3) I found one helpful tip for creating a new smart playlist to introduce the files in stages, which is supposed to stabilise the transfer of large amounts of data. While this was helpful, and appeared to stabilise the data, I was left without all my smart playlists (except the one I'd created for this exercise). Switching back to syncing the entire library, so as to regain all my smart playlists, merely made the iPod unstable again.
    I really want to be able to access my (extensive) smart playlists: I'm presuming it's the very amount of these that is contributing to the problem. Can anyone suggest a solution?
    Thanks!

    jcannon wrote:
    I am using my notebook, which only has 2GB RAM... I might need to try the workstation PC which has 8GB..
    I did read all the data using the Read from Text File VI and displayed it all in a string indicator (reading in x number of bytes/lines at a time)... How can I now plot this text/string data?
    8GB won't really help you unless you also use LabVIEW 64bit. 2GB should be OK.
    To graph it, you need to decimate it as shown in the link I gave you earlier. Most likely, your monitor has only a few thousand pixels across, so it is not worth to display more points than that. It won't look any different if done right!
    Can you show us your code? Can you attach a small sample file that has the same structure, just with less data?
    LabVIEW Champion . Do more with less code and in less time .

  • A simple way in sending message on a large number of contacts?

    Is there a simple way of sending a message to a large number of contacts? I mean, I'd send my message to a group, e.g. "hello" to the "classmates" group. I know this was possible in the iOS 4 & 5 versions. Why not now on iOS 7? How is that? Some of my contacts are changing to new numbers, so I can't add them to my inbox.

    When a device that is connected to an iCloud account has its contacts deleted, the same contacts at icloud.com are also deleted, and then the changes propagate to other computers and devices connected to the same iCloud account. In other words, the contacts are deleted everywhere. Also, any iOS backup to iCloud does not include contacts or any of the other Apple data, like calendars, reminders, etc. The only way you can get that data back is if it was on a computer which has been frequently backed up; you restore the data from the backup. (This reference to "backup" means a local backup; it has nothing to do with iCloud.)
