XY graphs underperform on large data sets

If, for example, you have 3 signals with 8 million points each and you plot them on a regular waveform graph, the user interface displays the data smoothly. All graph palette operations (zoom, scroll, etc.) respond in real time.
Put the same 3 x 8 million points on an XY graph, and you have one sluggish user interface. Scrolling, for example, is no longer practical in any fashion.
I'm sure a lot of it has to do with the overhead of having all those X values (often unnecessarily many, as discussed in this idea), but the performance degradation compared to a regular waveform graph is severe, even if the latter is fed twice as many Y values.
Are there ways around this performance issue? Sure. We can, for example, write code that decimates the data we send to the indicator and refills it when the user zooms or scrolls and therefore needs additional data points. But this requires a lot of code, and it can never become as transparent, integrated, and smooth as an implementation within the indicator itself.
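For reference, the decimation just mentioned usually takes the form of min/max bucketing: for each horizontal pixel you keep only the extreme Y values in that pixel's X range, and you re-decimate the visible range on every zoom or scroll. A minimal, illustrative sketch of the idea in Python/NumPy terms (all names are mine, not LabVIEW or any vendor API):

    import numpy as np

    def minmax_decimate(x, y, n_buckets):
        """Reduce NumPy arrays (x, y) to at most 2*n_buckets points for display.

        Keeping each bucket's min and max preserves peaks and glitches;
        call again on the visible X range after every zoom or scroll.
        """
        if len(x) <= 2 * n_buckets:
            return x, y                        # already small enough to draw
        edges = np.linspace(0, len(x), n_buckets + 1, dtype=int)
        keep = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            if lo == hi:                       # empty bucket
                continue
            seg = y[lo:hi]
            keep.append(lo + np.argmin(seg))   # bucket minimum
            keep.append(lo + np.argmax(seg))   # bucket maximum
        idx = np.unique(keep)                  # sort and deduplicate indices
        return x[idx], y[idx]

At, say, 2,000 buckets this reduces 8 million points per signal to about 4,000 drawn points while staying visually indistinguishable at full zoom-out.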
And competing products are already there, that's what bugs me right now. I've got colleagues that get such functionality "for free" with the graphing tools they have.
So, we're about to develop an XControl that makes it possible to present such large non-continuous data sets in a smooth manner. (Ironically one solution is to add data points so that I have continuous data - and then use the regular graph...) But has anyone already done this? And how far off is a native XY graph indicator that makes such code obsolete?
MTO

Have a look at the topic "Lost reference of main controller within popup".
"I hate windows popups" and MVC too.
In the newest versions there is a nice popup managed via DHTML (like Web Dynpro does), but basically you should keep a common reference to the data somewhere. You can use server-side cookies, attributes of your application class, public and static attributes of a specific controller...
Sergio

Similar Messages

  • Large data sets and key terms

    Hello, I'm looking for some guidance on how BI can help me. I am a business analyst in a health solutions firm, but not proficient in SQL. However, I have to work with large data sets that just exceed the capabilities of Excel.
    Basically, I'm having to use Excel to manually search for key terms and apply values to those results. For instance, I have a medical claims file with Provider Names, Tax ID, Charges, etc. It's 300,000 records long and 15-25 columns wide. I need to search for key terms in the provider name like Ambulance, Fire Dept, Rescue, EMT, EMS, etc. - anything that resembles an ambulance service. I also need to include abbreviations such as AMB or FD, and variations like EMT, E M T, EMS, E M S, etc. Each time I do a search, I have to filter and apply an "N/A" flag.
    That's just one key term. I also have things like Dentists or DDS, Vision, Optometry, and a dozen other provider types that need to be flagged as "N/A".
    Is this something that can be handled using BI? I have access to a BI group, but I need to understand more about the capabilities of what can be done. As an analyst, I'm having to deal with poor data integrity, so just cleaning up the file can be extremely taxing and cumbersome.
    Some insight would be very helpful. Thanks.

    I am not sure if you are looking for an explanation of different BI products; if so, this forum may not be the place to get a straight answer.
    But the Information Discovery product suite might be useful in your case. Regarding the "large data set" you mentioned, searching and analyzing 300,000 records may not be considered a large data set, at least by Endeca standards :).
    All your other requests could also be implemented very easily using Endeca's product suite. Please reach out to Oracle's Endeca product team and they can guide you on how this product suite would help you.
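    (For what it's worth, this kind of keyword flagging is also easy to script outside Excel or any BI product. A minimal sketch in Python, assuming a CSV export with a "Provider Name" column; the file name, column name, and patterns are mine, purely illustrative:)

        import csv
        import re

        # \b keeps AMB from matching inside e.g. CHAMBER; E\s*M\s*T also
        # covers the spaced variants "E M T" / "E M S".
        NA_PATTERNS = [
            re.compile(r"\b(AMBULANCE|AMB|RESCUE|FIRE\s*DEPT|FD|E\s*M\s*T|E\s*M\s*S)\b", re.I),
            re.compile(r"\b(DENTIST|DDS|DENTAL|VISION|OPTOMETR\w*)\b", re.I),
        ]

        with open("claims.csv", newline="") as src, \
             open("claims_flagged.csv", "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ["Flag"])
            writer.writeheader()
            for row in reader:
                matched = any(p.search(row["Provider Name"]) for p in NA_PATTERNS)
                row["Flag"] = "N/A" if matched else ""
                writer.writerow(row)

    (300,000 rows go through a loop like this in seconds, and the pattern list is far easier to audit than a chain of manual Excel filters.)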

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It is using a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little with different GCs, namely the parallel GC.
    Regards
    Marius

  • Working with Large data sets Waveforms

    When collecting data at a high rate (30 kHz) and for a long period (120 seconds), I'm unable to rearrange the data due to memory errors. Is there a more efficient method?
    Attachments:
    Convert2Dto1D.vi ‏36 KB

    Some suggestions:
    Preallocate your final data before you start your calculations.  The build array you have in your loop will tend to fragment memory, giving you issues.
    Use the In Place Element to get data to/from your waveforms.  You can use it to get single waveforms from your 2D array and Y data from a waveform.
    Do not use Transpose with autoindexing; it adds a copy of the data.
    Use the Array palette functions (e.g. Reshape Array) to change sizes of current data in place (if possible).
    You may want to read Managing Large Data Sets in LabVIEW.
    Your initial post is missing some information.  How many channels are you acquiring and what is the bit depth of each channel?  30kHz is a relatively slow acquisition rate for a single channel (NI sells instruments which acquire at 2GHz).  120s of data from said single channel is modestly large, but not huge.  If you have 100 channels, things change.  If you are acquiring them at 32-bit resolution, things change (although not as much).  Please post these parameters and we can help more.
    This account is no longer active. Contact ShadesOfGray for current posts and information.
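    (The preallocation advice above translates outside LabVIEW as well. A minimal NumPy sketch of the difference, using the post's 30 kHz x 120 s figures; the acquisition source is faked with random data:)

        import numpy as np

        CHANNELS, RATE, SECONDS = 3, 30_000, 120
        BLOCK = RATE                      # one second of samples per read

        # Slow: growing the array each iteration reallocates and copies,
        # fragmenting memory exactly as Build Array does inside a loop.
        #   data = np.empty((CHANNELS, 0))
        #   data = np.concatenate([data, block], axis=1)   # per iteration

        # Fast: preallocate the final buffer once, then write blocks in place.
        data = np.empty((CHANNELS, RATE * SECONDS))
        for i in range(SECONDS):
            block = np.random.rand(CHANNELS, BLOCK)   # stand-in for a DAQ read
            data[:, i * BLOCK:(i + 1) * BLOCK] = block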

  • Need to load large data set from Oracle table onto desktop using ODBC

    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.

    hillelhalevi wrote:
    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.
    Use Oracle's free SQL Developer:
    http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    You can just issue a query like this
    SELECT /*csv*/ * FROM SCOTT.EMP
    Then just save the results to a file
    See this article by Jeff Smith for other options
    http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have LabVIEW 2009 64-bit running on a Win7 64-bit OS with a dual quad-core Intel Xeon processor and 16 GB of RAM. With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2-3 GB range in RAM, since we now have access to all of the available RAM. But I am having major problems: sluggish (and stopped) operation of the program, inability to perform certain operations, etc.
    Here is how I store the 3-D data that consists of a series of images: I store each of my 2-D images in a cluster, and then have the entire image series as an array of these clusters. I then store this entire array of clusters in a queue, which I regularly access using 'Preview Queue', and then operate on the image set, subsets of the images, or single images. Then I enqueue again (see the attached diagram).
    I remember talking to LabVIEW R&D years ago and hearing that this was a good way to do things because it allowed non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3-D array without the clusters). (R&D: this is what I remember, please correct me if I'm wrong.)
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded (and, I think, disk access as well to obtain memory beyond 16 GB), I am wondering if I need a different storage strategy that will allow seamless program operation while still using RAM storage (I do not want to have to recall images from disk).
    I have other CT imaging programs that run very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application. I would like to work with LabVIEW R&D to solve it. I am wondering if I should think about establishing, say, 10 queues instead of 1 to address this. It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600-700 MB range with 64-bit LabVIEW.
    With 32-bit LabVIEW, 100-200 MB sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM. We could have used other means, such as LV2-style globals. But I believe the idea of clustering the 2-D array (image) and then having a series of those clustered arrays in an array (the final structure I showed in my diagram), versus using a 3-D array, is what allowed me to get this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 GB. I probably need to have someone examine this code while I explain things to them live. This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem. In some of my applications, I use the In Place Element structure for indexing data out of arrays to minimize data copies. I expect I might have to consider that strategy here as well. Just a thought.
    What I can do is send someone (in the US), via large file transfer, a 1.3-2.7 GB set of image data, and see how they would advise on storing and extracting the images using RAM, how best to optimize RAM usage, and how to avoid data copies. The operations that I apply to the images are irrelevant; it is the storage, movement, and extraction that are causing the problems. I can also show screenshots of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how data value references may help here, or how they have helped in one of their applications? Would using them eliminate copies? I currently have to wait for a 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don
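    (For readers outside LabVIEW: the trade-off Don describes - an array of per-image clusters versus one contiguous 3-D array - and the data value reference question both have a rough analogy in NumPy terms. Chunked storage avoids needing one giant contiguous allocation, and reference/view semantics avoid copies. Sizes below are scaled down so the sketch actually runs; this is an illustration, not LabVIEW advice:)

        import numpy as np

        N_IMAGES, H, W = 100, 512, 512

        # Contiguous: one 3-D block; the whole N*H*W allocation must fit
        # in a single contiguous region of memory.
        volume = np.zeros((N_IMAGES, H, W))

        # Chunked (the "array of clusters" idea): each image is its own
        # allocation, so no single multi-gigabyte block is ever required.
        images = [np.zeros((H, W)) for _ in range(N_IMAGES)]

        # Reference semantics: slicing yields a view, not a copy; updating
        # the view updates the original, much like working through a
        # data value reference instead of passing the array by value.
        img = volume[42]              # a view, no pixel data copied
        img += 1.0                    # in-place update, still no copy
        snapshot = volume[42].copy()  # only this line duplicates data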

  • XML Solutions for Large Data Sets

    Hi,
    I'm working with a large data set (9 million records comprising 36 gigabytes) and am exploring the use of XML with it.
    I've experimented with a JDBC app (taken straight from Steve Muench's excellent Oracle XML Applications) for writing to CLOBs, but achieve throughputs of much less than 40k/s (the minimum speed required to process the data in under 10 days).
    What kind of throughputs are possible loading XML records from CLOBs into multiple tables (using server-side Java apps)?
    Could anyone comment whether XML is a feasible possibility for this size data set?
    Regards,
    Mike
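    (For context, the 40k/s target reads as a byte throughput over the ten-day budget; this interpretation is mine, not stated in the post:

    \[ \frac{36\ \text{GB}}{10\ \text{days}} = \frac{36 \times 10^{9}\ \text{bytes}}{864{,}000\ \text{s}} \approx 41.7\ \text{kB/s} \]

    i.e. roughly 10 records per second at about 4 kB per record.)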

    Just would like to identify myself (I'm the submitter):
    Michael Driscoll <[email protected]>.

  • Just in case anyone needs an Observable Collection that deals with large data sets and supports FULL EDITING...

    The VirtualizingObservableCollection does the following:
    Implements the same interfaces and methods as ObservableCollection<T> so you can use it anywhere you’d use an ObservableCollection<T> – no need to change any of your existing controls.
    Supports true multi-user read/write without resets (maximizing performance for large-scale concurrency scenarios).
    Manages memory on its own so it never runs out of memory, no matter how large the data set is (especially important for mobile devices).
    Natively works asynchronously – great for slow network connections and occasionally-connected models.
    Works great out of the box, but is flexible and extendable enough to customize for your needs.
    Has a data access performance curve so good it’s just as fast as the regular ObservableCollection – the cost of using it is negligible.
    Works in any .NET project because it’s implemented in a Portable Class Library (PCL).
    The latest package can be found on NuGet: Install-Package VirtualizingObservableCollection. The source is on GitHub.

    Good job, thank you for sharing
    Best Regards,
    Please remember to mark the replies as answers if they help

  • Accessing large data sets via UME

    NW 7
    What is the best way to access large user data sets via the UME? Attribute mapping provides String[] for user profiles, but what is the best approach for larger user data sets?
    Say there is a list of user data exceeding 5 thousand records. What UME API/approach should be used to access this type of data? I want to use the UME API to access user data without being limited to 10 multi-valued String array attributes.
    Thanks


  • Best Version of SQL Server to Run on Windows 7 Professional for Large Data Sets

    My company will soon be upgrading my work PC from XP to Windows 7 Professional. I am currently running SQL Server 2000 on my PC and use it to load and analyze large volumes of data. I often need to work with 3 GB to 5 GB of data, and have had databases reach 15 GB in size. What would be the best version of SQL Server to install on my PC after the upgrade? SQL Server Express just won't cut it. I need more than 2 GB of data and the current version of the DTS functionality to load and transform data.
    Thanks.

    Hi,
    It's difficult to say what would be best for you. You can install SQL Server 2012 Standard edition, because that is supported on Windows 7 SP1; Enterprise edition is not supported. SQL Server 2012 Express now has a database size limit of 10 GB, which does not include FILESTREAM or log file size, but Express does not provide SSIS features.
    Just have a look at the features supported by the various editions of SQL Server; it will help you decide.
    My inclination is towards SQL Server 2012 because it is now on SP2 and more stable than 2014 (my personal opinion).
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • Loading Matrix Bound to UDO with Large Data Set

    Hello Experts,
    I have been looking on the forums for the best method out there to effectively load a Matrix that is bound to a User Defined Object (UDO).  In short, I will explain to you what I would like to do.  I have a form that has a matrix on it bound to a User Defined Object.  This matrix takes data stored in other UDO forms/tables and processes it to extract new information. 
    Unfortunately, the resulting dataset is quite large (up to 1000 rows).  I realize if this were just a "report" I could easily do this with a Grid.  I also realize if this were just a Matrix bound to a User Defined Table, I could bind it to a DataTable and perform the query that way.  However, since this is a Matrix bound to a DBDataSource (as I would like to have SAP handle any updates/finds) I believe my only options are to try and use a DBDataSource.Query method and try to work with Conditions. 
    The DBDataSource.Query method has not proven to be effective due to the complexity of the query and the multiple tables involved. I have read from others on the forum that I could load the matrix by temporarily databinding it to a DataTable and then, after it is loaded, switch the databinding back to the DBDataSource, but this does not work: it comes back with an error informing me (rightly so) that there are already rows in the matrix.
    One final option would be to use the User Interface (UI) to cycle through and update each cell of the matrix with the results of a recordset, but, as I said, this can be a large dataset and that could take hours (literally).
    In short, I was wondering if anyone out there can advise me on the most effective options I have. Is there a way to quickly load a matrix bound to a DBDataSource? Is there some way I can load the matrix by binding it to a DataTable and then quickly move this information over to the DBDataSource (I already attempted this, and the method I used was as slow as using the UI to update the matrix)? Are there effective ways to use the DBDataSource.Query method that I do not know about (I cannot find many examples of how this functionality is truly used)? Should I abandon the DBDataSource (though I believe this is the SAP-preferred method) and, if so, is there another technique to appropriately update the database? Others have mentioned handling the updates to the database themselves, but I am not sure what this means (maybe using SQL UPDATE/INSERT?). Is there a way to flush matrix information to a DBDataSource if the DBDataSource was not used in the loading and is not currently bound to the matrix?
    Sorry for the numerous questions, but thanks for the advice.

            Dim oForm As SAPbouiCOM.Form
            Dim creationPackage As SAPbouiCOM.FormCreationParams
            creationPackage = sbo_application.CreateObject(SAPbouiCOM.BoCreatableObjectType.cot_FormCreationParams)
            creationPackage.UniqueID = "MyFormID"
            creationPackage.FormType = "MyFormID"
            creationPackage.ObjectType = "UDO_TEST"
            creationPackage.BorderStyle = SAPbouiCOM.BoFormBorderStyle.fbs_Fixed
            oForm = sbo_application.Forms.AddEx(creationPackage)
            oForm.Visible = True
            oForm.Width = 300
            oForm.Height = 400
            Dim oItem As SAPbouiCOM.Item
            oItem = oForm.Items.Add("1", BoFormItemTypes.it_BUTTON)
            oItem.Top = 336
            oItem.Left = 5
            oItem = oForm.Items.Add("2", BoFormItemTypes.it_BUTTON)
            oItem.Top = 336
            oItem.Left = 80
            ' Now add an Edit box bound to DocEntry
            oItem = oForm.Items.Add("3", SAPbouiCOM.BoFormItemTypes.it_EDIT)
            oItem.Top = 5
            oItem.Left = 5
            oItem.Width = 100
            Dim oEditText As SAPbouiCOM.EditText = oItem.Specific
            oForm.DataSources.DataTables.Add("oMatrixDT")
            oItem = oForm.Items.Add("oMtrx1", SAPbouiCOM.BoFormItemTypes.it_MATRIX)
            oItem.Top = 20
            oItem.Left = 20
            oItem.Width = oForm.Width - 30
            oItem.Height = oForm.Height - 100
            Dim oMatrix As SAPbouiCOM.Matrix = oItem.Specific
            Dim oColumn As SAPbouiCOM.Column = oMatrix.Columns.Add("#", SAPbouiCOM.BoFormItemTypes.it_EDIT)
            oColumn.TitleObject.Caption = "#"
            oColumn = oMatrix.Columns.Add("oClmn0", SAPbouiCOM.BoFormItemTypes.it_LINKED_BUTTON)
            oColumn.TitleObject.Caption = "BP Code"
            Dim oLinkedButton As SAPbouiCOM.LinkedButton = oColumn.ExtendedObject
            oLinkedButton.LinkedObject = SAPbouiCOM.BoLinkedObject.lf_BusinessPartner
            ' Now bind Columns to UDO Objects in Add Mode
            oEditText.DataBind.SetBound(True, "@UDO_TEST", "DocEntry")
            oMatrix.Columns.Item("oClmn0").DataBind.SetBound(True, "@UDO_TEST1", "U_CARDCODE")
            oForm.DataBrowser.BrowseBy = "3"

  • Are Analytic Workspaces suitable for very large data sets?

    Hi all,
    I have run many different tests with analytic workspaces and have used the different features (compression, composites...). The results, especially for maintenance, are disappointing.
    I have a star schema with 6 dimensions. The fact table has 730 million rows; the first dimension has 2.9 million rows and the other 5 dimensions have between 25 and 300 rows each.
    My conclusion is that analytic workspaces don't help in situations like mine. The time for maintenance is very, very bad, not to mention the time for aggregations. I even tried to populate the cube in parts (90 million rows for the first load) but nothing changed. And there are some other problems with storage and tablespaces (I always get the message "unable to extend TEMP tablespace"; its size is 54 GB).
    Is there something I'm missing? Has anyone had a similar problem or a different opinion?
    Thank you,
    Ilias

    A few other tips to add to Keith's excellent advice:
    - How many CPUs does your server have? The answer may help you decide the optimal level to partition at (in my experience, DAY is too low and can cause different problems). What other levels does your time dimension have? Are you loading your cubes in parallel?
    - To speed up your load, partition your underlying fact table with the same granularity as your cubes and place an index on the field mapped to the partition dimension
    - Are you using 10.2.0.3? If so, be very careful with the storage data type you choose when creating your cubes. The default in 10.2.0.3 is NUMBER which has the capability of storing data to 38 significant figures. This usually exceeds what is required for most datasets. If your dataset allows you to use storage of 15 significant figures then you should create your cubes using the DECIMAL data type instead. This will use about one third of the storage space and significantly increase your build speeds (in my experience, more than 3 times faster)
    - Make sure you have preallocated enough permanent and temporary tablespaces for your build. Autoextending can be very time consuming.
    - Consider reducing the amount of aggregation you do in batch. It should not be necessary to pre-aggregate everything in order to get good query performance.
    Generally, I would say that the volume should not be a problem. A single dimension with 2.9 million values is fairly big and can be slow (in OLAP terms) to query but that should not be an obstacle to building it in the first place.
    Good luck!
    Stuart

  • Loading Large Data Set Times Out In APEX

    I am trying to load a large text file using APEX 3. I am using a 1 GB file which has about 50,000 rows. After about 5 minutes the browser times out. Any ideas? There is nothing in the alert log, so it is not database related. Here is the error in the Apache log:
    mod_plsql: Long running URL [pls/apex/wwv_flow.accept] timed out

    Steve,
    The Apache process is timing out. Most likely its Timeout directive is set to 300 in your httpd.conf (the default).
    You can extend it beyond 300, but if I were in your shoes and you had the proper access, I'd use some other means to load 1 GB of data (e.g., SQL*Loader).
    Joel
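    (If a scripted load is acceptable, the file can also be bulk-inserted outside the browser. A minimal sketch using the cx_Oracle driver's executemany; the connect string, file name, table, and column count are hypothetical, and SQL*Loader remains the faster tool for a file this size, as Joel says:)

        import csv
        import cx_Oracle

        conn = cx_Oracle.connect("scott/tiger@dbhost/orcl")
        cur = conn.cursor()
        insert_sql = "INSERT INTO my_table VALUES (:1, :2, :3)"

        with open("bigfile.csv", newline="") as f:
            reader = csv.reader(f)
            next(reader)                     # skip the header row
            batch = []
            for row in reader:
                batch.append(row)
                if len(batch) == 10_000:     # insert in batches, not row by row
                    cur.executemany(insert_sql, batch)
                    batch.clear()
            if batch:                        # flush the final partial batch
                cur.executemany(insert_sql, batch)

        conn.commit()
        cur.close()
        conn.close()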

  • Exporting and Importing Large data set of approx 300,000 rows

    Hi,
    I have a table in db1 (approx. 10 columns) and want to copy all the rows (approx. 300,000) matching a simple WHERE clause from this table to the same table in db2. Both databases are on Unix.
    I am executing this from a laptop with Windows XP, remotely connected to my office network.
    Could someone let me know the best way to do this?
    Thanks

    331991 wrote:
    > Thanks for the detailed instructions. I however have some limitations:
    > 1. Both schemas (from and to) are on the same database server. I am the
    > schema owner for both the schemas.
    A logical impossibility in Oracle. A schema is defined by its owner, therefore you cannot have two schemas owned by the same owner. SCHEMA1 is, by definition, owned by user SCHEMA1, and SCHEMA2 is, by definition, owned by user SCHEMA2.
    > But I don't have rights to create directories on the server.
    Then it is of no value to create a directory object in the database. The database's directory object is nothing but an alias referring to an actual, existing directory on the host server.
    > 2. I don't want to copy all data from db1 to db2. I want to copy data where
    > id > 5000 in the id column. Also the table in db1 has 10 columns and the
    > table in db2 has 15 columns: 10 from db1 with the same data types, and 5 more.
    > How can I export the data to my laptop from db1 and then import into db2
    > in this scenario?
    You don't. Create a dblink and copy the data directly from the source to the target:
    INSERT INTO TARGET_TABLE
      SELECT COLA, COLB, COLC, NULL, NULL, NULL, SYSDATE
      FROM SOURCE_TABLE@SOURCE_LINK
      WHERE ID > 5000;
    > Also I am using Oracle 10G.
    > Thanks

  • Looping is very slow on large data set

    Hi All,
    I need suggestions on optimizing the looping scenario below.
    Sample data is in the table below. For easy understanding I kept only 4 columns; the actual source has 56 columns.
    Input:
    #Final
    RwNum  JobSource       RuleName    PackageType
    1      Presubmission   GameRating  Xap
    2      PostSubmission  GameRating  Xap
    3      Presubmission   GameRating  NULL
    4      Presubmission   TCRRule     Xap
    5      PostSubmission  NULL        Xap
    6      Submission      NULL        Xap
    I need to iterate row by row in the above table to compare the data with the rest of the table. I.e., first get the data for RwNum = 1 and compare it with all the other rows (RwNum 2, 3, 4, 5, 6) to merge that data into the output format below, then repeat the same process for all the remaining rows.
    Expected Output:
    #Final
    RwNum  JobSource       RuleName    PackageType
    1      Presubmission   GameRating  Xap
    2      PostSubmission  GameRating  Xap
    4      Presubmission   TCRRule     Xap
    6      Submission      NULL        Xap
    In the final output, RwNum 1 is the merged result of RwNum 1 & 3. Similarly, RwNum 2 in the output is the merged result of 2 & 5, and the other records remain as is.
    So the query I wrote is below:
    WHILE (@TopRwNum <= 6)
    BEGIN
        WHILE (@InnerRwNum <= 6)
        BEGIN
            SELECT
                -- column list
            FROM #Final Fr
            JOIN #Final Ne
              ON Fr.RwNum = @TopRwNum
             AND Ne.RwNum = @InnerRwNum
             AND @TopRwNum <> @InnerRwNum
            WHERE
                -- conditional logic to compare and merge the data
        END
    END
    The above query executed in 10 secs.
    The above query works when the row count is small, but if #Final has ~1000 rows then the code has to iterate 1000 * 1000 = 1,000,000 times (the outer and inner loops each run 1000 times). Then the while-loop logic takes around 20 minutes.
    Can we optimize the above code to make the iteration faster? Any ideas would be greatly appreciated.

    Hi All,
    Thanks for your replies. Sorry if I haven't followed the forum rules; please forgive me. I kept only images as they help explain the scenario better. Now I am posting the DDL scripts and also the expected output, with some more columns. (I have used table variables here.)
    DECLARE @Tab TABLE (
        RwNum INT,
        JobParentID UNIQUEIDENTIFIER,
        JobSource NVARCHAR(MAX),
        PackageType NVARCHAR(MAX),
        UpdateType NVARCHAR(MAX),
        IsAutoPassed NVARCHAR(256),
        IsCanceled NVARCHAR(256),
        IsSkipped NVARCHAR(256),
        Result NVARCHAR(256),
        Fired NVARCHAR(256),
        RuleName NVARCHAR(256)
    );
    INSERT INTO @Tab
    SELECT 1,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         'FullUpdate','','','FALSE','Pass','FALSE','RuleValidationResult^XboxLive'
    UNION
    SELECT 2,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PostSubmission','Xap',         'PartialUpdate','FALSE','','TRUE','Failed','FALSE','RuleValidationResult^GameCategoryNameChange'
    UNION
    SELECT 3,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         '','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^XboxLive'
    UNION
    SELECT 4,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         '','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    UNION
    SELECT 5,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PostSubmission','Xap',         'PartialUpdate','FALSE','','TRUE','Failed','FALSE','PreinstalledPackage'
    UNION
    SELECT 6,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         'PartialUpdate','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    UNION
    SELECT 7,'7ca42851-c3d2-42da-b5b9-d40392ae24fb','PreSubmissionSource','Xap',         'FullUpdate','','','','Pass','','RuleValidationResult^XboxLive'
    UNION
    SELECT 1,'2004235d-af05-4e29-ab8d-50b80a088dd4','PreSubmissionSource','Xap',         'FullUpdate','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    UNION
    SELECT 2,'2004235d-af05-4e29-ab8d-50b80a088dd4','PreSubmissionSource','Xap',         'FullUpdate','TRUE','','FALSE','','FALSE','RuleValidationResult^CumulativeDownload'
    SELECT * FROM @Tab  ORDER BY 2 DESC ,1  ASC
    The original source table doesn't have a RwNum column. In order to iterate row by row over the above table, I created the "RwNum" column. This column has a unique ID for every row of a JobParentID. The RwNum column is generated as below:
    ROW_NUMBER() OVER (PARTITION BY JobParentID ORDER BY JobParentID) AS RwNum,
    The output should be as below:
    DECLARE @Output TABLE (
        RwNum INT,
        JobParentID UNIQUEIDENTIFIER,
        JobSource NVARCHAR(MAX),
        PackageType NVARCHAR(MAX),
        UpdateType NVARCHAR(MAX),
        IsAutoPassed NVARCHAR(256),
        IsCanceled NVARCHAR(256),
        IsSkipped NVARCHAR(256),
        Result NVARCHAR(256),
        Fired NVARCHAR(256),
        RuleName NVARCHAR(256)
    );
    INSERT INTO @Output
    SELECT 1,'7ca42851-c3d2-42da-b5b9-d40392ae24fb',    'PreSubmissionSource',    'Xap',         'FullUpdate','TRUE','TRUE','FALSE','Pass','FALSE','RuleValidationResult^XboxLive'
    UNION
    SELECT 2,'7ca42851-c3d2-42da-b5b9-d40392ae24fb',    'PostSubmission', 'Xap',         'PartialUpdate','FALSE','','TRUE','Failed','FALSE','RuleValidationResult^GameCategoryNameChange'
    UNION
    SELECT 5,'7ca42851-c3d2-42da-b5b9-d40392ae24fb',    'PostSubmission', 'Xap',         'PartialUpdate','FALSE','','TRUE','Failed','FALSE','PreinstalledPackage'
    UNION
    SELECT 4,'7ca42851-c3d2-42da-b5b9-d40392ae24fb',    'PreSubmissionSource',    'Xap',         'PartialUpdate','TRUE','TRUE','','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    UNION
    SELECT 1,'2004235d-af05-4e29-ab8d-50b80a088dd4',    'PreSubmissionSource',    'Xap',         'FullUpdate','TRUE','TRUE','FALSE','Pass','FALSE','RuleValidationResult^CumulativeDownload'
    SELECT * FROM @Output  ORDER BY 2 DESC ,1  ASC
    Merge rules to generate the above output:
    All the below rules must be satisfied to merge two rows.
    1) We only merge data that is related to the same JobParentID.
    2) Data in a column for any two merging rows must not conflict. I.e., data in a column of one merging row must not differ from the data in the same column of the other merging row (when both rows have a non-NULL value in that column).
    3) We can merge two rows only if the data in every column satisfies one of the below cases:
    case i) The data in the column of one row is NULL/empty and the data in the same column of the other row is a valid value, i.e. non-empty and not NULL.
    case ii) The data in a column of one row is equal to the data in the same column of the other row.
    Output analysis by applying the above rules:
    In the @Output analysis for JobParentID '7CA42851-C3D2-42DA-B5B9-D40392AE24FB':
    i) RwNum = 1 is formed by merging, per the above rules, with RwNum 3 & RwNum 7. First rows 1 and 3 are merged, and the result of those two rows is merged with RwNum 7.
    ii) RwNum = 2 is not merged with any of the rows in the table, as no row satisfies all the merge rules with RwNum = 2.
    iii) RwNum = 4 is formed by merging, per the above rules, with RwNum 6.
    iv) RwNum = 5 is not merged with any of the rows in the table, as no row satisfies all the merge rules with RwNum = 5.
    In the @Output analysis for JobParentID '2004235D-AF05-4E29-AB8D-50B80A088DD4':
    i) RwNum = 1 is formed by merging, per the above rules, with RwNum 2.
    In this way we want the rows to be merged in the table.
    Sorry if my earlier post was not understandable.
    Thanks in advance.
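    (Not the poster's code, but a minimal Python sketch of the stated merge rules, to make the algorithm concrete: group rows by JobParentID first (rule 1), then greedily merge each row into the first compatible row already kept. The column handling and the greedy left-to-right order are my assumptions; as written they reproduce the sample output above:)

        BLANK = {None, ""}

        def compatible(a, b, cols):
            # Rules 2 and 3: every column must either match or have a
            # blank/NULL on at least one side.
            return all(a[c] in BLANK or b[c] in BLANK or a[c] == b[c]
                       for c in cols)

        def merged(a, b, cols):
            # Take whichever side holds the non-blank value.
            return {c: (a[c] if a[c] not in BLANK else b[c]) for c in cols}

        def merge_group(rows, cols):
            """Greedy left-to-right merge of one JobParentID's rows."""
            out = []
            for row in rows:
                for i, kept in enumerate(out):
                    if compatible(kept, row, cols):
                        out[i] = merged(kept, row, cols)
                        break
                else:                  # no compatible row found: keep as new
                    out.append(dict(row))
            return out

    (A set-based T-SQL equivalent would join the table to itself per JobParentID with the compatibility test in the ON clause, avoiding the 1000 x 1000 procedural loop.)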
