Best Query Approach

I have a scenario where I need to display data combined from two tables. The first table, named 'DayTable', holds the daily plan and actual; the second, named 'MonthTable', holds the monthly plan and actual. I need to display the last 6 months' data plus the current month's daily data.
The desired output is a single result set with the columns PlanDate, [PastTrend - Plan], [PastTrend - Actual], [Current - Plan], and [Current - Actual].
I wrote the query in two ways: one using a temp table and one without. Please suggest the best approach. Details are below.
Approach 1: Using Temp Table
Declare @startdate date = CONVERT(DATE, DATEADD(dd, -DAY(DATEADD(MONTH, 0, GETDATE())) + 1, DATEADD(MONTH, 0, GETDATE())))
Declare @endDate date = DATEADD(DAY, -DAY(DATEADD(MONTH, 1, GETDATE())), DATEADD(MONTH, 1, GETDATE()))
CREATE TABLE #TEMP
(
    PlanDate NVARCHAR(100),
    [PastTrend - Plan] INT,
    [PastTrend - Actual] INT,
    [Current - Plan] INT,
    [Current - Actual] INT
);
;WITH cte AS
(
    SELECT @startdate AS sDate
    UNION ALL
    SELECT DATEADD(DAY, 1, sDate) FROM cte WHERE DATEADD(DAY, 1, sDate) <= @endDate
)
INSERT INTO #TEMP
SELECT
REPLACE(CONVERT(CHAR(6), A.sDate, 106),' ',' - ') PlanDate
,NULL AS [PastTrend - Plan]
,NULL AS [PastTrend - Actual]
,SUM(B.PlanQuantity) AS [Current - Plan]
,SUM(B.Actual) AS [Current - Actual]
FROM cte A
LEFT OUTER JOIN DayTable B
ON A.sDate = CONVERT(DATE,B.PlanDate)
GROUP BY A.sDate
SELECT *
FROM
(
SELECT
CONVERT(CHAR(3), datename(month,PlanMonth)) + ' ' + RIGHT(CONVERT(VARCHAR(4), YEAR(PlanMonth)), 2) AS PlanDate
,SUM(PlanQuantity) AS [PastTrend - Plan]
,SUM(Actual) AS [PastTrend - Actual]
,NULL AS [Current - Plan]
,NULL AS [Current - Actual]
FROM
MonthTable
WHERE CONVERT(DATE, PlanMonth) >= CONVERT(DATE, DATEADD(dd, -DAY(DATEADD(MONTH, 0, GETDATE())) + 1, DATEADD(MONTH, -6, GETDATE())))
group by PlanMonth
UNION ALL
SELECT
PlanDate
,[PastTrend - Plan]
,[PastTrend - Actual]
,[Current - Plan]
,[Current - Actual]
FROM
#TEMP
) T1
DROP TABLE #TEMP
With the actual execution plan, the query cost (relative to the batch) is 90%.
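As a side note, if the target server is SQL Server 2012 or later (an assumption; the post doesn't state the version), the month-boundary variables above can be written much more simply:

```sql
-- First and last day of the current month (SQL Server 2012+ only).
DECLARE @startdate date = DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1);
DECLARE @endDate   date = EOMONTH(GETDATE());
```

DATEFROMPARTS and EOMONTH express the intent directly, whereas the nested DATEADD/DAY form is easy to get wrong.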
Approach 2: Without Temp Table
Declare @startdate date = CONVERT(DATE, DATEADD(dd, -DAY(DATEADD(MONTH, 0, GETDATE())) + 1, DATEADD(MONTH, 0, GETDATE())))
Declare @endDate date = DATEADD(DAY, -DAY(DATEADD(MONTH, 1, GETDATE())), DATEADD(MONTH, 1, GETDATE()))
;WITH cte AS
(
    SELECT @startdate AS sDate
    UNION ALL
    SELECT DATEADD(DAY, 1, sDate) FROM cte WHERE DATEADD(DAY, 1, sDate) <= @endDate
)
SELECT
T1.PlanDate
,T1.[PastTrend - Plan]
,T1.[PastTrend - Actual]
,T1.[Current - Plan]
,T1.[Current - Actual]
FROM
(
SELECT
A.sDate AS OriginalDate
,REPLACE(CONVERT(CHAR(6), A.sDate, 106),' ',' - ') PlanDate
,NULL AS [PastTrend - Plan]
,NULL AS [PastTrend - Actual]
,SUM(B.PlanQuantity) AS [Current - Plan]
,SUM(B.Actual) AS [Current - Actual]
FROM cte A
LEFT OUTER JOIN DayTable B
ON A.sDate = CONVERT(DATE,B.PlanDate)
GROUP BY A.sDate
UNION ALL
SELECT
PlanMonth AS OriginalDate
,CONVERT(CHAR(3), datename(month,PlanMonth)) + ' ' + RIGHT(CONVERT(VARCHAR(4), YEAR(PlanMonth)), 2) AS PlanDate
,SUM(PlanQuantity) AS [PastTrend - Plan]
,SUM(Actual) AS [PastTrend - Actual]
,NULL AS [Current - Plan]
,NULL AS [Current - Actual]
FROM
MonthTable
WHERE CONVERT(DATE, PlanMonth) >= CONVERT(DATE, DATEADD(dd, -DAY(DATEADD(MONTH, 0, GETDATE())) + 1, DATEADD(MONTH, -6, GETDATE())))
group by PlanMonth
) T1
ORDER BY T1.OriginalDate
Here the query cost (relative to the batch) is 100%.
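Both approaches build the day list with a recursive CTE, which generates one row per recursion step. A set-based alternative (a sketch; it assumes sys.all_objects has at least as many rows as days in the month, which it always does) avoids the recursion entirely:

```sql
-- Set-based day list for the current month; no recursion, no MAXRECURSION concerns.
DECLARE @startdate date = CONVERT(DATE, DATEADD(DAY, -DAY(GETDATE()) + 1, GETDATE()));
DECLARE @endDate   date = DATEADD(DAY, -DAY(DATEADD(MONTH, 1, GETDATE())), DATEADD(MONTH, 1, GETDATE()));

SELECT DATEADD(DAY, v.n, @startdate) AS sDate
FROM (SELECT TOP (DATEDIFF(DAY, @startdate, @endDate) + 1)
             ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
      FROM sys.all_objects) AS v;
```

This produces the same sDate column the cte produces and can be dropped into either approach as a derived table.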
So can you suggest the best one? Actually, I wrote the second method to avoid temp tables: if anything fails after the temp table is created, it has to be dropped again. To avoid such problems, and for simplicity, I prefer the second one. But now I am confused about which one is better performance-wise as well.
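One caution on the 90% / 100% figures: the "Query cost (relative to the batch)" shown in SSMS is based on the optimizer's estimated costs, even when viewing an actual execution plan, so it is not a reliable way to rank two queries. A more direct measurement (a sketch; run each approach in its own batch and compare the figures):

```sql
-- Measure the real work done by each batch instead of comparing estimated plan cost.
SET STATISTICS IO ON;    -- logical/physical reads per table
SET STATISTICS TIME ON;  -- CPU and elapsed time

-- run Approach 1 here, note the output on the Messages tab,
-- then run Approach 2 the same way

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

Logical reads and CPU/elapsed time give a much better basis for choosing between the two than the relative-cost percentages.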

I will try to change it as per your suggestions. Actually, the presentation here is fully dynamic and the headers always come from the column names, which is why I name the columns with a hyphen.
I have added the schema design of the tables. The application is multilingual, so we use nvarchar. These are aggregated tables; the data is populated daily or monthly by a SQL job.
CREATE TABLE [dbo].[MonthPlan](
[IPCMonthlyVsActualId] [int] IDENTITY(1,1) NOT NULL,
[IPCPlanMonth] [datetime] NOT NULL,
[ModelId] [int] NOT NULL,
[ModelName] [nvarchar](50) NOT NULL,
[PlanQuantity] [int] NOT NULL,
[Actual] [int] NOT NULL,
[DifferenceQuantity] [int] NOT NULL,
[ReasonCode] [nvarchar](max) NULL,
[Remarks] [nvarchar](max) NULL,
[Createdby] [int] NOT NULL,
[CreatedDate] [datetime] NOT NULL,
[CreatedIP] [nvarchar](30) NOT NULL,
[Approval] [int] NULL,
[UnApproval] [int] NULL,
[ModelTypeName] [varchar](100) NULL,
[ModelCategoryName] [varchar](100) NULL,
[ModelColorName] [varchar](100) NULL,
[ModelGroupName] [varchar](100) NULL,
[ModelLocationName] [varchar](100) NULL,
[ModelTypeId] [int] NULL,
[ModelCategory] [int] NULL,
[ModelColor] [int] NULL,
[ModelGroup] [int] NULL,
[ModelLocation] [int] NULL
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[DayTable](
[IPCDailyVsActualId] [int] IDENTITY(1,1) NOT NULL,
[IPCPlanDate] [datetime] NOT NULL,
[ModelID] [int] NOT NULL,
[ModelName] [nvarchar](50) NOT NULL,
[PlanQuantity] [int] NOT NULL,
[Actual] [int] NOT NULL,
[DifferenceQuantity] [int] NOT NULL,
[ValuesetparameterId] [int] NULL,
[ReasonCode] [nvarchar](20) NULL,
[Remarks] [nvarchar](200) NULL,
[CreatedBy] [int] NOT NULL,
[CreatedDate] [datetime] NOT NULL,
[CreatedIP] [nvarchar](30) NOT NULL,
[Cell1Cell2Map] [int] NULL,
[Cell3Actual] [int] NULL,
[FB] [int] NULL,
[Remarks2] [nvarchar](max) NULL,
[Remarks3] [nvarchar](max) NULL,
[Remarks4] [nvarchar](max) NULL,
[Remarks5] [nvarchar](max) NULL,
[Remarks6] [nvarchar](max) NULL,
[Remarks7] [nvarchar](max) NULL,
[Remarks8] [nvarchar](max) NULL,
[Remarks9] [nvarchar](max) NULL,
[Remarks10] [nvarchar](max) NULL,
[Remarks11] [nvarchar](max) NULL,
[Remarks12] [nvarchar](max) NULL,
[UpdatedBy] [int] NULL,
[UpdatedDate] [datetime] NULL,
[UpdatedIP] [nvarchar](30) NULL,
[ClosureFlag] [bit] NULL,
[UnApproval] [int] NULL,
[ModelTypeName] [varchar](100) NULL,
[ModelCategoryName] [varchar](100) NULL,
[ModelColorName] [varchar](100) NULL,
[ModelGroupName] [varchar](100) NULL,
[ModelLocationName] [varchar](100) NULL,
[ModelTypeId] [int] NULL,
[ModelCategory] [int] NULL,
[ModelColor] [int] NULL,
[ModelGroup] [int] NULL,
[ModelLocation] [int] NULL
) ON [PRIMARY]

Similar Messages

  • What is best performing approach to report building in my case?

    Hi all,
    I want to know what is the best performing approach in the case of an overload of the system,
    understood as large number of concurrent operations.
    Each operation is a query that, in most cases, returns a large amount of data.
    I am interested in the approach that not create bottlenecks and slow down, blocks for a long time the system.
    The alternatives that I would like more information about are:
    1) reports built with the JDBC (JNDI) specifying "java:jdbc/xxxxdatasource"
    (taken from the oracle-ds.xml's jndi-name tag) as "Connection name (optional)"
    with the query written into the rpt file and runned by Crystal Reports that I think makes a direct connection to DB
    and integrated into Java with Java Reporting Component.
    This approach has also threads limits, depending on the version of the report engine.
    2) reports built with "Field definition only" with the query written and runned into the my application that call the report only through the resultSet to be displayed
    (reportClientDoc.getDatabaseController().setDataSource(resultSet, tableName , tableName);)
    My concern with this approach is that it seems to require loading all results into memory
    and generating the report in one big step.
    Is there a way to avoid this? Some-how to page through report data?
    I've also read that Crystal Reports can work with any data provider that implements ResultSet.
    Is this true? If so, could I create my own custom ResultSet implementation that would let me
    page through my results without loading everything into memory at-once?
    If possible, please point me to the documentation for this approach.
    I haven't been able to find any examples.
    If there is a better approach that I haven't mentioned, please let me know.
    Thanks in advance

The first option is the best one for performance. The only time you should use result sets is when you need to do runtime manipulation of the data through your application and it is not achievable in a stored procedure.

  • Best possible approach to add fields in an FPM WDA application

    Hi guys,
    Our SRM 7 has FPM based WDA views. Extending the customer include by appending fields alone will not work for us as some fields are dropdowns that will perform actions and display/hide other custom fields.
I can think of the enhancement framework to enhance the views. Since I am new to WDA and FPM, I am hoping there is an easier way to do this.
    What is the best possible approach to fulfil this requirement?
    Regards
    Ali

    Thanks to responses from Chaitanya and Thomas.
    I managed to create additional fields(with action assigned) in the standard shopping cart screens.
I extended the SRM header structure, added the custom fields of this structure in the header node of the context, and by binding the property of these fields to the attributes, I don't have to code for enabling/disabling the fields when the transaction is called in edit/create/display mode.
    However, I am still struggling with the dropdown(with action). They remain enabled, even when the data has been saved or the transaction is in display mode. Hints?
    Regards, Ali

  • Best practice approach for seperating Database and SAP servers

    Hi,
    I am looking for a best practice approach/strategy for setting up a distributed SAP landscape i.e separating the database and sap servers. If anyone has some strategies to share.
    Thanks very much

The easiest way I can imagine:
    Install a dialog instance on a new server and make sure it can connect nicely to the database. Then shut down the CI on the database server, copy the profiles (and adapt them) and start your CI on the new server. If that doesn't work at the first time you can always restart the CI on the database server again.
    Markus

  • Which would be the best NAC approach with thin clients up today?

    Hello guys,
    Which would be the best NAC approach using CAM/CAS infrastructure for thin clients? I guess nac agent is still not supported on thin clients/virtual desktops right?
    Would mac address authentication be the best option?
    Regards,
    Emilio


  • Choosing the best query

    Hi there,
Can someone help me select the best query from the following two queries:
    QUERY 1:
    select p.name, d.descr
    from table1 p, table2 d
    where p.cod_table1 = d.cod_table2(+) and d.dom(+) ='T';
    ID Query Plan
    0 SELECT STATEMENT Cost = 5
    1 HASH JOIN OUTER
    2 TABLE ACCESS FULL table1
    3 TABLE ACCESS BY INDEX ROWID table2
    4 INDEX RANGE SCAN DDOM_DOM_FK                     
    QUERY 2:
    select p.name, (select d.descr from table2 d where p.cod_table1 = d.cod_table2 and d.dom='T') as descr
    from table1 p;
    ID Query Plan
    0 SELECT STATEMENT Cost = 2
    1 TABLE ACCESS FULL table1                    
I'm a bit surprised about the explain result of the second query, because there is no reference to a table2 access. What's happening?
    Tks in advance for any tip,
    Helena.

I've noticed that issue with the query plan for those types of queries (in 8.1.7; it may be better in a later database version).
    I think the queries will take the same time.
    However if you are basing a view on the query, the second would be better as the lookup table will only be accessed if the descr is selected from the view, and performance will therefore be better for queries not accessing the descr column.
    [I asked pretty much the same question on asktom.oracle.com]

  • Looking for a best query for multiple IF Else statement in a single select

    Hi
    I want to run multiple IF Else statements in a single select SQL, each statement is one SQL operating on the same table, what is the best way to write this select SQL query ?
If it is PL/SQL: when I get a result from the first IF statement I will skip the remaining execution, and so on. Can anyone help me with this?
    Thanks in advance !!

    965818 wrote:
    I Apologize, the information i have given might not be enough.
    This is my scenario,
    I am selecting set of rows from the table for the employee id. After selecting those records,
    i need to go through the result list and check the condition 1, if it is met, i will return that employee record.
    If that condition 1 is not met, then i need to go through the condition 2. If that is met, i will return that record.
    Like wise, i have four conditions.
I am trying to achieve this in a single SQL statement. If I am not clear, please let me know.
Not fully clear yet, but the picture is better already. The thing with SQL is that you should stop thinking procedurally. Instead, think in data sets.
    For example if the task is:
    Find all managers that work in sales.
    Procedural thinking would work like this:
    pseudo code
    Loop over all employees that work in sales
       for each row
           check if it is a manager
               if manager
                  then return record
               else
                  do nothing
               end
end loop
Thinking in data sets will result in a different logic:
    pseudo code
    select all employees
    where department = SALES
and job = MANAGER
The advantage here is that all the "do nothing" loops are not needed. Those rows are already eliminated by the database.
    So what is needed to help you? Give the full picture. What is your task that you try to solve. From a business perspective.
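The "first matching condition wins" requirement from the original question can also be handled in one statement by ranking rows on condition priority. A sketch, assuming a hypothetical employees table with flag-style columns cond1..cond4 (all names here are placeholders, not from the original thread):

```sql
-- Return the single record that satisfies the highest-priority condition.
SELECT *
FROM (
    SELECT e.*,
           ROW_NUMBER() OVER (
               ORDER BY CASE
                            WHEN e.cond1 = 1 THEN 1   -- highest priority
                            WHEN e.cond2 = 1 THEN 2
                            WHEN e.cond3 = 1 THEN 3
                            WHEN e.cond4 = 1 THEN 4
                            ELSE 5
                        END) AS rn
    FROM employees e
    WHERE e.employee_id = 12345   -- hypothetical filter
) ranked
WHERE rn = 1;
```

This keeps the whole decision in one set-based statement instead of four sequential IF/ELSE queries.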

  • Best as3 approach for this website...

    Hi everyone!
This post will be a little bit long and I thank you in advance for your patience and help. Sorry for any (or all) English errors; I'm Brazilian.
Well, I want to reconstruct a website I have, but now using as many external .swf and .as files as I can. I also need to change the way it works.
    I have problems with loaders but I'm decided to dive into the subject and put it into practice here, for the first time.
    The website will be really simple and here I have a scheme I just made to help you understand what are my doubts.
    How this is going to work?
    - When the user clicks a button, the current section will slide out (maybe unload?) while  the new section slide in. This is the basic functionality.
    - The website will have a home page when the user clicks the company logo.
    - In the scheme you see all 4 sections but only one will be visible. The others will be hidden by a  mask.
To start, I guess the best thing I can do is build each section in a separate .fla file to have 4 external .swfs to load.
    Then I thought about loading all 4 sections inside a sprite container. Then, "onSectionClick" slide the sprite to the right X position to show the correspondent section.
Now, what's the best approach to achieve that? The obvious part is that each section should load only when the user clicks a button. Should each section have its own loader? When is the user going to see the "loading section" message or progress bar? Should I slide the section in onLoadingComplete, or should I slide it in and show the message/progress bar? Should I unload the current section when it slides out?
    I also thought about working with two sprites. One to loadSection and slideIn and another to load another section. Then, onSectionClick the currentSprite slideOut while the other slideIn and on background the first sprite unloads its content, change its X position and get ready to load another section when requested.
    I think that's all.
    I have this thing working but it's terrible because all sections load at once and there's no loading progress or whatsoever.
    And thanks again!

    thanks kglad!
    So every section will become a distinct part of the site.
    I was really thinking about making every section an external .swf instead of exporting movieClips from stage (linkage option). Which one is better?
    As I said, Andrei helped me with the loading process. Actually he's still helping me but with what he already taught me I think I'll be able to do the loading thing.
    Anyway, to make my life easier, imagine that all sections will tween to stage from the right side of it... to do that I'll have an onLoad method that will run when the loading process has finished. This method will call other two methods, slideOut and slideIn.
    slideIn will only bring the loaded .swf loader on stage (from the right side) while the slideOut will push the current .swf out of stage (to the left side), unload it and finally be positioned at the other side of the stage (right) to receive the next section when requested?
    Is this the best way to do this?
    I think this is so confusing that not even you will understand what I'm trying to say.

  • Best Possible Approach : Service Broker or replication?

    I have a business scenario. I have an application whose DB resides on SQL Server A. My application resides on SQL server B. Both SQL servers are on the same network. Server A has a table that is being constantly updated via a web application. We want that
    table to be on Server B. At present it is updated nightly from Server A to Server B.
    We want the changes to be replicated to server B in almost real time. 30 minutes delay could work.
    Now considering this business case what would eb my best options:
    1. Implement CDC on server A for the interested table ( Please note that CDC is only for 1 table, I'm not sure if this is possible) . Bring the table from Server A once to Server B using a Linked Server. Then use the CDC from server A to Merge and Update
    on table on Server b using a SP and Schedule this SP after 30 minutes.
    OR
    2. Use CDC and Service combination.
    OR
    3. use Replication.
    OR  ...???
    Any help will greatly be appreciated.

Snapshot replication is an OK solution here; you would need to set the snapshot period to 30 minutes or less. If you want changes to be populated immediately, use transactional replication. Service Broker is the best approach in the sense that you wouldn't have to maintain the infrastructure associated with establishing replication, but using Service Broker will require some expertise. Also take a look at the SnipeDB framework (SnipeDB.com): it would not replicate the table, but would allow any subscriber application to get an update on any change in your data.
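On option 1 from the question: CDC can indeed be enabled for just one table. A sketch of the setup on Server A (assumes SQL Server Agent is running; the database and table names are placeholders):

```sql
-- Enable CDC at the database level first, then for one specific table.
USE SourceDb;                                -- hypothetical database name
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'InterestingTable',   -- hypothetical table name
     @role_name     = NULL;                  -- no gating role required to read changes
```

Once enabled, the generated cdc.fn_cdc_get_all_changes_* function can feed the scheduled merge procedure described above.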

  • Looking for best design approach for moving data from one db to another.

    We have a very simple requirement to keep 2 tables synched up that live in 2 different databases. There can be up to 20K rows of data we need to synch up (nightly).
    The current design:
    BPEL process queries Source DB, puts results into memory and inserts into Target DB. Out of memory exception occurs. (no surprise).
    I am proposing a design change to get the data in 1000 row chunks, something like this:
    1. Get next 1000 records from Source DB. (managed through query)
    2. Put into memory (OR save to file).
    3. Read from memory (OR from a file).
    4. Save into Target DB.
    Question is:
1. Is this a good approach, and if so, does SOA have any built-in mechanisms to handle this? I would think so, since I believe this is a common problem - we don't want to reinvent the wheel.
    2. Is it better to put records into memory or writing to a file before inserting into the Target DB?
    The implementation team told me this would have to be done with Java code, but I would think this would be out of the box functionality. Is that correct?
    I am a SOA newby, so please let me know if there is a better approach.
    Thank you very much for your valued input.
    wildeman

    Hi,
    After going through your question, the first thing that came to my mind is what would be the size of the 20K records.
    If this is going to be huge then even the 1000 row logic might take significant time to do the transfer. And I think even writing it to a file will not be efficient enough.
    If the size is not huge then probably your solution might work. But I think you will need to decide on the chunk size based on how well your BPEL process will work. Possible you can try different size and test the performance to arrive at an optimal value.
    But in case the size is going to be huge, then you might want to consider using ETL implementations. Oracle ODI does provide such features out of the box with high performance.
    On the other hand, implementing the logic using the DBAdapter should be more efficient than java code.
    Hope this helps. Please do share your thoughts/suggestions.
    Thanks,
    Patrick

  • What is the best MVC approach for designing Swing application

    Hi...
I am designing a client/server application in which I am using Java Swing for the front-end development. We decided on the MVC approach for the front-end design. I found that there is more than one approach to designing with MVC.
Which is the best way to model: creating your model taking the view into consideration, or creating the model taking the tables into consideration?
Can anybody help me out with this? It is urgent. Thanks
    sai

    I'm not sure what you are asking, so I'll just ramble a bit and hope it helps.
    Create your model taking the view and the data into consideration. :-)
    Design a class to hold instances of the rows in your database that correspond (more or less) to the structure of the data. Add any needed methods to that data object to support the data that you might need for the view.
    For example, suppose your database stores a name in two columns (firstname and lastname). You pull that data from the database into a collection of Person objects into your model. Your Person class has two fields (and set/get methods) to hold that data. Now, in your view, you want to display the name as one string, so add a method to the Person object to get the full name (just by concatenating the two data elements).
    You have to take everything into consideration. How you are going to view the data as well as how it might be structured.

  • MDS Best Practice Approach - Sample HR Scenario

    Thanks for taking to time to read my MDS requirement...just looking for a better way to go about it.
    Here is the requirement: 
Every month the CEO releases an Excel list of approved employment positions that can be filled by HR. The HR dept wants to be able to add positions that the CEO approves and remove positions that the CEO feels are no longer necessary. The recruiting group wants to track/modify this master list of positions per the CEO's discretion and assign employees to each position as people are hired/terminated.
    The HR data steward must be able to:
    -when a position is filled, must be enabled to assign employees to the positions for org chart reporting
    -they need the ability to assign/reassign parent child relationships for any position i.e. the Director Position manages multiple Manager positions which manage multiple Register Clerk positions.
I am new to MDS and am initially not sure how to approach this problem. Do I create one entity for 'Positions' and another for 'Employees'? I'm thinking with that approach I can create Employee as a domain-based attribute of Position, then create a derived hierarchy for the Position parent/child relationships. Just wondering if this is a good approach.
    Are there other things I should be taking into consideration?  Thanks!

If your material list document is not excessively long, it probably wouldn't be too much overhead to add a few extra columns using the CalculatedColumn action block. These extra columns, even though the number would be the same for all rows, could contain a column for each of your aggregated functions.
Then in your iGrid just set the column width for these additional fields to zero, and upon the UpdateEvent of the grid you could JavaScript them from row number 1 to your desired HTML elements, etc.

  • Best query to get the latest version

    Hi,
    I have a table as below:
    col1 col2 col3.... col_ver col7
I have a combination of values for col1 to col6, and values in col7 that differ for each value in col_ver, the version column.
For a particular combination (col1 to col6) I want to get the rows with the maximum version. Can anyone suggest the best SQL for this?
I have 2 options, shown below. Please suggest from a performance point of view which would be better (I don't have enough rows to test), and whether there is any other better way.
    option 1:
    SELECT col1, col2, col3, col4, col5, col6, col_ver, col7
    FROM
    (SELECT col1, col2, col3, col4, col5, col6, col_ver, col7, MAX(col_ver) OVER (PARTITION BY col1, col2, col3, col4,
    col5, col6) MVER
    FROM tabA )
    WHERE col_ver = MVER;
    option2:
    SELECT col1, col2, col3, col4, col5, col6, col_ver, col7
    FROM tabA A
    WHERE col_ver = (SELECT MAX(col_ver) FROM tabA B
    WHERE
    B.col1 = A.col1
    AND B.col2 = A.col2
    AND B.col3 = A.col3
    AND B.col4 = A.col4
    AND B.col5 = A.col5
    AND B.col6 = A.col6);
Thanks a lot in advance

    SQL> with t as
      2  (select 'a1' col1, 'b1' col2, 'c1' col3, 'd1' col4, 1 col_ver,  100 col7
      3     from dual
      4   union all
      5   select 'a1' col1, 'b2' col2, 'c1' col3, 'd1' col4, 1 col_ver,  101 col7
      6     from dual
      7   union all
      8   select 'a1' col1, 'b2' col2, 'c2' col3, 'd1' col4, 1 col_ver,  102 col7
      9     from dual
    10   union all
    11   select 'a1' col1, 'b2' col2, 'c2' col3, 'd2' col4, 1 col_ver,  103 col7
    12     from dual
    13   union all
    14   select 'a2' col1, 'b1' col2, 'c1' col3, 'd1' col4, 1 col_ver,  104 col7
    15     from dual
    16   union all
    17   select 'a2' col1, 'b2' col2, 'c1' col3, 'd1' col4, 1 col_ver,  105 col7
    18     from dual
    19   union all
    20   select 'a2' col1, 'b2' col2, 'c2' col3, 'd1' col4, 1 col_ver,  106 col7
    21     from dual
    22   union all
    23   select 'a2' col1, 'b2' col2, 'c2' col3, 'd2' col4, 1 col_ver,  107 col7
    24     from dual
    25   union all
    26   select 'a1' col1, 'b1' col2, 'c1' col3, 'd1' col4, 2 col_ver,  108 col7
    27     from dual
    28   union all
    29   select 'a1' col1, 'b2' col2, 'c2' col3, 'd1' col4, 2 col_ver,  109 col7
    30     from dual
    31   union all
    32   select 'a1' col1, 'b2' col2, 'c2' col3, 'd2' col4, 2 col_ver,  110 col7
    33     from dual
    34   union all
    35   select 'a2' col1, 'b2' col2, 'c1' col3, 'd2' col4, 2 col_ver,  111 col7
    36     from dual
    37   union all
    38   select 'a2' col1, 'b2' col2, 'c2' col3, 'd2' col4, 2 col_ver,  112 col7
    39     from dual
    40   union all
    41   select 'a1' col1, 'b2' col2, 'c2' col3, 'd2' col4, 3 col_ver,  113 col7
    42     from dual
    43   union all
    44   select 'a2' col1, 'b1' col2, 'c1' col3, 'd1' col4, 3 col_ver,  114 col7
    45     from dual)
    46  select * from t;
    CO CO CO CO    COL_VER       COL7
    a1 b1 c1 d1          1        100
    a1 b2 c1 d1          1        101
    a1 b2 c2 d1          1        102
    a1 b2 c2 d2          1        103
    a2 b1 c1 d1          1        104
    a2 b2 c1 d1          1        105
    a2 b2 c2 d1          1        106
    a2 b2 c2 d2          1        107
    a1 b1 c1 d1          2        108
    a1 b2 c2 d1          2        109
    a1 b2 c2 d2          2        110
    a2 b2 c1 d2          2        111
    a2 b2 c2 d2          2        112
    a1 b2 c2 d2          3        113
    a2 b1 c1 d1          3        114
    15 rows selected.
SQL> -- from here we only need to get those with the latest value for the col_ver
    SQL> with t as
      2  (select 'a1' col1, 'b1' col2, 'c1' col3, 'd1' col4, 1 col_ver,  100 col7
      3     from dual
      4   union all
      5   select 'a1' col1, 'b2' col2, 'c1' col3, 'd1' col4, 1 col_ver,  101 col7
      6     from dual
      7   union all
      8   select 'a1' col1, 'b2' col2, 'c2' col3, 'd1' col4, 1 col_ver,  102 col7
      9     from dual
    10   union all
    11   select 'a1' col1, 'b2' col2, 'c2' col3, 'd2' col4, 1 col_ver,  103 col7
    12     from dual
    13   union all
    14   select 'a2' col1, 'b1' col2, 'c1' col3, 'd1' col4, 1 col_ver,  104 col7
    15     from dual
    16   union all
    17   select 'a2' col1, 'b2' col2, 'c1' col3, 'd1' col4, 1 col_ver,  105 col7
    18     from dual
    19   union all
    20   select 'a2' col1, 'b2' col2, 'c2' col3, 'd1' col4, 1 col_ver,  106 col7
    21     from dual
    22   union all
    23   select 'a2' col1, 'b2' col2, 'c2' col3, 'd2' col4, 1 col_ver,  107 col7
    24     from dual
    25   union all
    26   select 'a1' col1, 'b1' col2, 'c1' col3, 'd1' col4, 2 col_ver,  108 col7
    27     from dual
    28   union all
    29   select 'a1' col1, 'b2' col2, 'c2' col3, 'd1' col4, 2 col_ver,  109 col7
    30     from dual
    31   union all
    32   select 'a1' col1, 'b2' col2, 'c2' col3, 'd2' col4, 2 col_ver,  110 col7
    33     from dual
    34   union all
    35   select 'a2' col1, 'b2' col2, 'c1' col3, 'd2' col4, 2 col_ver,  111 col7
    36     from dual
    37   union all
    38   select 'a2' col1, 'b2' col2, 'c2' col3, 'd2' col4, 2 col_ver,  112 col7
    39     from dual
    40   union all
    41   select 'a1' col1, 'b2' col2, 'c2' col3, 'd2' col4, 3 col_ver,  113 col7
    42     from dual
    43   union all
    44   select 'a2' col1, 'b1' col2, 'c1' col3, 'd1' col4, 3 col_ver,  114 col7
    45     from dual)
    46  select a.col1, a.col2, a.col3, a.col4, a.col_ver, a.col7
    47    from (select t.col1, t.col2, t.col3, t.col4, t.col_ver, t.col7,
    48                 row_number() over (partition by t.col1, t.col2, t.col3, t.col4
    49                                    order by t.col_ver desc) rn
    50            from t) a
    51   where a.rn = 1;
    CO CO CO CO    COL_VER       COL7
    a1 b1 c1 d1          2        108
    a1 b2 c1 d1          1        101
    a1 b2 c2 d1          2        109
    a1 b2 c2 d2          3        113
    a2 b1 c1 d1          3        114
    a2 b2 c1 d1          1        105
    a2 b2 c1 d2          2        111
    a2 b2 c2 d1          1        106
    a2 b2 c2 d2          2        112
    9 rows selected.
    SQL>

  • Looking for best Cutover Approach.

    Hi Experts,
    We are in the Testing Phase and are looking out for approaches for Cutover. Please note current set up in PRD is 3.5 Backend running on BI 7 Frontend. There is only 1 server and client each for DEV, TST and PRD with a spare server targeted for set up as application server in the near future :
    i) Option 1 - parallel 3.5 and 7 environments ((if technically feasible), preferred by client)
-  use the application server in the interim as a backup server with a copy of current PRD while the cutover takes place on the main server. The team is still unsure technically how much effort and what level of difficulty is involved to either (i) update R/3 to BI 7 then update BW 3.5, or (ii) set up R/3 to update both 3.5 and 7. Detailed investigation is required, but both cases require additional DEV and TST boxes to simulate the chosen option.
    ii) Option 2 - big bang to BI 7 (medium risk, recommended by us)
    -  once BI 7 objects migrated to TST and DEV, Support team and users are no longer able to access old reports to perform fixes or data validation.
    Would you be able to advise on which approach is typically taken during 3.5 to 7 upgrades and if Option 1, what are the complexities and considerations required.
    Your time is much appreciated.
    Thanks,
    Chandu

The way it works usually is:
Take a copy of PRD and migrate. Test this system completely - usually takes about 2 to 4 weeks of full testing and signoff.
Then you will go into a DEV FREEZE - this means that no requests can be transported in this period.
Upgrade DEV and test - takes about 2 weeks, since you already know the critical objects to test.
Then upgrade QA - test for 2 to 3 days, since most of the functionality has already been tested twice.
Upgrade PRD and go live on 7.0.
You can also look at splitting the upgrade: do the database upgrade first, and once there are no issues, go ahead with the NW upgrade.

  • Best Slideshow Approach....

    Hello There,
I was just wondering what the best way would be to create a slideshow of several band photos for a web site.
    I have iLife, QuickTime Pro, an older version of Macromedia Dreamweaver MX 2004, RapidWeaver 3.6.7, and I'm still using 10.4.11.
    What's the most compatible across the board, Flash, QuickTime, or what??
    Of course it will have music etc..
I'd like to have the photos fairly good sized, yet not too big file-size wise.
    Thanks for any help you can offer..
    Eric
    Message was edited by: Eric Cappotto

Easiest and smallest file size would be a QuickTime file made with Pro.
    Simply open your image files, copy, switch to the audio track, make a selection and add the image file "Scaled to the selection".
    QuickTime .mov files can have up to 99 "tracks" and each can have video "masks", "transparency" levels and "layers". This allows you to have a video "background" image and 97 other images tracks positioned anywhere over the background.
    QuickTime .mov files can also have "text" tracks. Real, scalable text that can be scrolled, colored and even use "karaoke" styles.
    QuickTime .mov files can also contain "links". A click on the file could send the viewer to a Web page for more information (a Web page that sells your music for example). Some of mine as examples:
    http://homepage.mac.com/kkirkster/03war/
    http://homepage.mac.com/kkirkster/64/
    http://homepage.mac.com/kkirkster/Lemon_Trees/
    http://homepage.mac.com/kkirkster/RedneckTexasXmas/
    http://homepage.mac.com/kkirkster/Nationals/ (made using iPhoto).
