VO query with SQL table function (MS SQL) and update of parameter value

Hello All,
I use a VO whose query is a SQL function that returns a table (MS SQL Server):
select * from getData('XXXX')
where 'XXXX' should be bound to a specific ID (e.g. the selected row in the tree view on the other facet).
More details: I have two facets. The left facet contains a tree view that displays the list of projects grouped by industry. The right facet contains a pivot table (its source query is the table function above).
I want the pivot table (right facet) to be updated with the data specific to the project selected in the tree.
I know about the possibility to dynamically update the where clause (the setNamedWhereClauseParam method), but in my case there is no where clause. Is there any kind of bind variable that can be used to refresh the data?
Any guidance, ideas, examples are greatly welcome.
With best wishes,
Anatol

Are the columns that are being selected different each time, or the same?
If they are the same, then you don't need any additional binding: the pivot table is connected to the VO at design time, and it doesn't care whether the underlying from/where has changed.
If, on the other hand, your columns change (but their number stays the same), then use aliases for the initial definition (select ename a, id b, salary c), and when you change the VO, change the mapping but keep the names (select dname a, moo b, foo c).
If the whole query is changing and you don't know the structure in advance, then there is no out-of-the-box solution with a pivot table; you'll need to code a backing bean that provides the data model to the pivot table based on your query.
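If the columns stay the same, one way to wire the selection to the query is to declare a named bind variable on the VO, reference it directly in the query text, and set it from the tree's selection listener. A minimal sketch, assuming a VO named PivotDataVO with a bind variable projId (both names hypothetical):
import oracle.jbo.ApplicationModule;
import oracle.jbo.ViewObject;

public class ProjectSelectionHandler {
    // Called when the tree selection changes (names are hypothetical).
    public void refreshPivot(ApplicationModule am, Object selectedProjectId) {
        ViewObject vo = am.findViewObject("PivotDataVO");         // VO query: select * from getData(:projId)
        vo.setNamedWhereClauseParam("projId", selectedProjectId); // sets the named bind value even though there is no WHERE clause
        vo.executeQuery();                                        // re-execute; the pivot table refreshes through its binding
    }
}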

Similar Messages

  • Performance issues with pipelined table functions

    I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving performance with pipelined table functions" (http://www.oracle-developer.net/display.php?id=429).
    Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
    Many thanks in advance.
    CREATE OR REPLACE PACKAGE pipeline_example
    IS
       TYPE resultset_typ IS REF CURSOR;
       TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
       TYPE table_typ IS TABLE OF row_typ;
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ;
       c_default_limit   CONSTANT PLS_INTEGER := 100;  
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ);
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ);
    END pipeline_example;
    CREATE OR REPLACE PACKAGE BODY pipeline_example
    IS
       FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
          RETURN resultset_typ
       IS
          o_resultset   resultset_typ;
       BEGIN
          OPEN o_resultset FOR
             SELECT colC, colD, colE
               FROM some_table
              WHERE colA = ArgA AND colB = argB;
          RETURN o_resultset;
       END base_query;
       FUNCTION processor (
          p_source_data   IN resultset_typ,
          p_limit_size    IN PLS_INTEGER DEFAULT c_default_limit)
          RETURN table_typ
          PIPELINED
          PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
       IS
          aa_source_data   table_typ;-- := table_typ ();
       BEGIN
          LOOP
             FETCH p_source_data
             BULK COLLECT INTO aa_source_data
             LIMIT p_limit_size;
             EXIT WHEN aa_source_data.COUNT = 0;
             /* Process the batch of (p_limit_size) records... */
             FOR i IN 1 .. aa_source_data.COUNT
             LOOP
                PIPE ROW (aa_source_data (i));
             END LOOP;
          END LOOP;
          CLOSE p_source_data;
          RETURN;
       END processor;
       PROCEDURE with_pipeline (argA          IN     VARCHAR2,
                                argB          IN     VARCHAR2,
                                o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT /*+ PARALLEL(t, 5) */ colC,
                       SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                       SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
                       SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                  FROM TABLE (processor (base_query (argA, argB), 100)) t
              GROUP BY colC
              ORDER BY colC;
       END with_pipeline;
       PROCEDURE no_pipeline (argA          IN     VARCHAR2,
                              argB          IN     VARCHAR2,
                              o_resultset      OUT resultset_typ)
       IS
       BEGIN
          OPEN o_resultset FOR
               SELECT colC,
                       SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
                       SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
                      SUM (CASE WHEN colD = colE AND colD  != '0' THEN 1 END) de_one,
                      SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
                 FROM (SELECT colC, colD, colE
                         FROM some_table
                        WHERE colA = ArgA AND colB = argB)
             GROUP BY colC
             ORDER BY colC;
       END no_pipeline;
    END pipeline_example;
    ALTER PACKAGE pipeline_example COMPILE;

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
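    For example, extended SQL trace with wait events can be switched on for the test session via DBMS_MONITOR (a sketch; assumes 10g+ and execute privilege on DBMS_MONITOR):
    ALTER SESSION SET tracefile_identifier = 'pipeline_test';
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
    -- run with_pipeline and no_pipeline here, one at a time
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
    Then format the resulting trace file with tkprof and compare where the time goes in each run.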
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Need some help with the Table Function Operator

    I'm on OWB 10gR2 for Sun/Solaris 10 going against some 10gR2 DB's...
    I've been searching up and down trying to figure out how to make OWB use a Table Function (TF) that joins with another table, allowing a column of the joined table to be passed as a parameter to the TF. I can't seem to get it to work, although I am able to get this to work in regular SQL. Here's the setup:
    -- Source Table:
    DROP TABLE "ZZZ_ROOM_MASTER_EX";
    CREATE TABLE "ZZZ_ROOM_MASTER_EX"
    ( "ID" NUMBER(8,0),
    "ROOM_NUMBER" VARCHAR2(200),
    "FEATURES" VARCHAR2(4000)
    -- Example Data:
    Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (1,'Room 1',null);
    Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (2,'Room 2',null);
    Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (3,'Room 3','1,1;2,3;');
    Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (4,'Room 4','5,2;5,4;');
    Insert into ZZZ_ROOM_MASTER_EX (ID,ROOM_NUMBER,FEATURES) values (5,'Room 5',' ');
    -- Destination Table:
    DROP TABLE "ZZZ_ROOM_FEATURES_EX";
    CREATE TABLE "ZZZ_ROOM_FEATURES_EX"
    ( "ROOM_NUMBER" VARCHAR2(200),
    "FEATUREID" NUMBER(8,0),
    "QUANTITY" NUMBER(8,0)
    -- Types for output table:
    CREATE OR REPLACE TYPE FK_Row_EX AS OBJECT (
    ID NUMBER(8,0),
    QUANTITY NUMBER(8,0)
    );
    CREATE OR REPLACE TYPE FK_Table_EX AS TABLE OF FK_Row_EX;
    -- Package Dec:
    CREATE OR REPLACE
    PACKAGE ZZZ_SANDBOX_EX IS
    FUNCTION UNFK(inputString VARCHAR2) RETURN FK_Table_EX;
    END ZZZ_SANDBOX_EX;
    -- Package Body:
    CREATE OR REPLACE
    PACKAGE BODY ZZZ_SANDBOX_EX IS
    FUNCTION UNFK(inputString VARCHAR2) RETURN FK_Table_EX
    AS
    RETURN_VALUE FK_Table_EX := FK_Table_EX();
    i NUMBER(8,0) := 0;
    BEGIN
    -- TODO: Put some real code in here that will actually read the
    -- input string, parse it out, and put data in to RETURN_VALUE
    WHILE(i < 3) LOOP
    RETURN_VALUE.EXTEND;
    RETURN_VALUE(RETURN_VALUE.LAST) := FK_Row_EX(4, 5);
    i := i + 1;
    END LOOP;
    RETURN RETURN_VALUE;
    END UNFK;
    END ZZZ_SANDBOX_EX;
    I've got a source system built by lazy DBA's and app developers who decided to store foreign keys for many-to-many relationships as delimited structures in driving tables. I need to build a generic table function to parse this data and return it as an actual table. In my example code, I don't actually have the parsing part written yet (I need to see how many different formats the source system uses first) so I just threw in some stub code to generate a few rows of 4's and 5's to return.
    I can get the data from my source table to my destination table using the following SQL statement:
    -- from source table joined with table function
    INSERT INTO ZZZ_ROOM_FEATURES_EX(
    ROOM_NUMBER,
    FEATUREID,
    QUANTITY)
    SELECT
    ZZZ_ROOM_MASTER_EX.ROOM_NUMBER,
    UNFK.ID,
    UNFK.QUANTITY
    FROM
    ZZZ_ROOM_MASTER_EX,
    TABLE(ZZZ_SANDBOX_EX.UNFK(ZZZ_ROOM_MASTER_EX.FEATURES)) UNFK
    Now, the big question is: how do I do this from OWB? I've tried several different variations of my function and settings in OWB to see if I can get it to build a single SELECT statement that joins a regular table with a table function, but none of them seems to work; I end up with generated SQL that won't compile because it doesn't reference the source table correctly:
    INSERT
    /*+ APPEND PARALLEL("ZZZ_ROOM_FEATURES_EX") */
    INTO
    "ZZZ_ROOM_FEATURES_EX"
    ("ROOM_NUMBER",
    "FEATUREID",
    "QUANTITY")
    (SELECT
    "ZZZ_ROOM_MASTER_EX"."ROOM_NUMBER" "ROOM_NUMBER",
    "INGRP2"."ID" "ID_1",
    "INGRP2"."QUANTITY" "QUANTITY"
    FROM
    (SELECT
    "UNFK"."ID" "ID",
    "UNFK"."QUANTITY" "QUANTITY"
    FROM
    TABLE ( "ZZZ_SANDBOX_EX"."UNFK2" ("ZZZ_ROOM_MASTER_EX"."FEATURES")) "UNFK") "INGRP2",
    "ZZZ_ROOM_MASTER_EX" "ZZZ_ROOM_MASTER_EX"
    As you can see, it's trying to create a sub-query in the FROM clause--causing it to just ask for "ZZZ_ROOM_MASTER_EX"."FEATURES" as an input--which isn't available because it's outside of the sub-query!
    Is this some kind of bug with the code generator or am I doing something seriously wrong here? Any help will be greatly appreciated!

    Hello Everybody!
    Thank you for all your responses!
    I have changed this work area into an internal table and changed the select query. Please let me know if this causes any performance issues.
    I had created a Z table with the following fields :
    ZADS :
    MANDT
    VKORG
    ABGRU.
    I wrote the select query as below.
    I removed the SELECT SINGLE and, instead of using the structure IT_REJ, changed it into an internal table:
    SELECT vkorg abgru FROM zads INTO TABLE it_rej.
    Earlier :
    IT_REJ is a Work area:
    DATA : BEGIN OF IT_REJ,
    VKORG TYPE VBAK-VKORG,
    ABGRU TYPE VBAP-ABGRU,
    END OF IT_REJ.
    Now :
    DATA : BEGIN OF IT_REJ occurs 0,
    VKORG TYPE VBAK-VKORG,
    ABGRU TYPE VBAP-ABGRU,
    END OF IT_REJ.
    I guess this will fix the issue, correct?
    Please suggest!
    Regards,
    Developer.

  • BaseTableName blank when calling GetSchema on a query with multiple tables

    I am using ODP.NET 11.2.0.3.0, and when calling GetSchemaTable on a DataReader that contains a join, the returned SchemaTable has the BaseTableName and BaseColumnName fields blank. This is different from what I see with the Oracle OLE DB provider and from how SQL Server's native provider behaves. I can't find any discussion of this: is it on purpose, or is it a bug? Why does the available schema information vary so drastically between a single-table query and a query with multiple tables joined?
    Thanks,
    Bryan Hinton

    Hi Bryan,
    I am also facing the same issue. Did you find any workaround? Any suggestions would be appreciated.
    Thanks,
    Naresh.

  • Parse comma separated value and map with other table to get Name and change it back to comma separate.

    Hi,
    I have an existing view (with around 15 fields), to which I have to add a few more fields from a table called PI.
    These fields have values like (55C4444F-D83B-4F96-A011-367A3755BA6C, F52388E2-485B-49DF-8534-FDF46D23F59E, 722432E1-F063-4CBD-B83D-1B97836E82953): three comma-separated values (sometimes only one value, and sometimes 4, 5, or 7-8, depending on what the user has entered on the web page).
    I also have another table called PHA, which has 2 fields, Values and Name, so I have to map the two tables based on the Values fields, get Name from the PHA table, and show it in the view, also comma separated.
    So basically I have to parse the PI table's Values field first, map it to the PHA table to get Name, and then make it comma separated in the existing view.
    To make the fields comma separated I used the query below:
    (SELECT DISTINCT SUBSTRING(
                (SELECT ',' + PI.[Name] AS [text()]
                 FROM [DB].[dbo].[Table] PHA1
                 INNER JOIN [DB].[dbo].[Table] PI
                    ON PHA1.[Value] = PI.[VALUE]
                 WHERE PHA1.PId = PHA2.PId AND PHA1.CId = PHA2.CId
                 ORDER BY PHA1.PId
                 FOR XML PATH ('')
                ), 2, 1000)
     FROM [DB].[dbo].[Table] PHA2
    ) [Name]
    Vicky

    Wait, this sounds wrong. You have a view where you group values into a comma-separated list. While that surely will make some purists cringe, I can see that it makes sense from a presentation perspective.
    But if you want to use these concatenated values as atomic values again, you should go back to the base tables and get them from there. Building views on views may sometimes be a good idea, but if you are too keen on reuse you can cause a performance disaster.
    So do it right from the beginning.
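    As an illustration of working from the base tables: a minimal sketch that splits the delimited column and re-concatenates the mapped names per row, assuming SQL Server 2016+ for STRING_SPLIT and hypothetical tables/columns PI(Id, [Values]) and PHA([Value], [Name]):
    SELECT p.Id,
           STUFF((SELECT ',' + pha.[Name]
                  FROM STRING_SPLIT(p.[Values], ',') AS s
                  INNER JOIN PHA AS pha ON pha.[Value] = LTRIM(RTRIM(s.value))
                  FOR XML PATH('')), 1, 1, '') AS Names
    FROM PI AS p;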
    Erland Sommarskog, SQL Server MVP, [email protected]

  • SQL query with multiple tables - what is the most efficient way?

    Hello, I am learning PL/SQL. I have a simple procedure where I need to find the number of employees and departments per location, given a user-supplied location_id.
    I have 3 Tables:
    LOCATIONS
    location_id (pk)
    location_name
    DEPARTMENTS
    department_id (pk)
    location_id (fk)
    department_name
    EMPLOYEES
    employee_id (pk)
    department_id (fk)
    employee_name
    1 Location can have 0-MANY Departments
    1 Employee has 1 Department
    Here is the query I came up with for PL/SQL procedure:
    /*Ecount, Dcount are NUMBER variables */
    SELECT SUM (EmployeeCount), COUNT(DepartmentNumber)
         INTO Ecount, Dcount
         FROM     
         (SELECT COUNT(employee_id) EmployeeCount, department_id DepartmentNumber
              FROM employees
              GROUP BY department_id
              HAVING department_id IN
                        (SELECT department_id
                        FROM departments
                        WHERE location_id = userInput));
    I do get the correct result, but I am just wondering if my query is on the right track and if there is a more "efficient" way of doing this.
    Thanks in advance for helping a newbie out.

    Hi,
    Welcome to the forum!
    Something like this will be more efficient:
    SELECT    COUNT (employee_id)               AS ECount
    ,       COUNT (DISTINCT department_id)     AS DCount
    FROM       employees
    WHERE       department_id IN (     SELECT     department_id
                        FROM      departments
                        WHERE      location_id = :userInput );
    You should also try a join instead of the IN subquery.
    For efficiency, do only the things you need to do.
    For example, you don't need a count of employees in each department, so don't compute one. That means you won't need the in-line view, so don't have one.
    You don't need PL/SQL for this job, so don't use PL/SQL if you don't have to. (I realize this question was out of context, so you may have good reasons for doing this in PL/SQL.)
    Do all filtering as early as possible. Don't waste effort computing things that won't be used.
    A particular example of this is: Never use a HAVING clause when you can use a WHERE clause. What's the difference between a WHERE clause and a HAVING clause? The WHERE clause is applied before aggregate functions are computed, and the HAVING clause is applied after; there's no other difference. Therefore, if the HAVING clause isn't referencing an aggregate function, it could be done in a WHERE clause instead.
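    For reference, the join variant suggested above might look like this (a sketch against the same tables and bind variable):
    SELECT    COUNT (e.employee_id)              AS ECount
    ,         COUNT (DISTINCT e.department_id)   AS DCount
    FROM      employees   e
    JOIN      departments d ON d.department_id = e.department_id
    WHERE     d.location_id = :userInput;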

  • SQL query with From table being entered in twice

    I have a query that uses an EXISTS clause whose inner select statement queries the T_PROJECTS table under two different aliases, t3 and t2.
    SELECT t0.ID, t0.EFFECTIVEDATE, t0.CUSTOMERID, t0.PROJECTID, .... FROM T_FINANCIALDATAS t0 WHERE ((t0.PROJECTID = 2271) AND (EXISTS (SELECT t1.TMPUSERID FROM T_PROJECTS t3, T_PROJECTS t2 , T_PROJECTSUSERS t1 WHERE (((( t2.CUSTOMERID = 1) AND (t1.USERID = 2276)) AND (t1.PROJECTID = t3.ID )) AND ( t2.ID = t1.PROJECTID))) AND (t0.CUSTOMERID = 1)))
    They need to be combined so that only one table alias is used. The way the query gets built is a little complicated: the EXISTS clause gets built as a ReportQuery, which is then passed into another method and eventually added to a top-level Expression, as such:
    ReportQuery existsQuery = new ReportQuery(existsClass, existsExpressionHolder.getExpression());
    expHolder.addAnd(expHolder.getExpressionBuilder().exists(existsQuery));
    Question: what causes the double table alias to show up? From what I've read, it seems to be caused by using two different ExpressionBuilders?

    Could you include the code that builds the Expression?
    Perhaps try to reproduce the issue in a simple isolated example. Generally, every ExpressionBuilder used in a query represents a table/alias.
    What version of TopLink are you using?
    The duplicate alias seems repetitive, but doesn't seem like it will have any effect on the query result, other than its efficiency.
    James : http://www.eclipselink.org

  • SQL Query with 3 tables to create a view

    Hi
    I have an existing view "View_output" formed with a query which works fine:
    select A,B,C,D,E,F,G,H from GTS1 where D is not null
        UNION
       select A,B,C,D,E,F,G,H from GTSN1 where D is not null;
    It works fine; in the output I have columns A,B,C,D,E,F,G,H.
    New requirement : Modify the view, by adding a new column 'I' so that the view should have
    columns A through I, and no column values should be null.
    This column "I" is coming from a table, say DLC
    Data in DLC is a subset of the original view "View_output"and DLC have the foll.columns:
    ("View_output" is a datablock source for a form and only selected column values are put as output in DLC
    after form manipulation)
    DLC is having only 5 columns -B,C,F,H,I
    When I give:
    (select A,B,C,D,E,F,G,H,NULL from GTS1 where D is not null
        UNION
       select A,B,C,D,E,F,G,H,NULL from GTSN1 where D is not null)
    UNION
    select NULL,B,C,NULL,NULL,F,NULL,H,I
        from
        DLC where I IS NOT NULL;
    It UNIONs all required rows from the 3 tables, but columns A,D,E,G are NULL. How can I get data in these columns?

    Hi Arun, that's not the real issue.
    I have corrected the second code now; it was a typo on my side.
    All three tables already have 9 columns selected.
    The view created from GTS1 and GTSN1 is constantly changing.
    There is no way to add column 'I' to any of the above tables.
    The view is the input to a form. Users tick a tickbox for the records they need (these are the columns in the original view mentioned above), and based on certain logic only selected columns are inserted/updated into the output table through the form; this is the table DLC.
    Now they want to insert new data (free text) through the form; this field corresponds to each record in the form, and is supposed to be column 'I' (not designed yet).
    It is not in the original input tables selected for the view, so I added this column to the output table DLC (can't think of any other method) so that column I can also be combined into the view.
    Now what I want is to recreate the view through a
    create or replace view statement.
    The view should effectively pull data from the 3 tables.
    The relation between the 3 tables:
    DLC is a subset of GTS1 and GTSN1,
    but may also contain data which is not in the union of GTS1 and GTSN1.
    The only field I need from DLC is field 'I', and only where the data is in the union of GTS1 and GTSN1.
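    One possible shape for that view, as a sketch: outer-join the union of GTS1 and GTSN1 to DLC, with the join keys guessed from DLC's columns (B, C, F, H):
    create or replace view view_output as
    select u.A, u.B, u.C, u.D, u.E, u.F, u.G, u.H, d.I
      from (select A,B,C,D,E,F,G,H from GTS1  where D is not null
            union
            select A,B,C,D,E,F,G,H from GTSN1 where D is not null) u
      left outer join DLC d
        on d.B = u.B and d.C = u.C and d.F = u.F and d.H = u.H;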

  • Issues with new tables created in SQL Modeler

    Hi,
    whenever I create new tables in SQL Modeler, and then create them on the Oracle DB using the generated DDL, strange things happen:
    When synchronizing the model with the database, some tables, even if they already exist on the DB, appear as missing, and the generated DDL still contains the "CREATE TABLE ..." statement.
    When synchronizing the database with the model, the new tables created on the DB appear twice: once with the correct name, and a second time with the original name and the suffix "v1".
    There seems to be no way to properly synchronize the model with the DB, apart from removing the tables from the model and re-importing the definitions from the DB.
    What am I doing wrong? It seems really strange to me that no one has ever reported a bug like this!
    Thank you

    Hi,
    whenever I create new tables in SQL Modeler, and then create them on the Oracle DB using the generated DDL
    There seems to be no way to properly synchronize the model with the DB, apart from removing the tables from the model and re-importing the definitions from the DB
    I'm able to reproduce your problem when tables are imported into the model using more than one connection. You need to use "Sync New Objects" in the compare models dialog, and then there will be no need to delete and re-import tables from the database.
    Unfortunately, the "Sync New Objects" functionality doesn't work when objects are imported using more than one connection. I logged a bug for that.
    Philip

  • SSIS 2008 – Read roughly 50 CSV files from a folder, create SQL table from them dynamically, and dump data.

    Hello everyone,
    I’ve been assigned a requirement wherein I need to read around 50 CSV files from a specified folder.
    In step 1 I would like to create the schema for these files, i.e. take the CSV files one by one and create a SQL table for each, if it does not exist at the destination.
    In step 2 I would like to append the data of these 50 CSV files into the respective tables.
    In step 3 I would like to purge data older than a given date.
    Please note, the data in these CSV files will be very bulky; I would like to know the best way to insert bulky data into a SQL table.
    Also, in some of the CSV files there will be 4 rows at the top of the file containing the header details/header rows.
    As far as I know I will be asked to implement this on SSIS 2008, but I’m not 100% sure of it.
    So, please feel free to provide multiple approaches if we can achieve these requirements elegantly in newer versions like SSIS 2012.
    Any help would be much appreciated.
    Thanks,
    Ankit
    Thanks, Ankit Shah. Inkey Solutions, India. Microsoft Certified Business Management Solutions Professional. http://ankit.inkeysolutions.com

    Hello Harry and Aamir,
    Thank you for the responses.
    @Aamir, thank you for sharing the link. Yes, I'm going to use a Script Task to read the header columns of the CSV files, preparing one SSIS variable that will hold the SQL script to create the required table (with an IF EXISTS check) inside the Script Task itself.
    I will have an "Execute SQL Task" following the Script Task, and this will create the actual table for a CSV.
    Both these components will be inside a Foreach Loop container and will process all 50 CSV files one by one.
    Some points to be clarified,
    1. In the bunch of these 50 CSV files there will be some exceptions for which we first need to purge the tables and then insert the data. Meaning for 2 files out of 50 we need to first clean the tables and then perform the data insert, while the rest of the 48 files should be appended on a daily basis.
    Can you please advise what is the best way to achieve this requirement? Where should we configure such exceptional cases for the package?
    2. For some of the CSV files we will have more than one file with the same name. For example, out of the 50, the 2nd file is divided into 10 different CSV files, so in total we have 60 files, of which 10 have repeated file names. How can we manage this within the same loop? Do we need one more Foreach loop inside the parent one? What is the best way to achieve this requirement?
    3. There will be another package, which will be used to purge data from the SQL tables. Unlike the above package, this package will not run on a daily basis. At some point we would like these 50 tables to be purged with an older-than criterion, say remove data older than 1st Jan 2015. What is the best way to achieve this requirement?
    Please know, I'm very new to the SSIS world and would like to develop these packages for the client using best package-development practices.
    Any help would be greatly appreciated.
    Thanks, Ankit Shah. Inkey Solutions, India. Microsoft Certified Business Management Solutions Professional. http://ankit.inkeysolutions.com
    1. In the bunch of these 50 CSV files there will be some exceptions for which we first need to purge the tables and then insert the data. Meaning for 2 files out of 50 we need to first clean the tables and then perform the data insert, while the rest of the 48 files should be appended on a daily basis.
    Can you please advise what is the best way to achieve this requirement? Where should we configure such exceptional cases for the package?
    How can you identify these files? Is it based on the file name, or is there some info in the file which indicates that it requires a purge? If so, you can pick this information up during the file-name or file-data parsing step and set a boolean variable. Then in the control flow have a conditional precedence constraint which checks the boolean variable and, if it is set, executes an Execute SQL Task to do the purge (you can use TRUNCATE TABLE or DELETE FROM TableName statements).
    2. For some of the CSV files we will have more than one file with the same name. For example, out of the 50, the 2nd file is divided into 10 different CSV files, so in total we have 60 files, of which 10 have repeated file names. How can we manage this within the same loop? Do we need one more Foreach loop inside the parent one? What is the best way to achieve this requirement?
    The best way to achieve this is to append a sequential value to the filename (maybe a timestamp) and then process the files in sequence. This can be done prior to the main loop, so that you can use the same loop to process these duplicate filenames as well. The best thing would be to use the file-creation-date attribute value so that the files get processed in the right sequence. You can use a script task to get this for each file, as described here:
    http://microsoft-ssis.blogspot.com/2011/03/get-file-properties-with-ssis.html
    3. There will be another package, which will be used to purge data from the SQL tables. Unlike the above package, this package will not run on a daily basis. At some point we would like these 50 tables to be purged with an older-than criterion, say remove data older than 1st Jan 2015. What is the best way to achieve this requirement?
    You can use a SQL script for this. Just call a SQL procedure with a single date parameter and write logic like below:
    CREATE PROC PurgeTableData
    @CutOffDate datetime
    AS
    DELETE FROM Table1 WHERE DateField < @CutOffDate;
    DELETE FROM Table2 WHERE DateField < @CutOffDate;
    DELETE FROM Table3 WHERE DateField < @CutOffDate;
    GO
    @CutOffDate denotes the date before which older data has to be purged.
    You can then schedule this SP in a sql agent job to get executed based on your required frequency
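    For example, the purge described above (remove data older than 1st Jan 2015) then becomes a one-line call in the job step:
    EXEC PurgeTableData @CutOffDate = '2015-01-01';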
    Visakh

  • Materialized View with OLAP table function

    Hi,
    I am trying to materialize OLAP cubes into relational materialized views, which works quite fine. After a few loads running in the background, in parallel, via database jobs and the DBMS_MVIEW package, the performance becomes poor. Steps I am performing:
    1. Generate materialized view as DEFERRED and COMPLETE refresh
    2. Generate database Jobs for refreshing views with DBMS_MVIEW.REFRESH function
    3. Running Jobs in background
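    For reference, the refresh call in step 2 might look like this (a sketch using DBMS_MVIEW.REFRESH's documented parameters):
    BEGIN
      DBMS_MVIEW.REFRESH(list           => 'FCRSGX.MV_F_ICCC_C11',
                         method         => 'C',     -- complete refresh
                         atomic_refresh => FALSE);  -- truncate + direct-path insert instead of delete + conventional insert
    END;
    /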
    The first load takes about 10 minutes, but subsequent loads take over 4 hours. I also tried ATOMIC_REFRESH=FALSE, but got the same result. The database is running in ARCHIVELOG mode. Can this degrade the performance?
    Any ideas?
    Thanks,
    Christian

    Hi,
    yes, that's correct. I am creating MVs in 10.2.0.3.
    Here is an example:
    CREATE MATERIALIZED VIEW "FCRSGX"."MV_F_ICCC_C11"
    ORGANIZATION HEAP PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS NOLOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "FCRSGX_CONSO_RELATIONAL"
    BUILD DEFERRED
    USING INDEX
    REFRESH COMPLETE ON DEMAND
    USING DEFAULT LOCAL ROLLBACK SEGMENT
    DISABLE QUERY REWRITE
    AS SELECT ENTITY, REVE_ICC, RP_ICC, MV_ICC, PERIOD, MEASURE, AMOUNT, R2C
    FROM TABLE(OLAP_TABLE('FCRSGX.CONSODATA DURATION SESSION',
    DIMENSION ENTITY as varchar2(8) FROM ENTITY
    DIMENSION REVE_ICC as varchar2(8) FROM REVE_ICC
    DIMENSION RP_ICC as varchar2(8) FROM RP_ICC
    DIMENSION MV_ICC as varchar2(8) FROM MV_ICC
    DIMENSION PERIOD as varchar2(8) FROM GMONTH
    DIMENSION MEASURE as varchar2(30) FROM EXPR
    MEASURE AMOUNT as number FROM ICCC.C11
    LOOP CMPE.ICCC.C11
    ROW2CELL R2C '))
    WHERE OLAP_CONDITION(r2c, 'lmt entity to CMPE.ICCC.C11')=1
    AND OLAP_CONDITION(r2c, 'lmt reve_icc to CMPE.ICCC.C11')=1
    AND OLAP_CONDITION(r2c, 'lmt mmonth to sapload.per eq y')=1
    AND OLAP_CONDITION(r2c, 'lmt gmonth to charl(mmonth) ')=1
    AND OLAP_CONDITION(r2c, 'lmt rp_icc to CMP.ICCC.C11 ')=1
    AND OLAP_CONDITION(r2c, 'lmt mv_icc to CMP.ICCC.C11 ')=1
    AND OLAP_CONDITION(r2c, 'lmt expr to ''F.ICCC.C11'' ')=1
    MODEL
    DIMENSION BY(ENTITY,REVE_ICC,RP_ICC,MV_ICC,PERIOD,MEASURE)
    MEASURES(AMOUNT,R2C)
    RULES UPDATE SEQUENTIAL ORDER()
    ;

  • SQL Table Link between invoice and profit centre

    Hi,
    We are about to bulk import reserve invoices, but I need to allocate each invoice against the correct profit centre. Where is the link between the SQL tables?

    If you fill the profit centre on the line, the journal will allocate the profit centre to all GL accounts of the Sales or Expenditure type.
    If you are using document expenses for freight, you may also need to assign your profit centre there.
    The line field is OCRCode in both the invoice lines and the expenses lines.
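    For example, assuming the standard SAP Business One tables OINV (A/R invoice header) and INV1 (invoice lines), the profit centre per invoice line can be read like this (a sketch):
    SELECT T0.DocNum, T1.ItemCode, T1.OCRCode AS ProfitCentre
    FROM OINV T0
    INNER JOIN INV1 T1 ON T1.DocEntry = T0.DocEntry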

  • Named query with join tables

    I have two tables
    - Process_Master (EVENT_ID, EVENT_TYP, STATUS)
    - Rel_Event( EVENT_ID, USERID, EVENT_DATE, RMKS)
    They have a one-to-one relationship linked by EVENT_ID.
    I would like to create a named query like the one below, joining the 2 tables together.
    Do I need to create any class descriptor first, and how? I want this query to be available from the ADF data control so I can drag and drop it onto my JSP page as an ADF table. I have no problem working with a single table. I have read through the developer guide and tried out many things, like multitable info and aggregate mapping, but couldn't figure out how this can be done. Please help!
    SELECT A.EVENT_ID,B.EVENT_DATE, A.STATUS, B.RMKS
    FROM PROCESS_MASTER A, REL_EVENT B
    WHERE A.EVENT_ID = B.EVENT_ID
    AND A.STATUS = 'P';

    I have tried the code below, but it fails to retrieve any rows. Please help!
    Expression aid = new ExpressionBuilder(ProcEventMaster.class).get("event_id");
    Expression bid = new ExpressionBuilder(RelEvent.class).get("event_id");
    ReportQuery reportQuery = new ReportQuery(ProcEventMaster.class,aid.equal(bid));
    reportQuery.addAttribute("a_id", aid);
    reportQuery.addAttribute("b_id", bid);
    reportQuery.addAttribute("eventDate",bid.get("event_date"));
    reportQuery.addAttribute("remarks",bid.get("rmks"));
    reportQuery.setSelectionCriteria(aid.get("status").equal("P"));
    List<RelEvent> results =
        (List<RelEvent>) session.executeQuery(reportQuery);
    session.release();
    return results;
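    One likely cause: each ExpressionBuilder introduces its own table/alias, so two builders give an unjoined product rather than a join. A sketch of a single-builder version, assuming the one-to-one from ProcEventMaster to RelEvent is mapped under the (hypothetical) attribute name "relEvent":
    ExpressionBuilder master = new ExpressionBuilder(ProcEventMaster.class);
    ReportQuery reportQuery = new ReportQuery(ProcEventMaster.class, master);
    reportQuery.addAttribute("eventId", master.get("event_id"));
    reportQuery.addAttribute("status", master.get("status"));
    reportQuery.addAttribute("eventDate", master.get("relEvent").get("event_date")); // traversing the mapped one-to-one produces the join
    reportQuery.addAttribute("remarks", master.get("relEvent").get("rmks"));
    reportQuery.setSelectionCriteria(master.get("status").equal("P"));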

  • Set READ UNCOMMITTED for a query with a table join (Ver 12.5 and 15.5)

    Hi all,
    this is an example of how to set the READ UNCOMMITTED isolation level for a simple query:
    select col_a from mytable at isolation 0
    What about a query with an equi-join?
    Is it:
    select col_a
    from mytable t1, mytable t2
    at isolation 0
    I am sorry for the simple question; I did not find a suitable example in the documentation.
    Best Regards
    Arthur

    Yeah, the docs aren't very good at providing a robust set of examples, so you've got to:
    1 - review the command syntax, and then
    2 - try a few examples until you find the format that works
    1 - From the ASE 15.5 Commands reference manual we find the following syntax:
    =========================
    select ::=
              select [all | distinct]
              [top unsigned_integer]
              select_list
              [into_clause]
              [from_clause]
              [where_clause]
              [group_by_clause]
              [having_clause]
              [order_by_clause]
              [compute_clause]
              [read_only_clause]
              [isolation_clause]
              [browse_clause]
              [plan_clause]
    =========================
    This shows us that the isolation clause is attached at the 'end' of the query as a query-level specifier (along with the 'group by', 'order by' and 'having' clauses).
    From this syntax I'd say your proposed query looks correct so ...
    2 - Try running your example to see if you get a syntax error ("When in doubt, try it out!"), eg:
    =========================
    select s.name,c.name
    from sysobjects s, syscolumns c
    where s.id = c.id
    at isolation 0
    go
    name
    sysobjects
    sysobjects
    sysobjects
    ... snip ...
    =========================

  • Report on BEx query with 2 structures (one in rows and one in columns)

    Hi, experts! I have to build a Crystal report on a BEx query with 2 structures, one in the columns (with key figures) and one in the rows. Is it possible to create such a report? When I create one, I can't see the fields in the structures, only the characteristic fields.
    OK, I found the same problem in another thread. Sorry.

    Hey Flora,
    Happy to hear that it's working now.
    Answering your question: again, it depends on the connection and report format you are using. Based on your question, I expect your report output should look like this.
    You cannot map two labels to the series; again, this report format is possible only in a cross tab through Webi. I would suggest you concatenate the material and month into one dimension in Webi, as below.
    I have done the concatenation at the Excel level; I would suggest you do it in Webi. Try to reduce the formulas in Excel as much as possible.
    or
    If you are using a Query Browser connection, then I would suggest you create a separate report which displays actual vs. plan material-wise; here you need to pass the material as a prompt.
    Hope this is clear; please get back to me for any clarification.
