Filtering a Power Query Table Based on a Column That's a Table

I have a table produced by an outer join ...
"Bananas" is not one of the [My Fruit] values.  I'd like to filter my join results to only the matching [My Fruit] values, and I thought I could do that without expanding "NewCol".
Am I mistaken in my belief that such a selection can be done without expanding "NewCol"?

Hi Mark Weisman,
I assume each [My Fruit] value is a single value rather than a list. You can try the following:
Table.SelectRows(Join1, each List.Contains([NewCol][Fruit], [My Fruit]))
Note: [Fruit] is the column inside the nested table that contains the values you want to filter on.
Below is my test sample:
let
    Source = Table.FromRows({{"North", "Apples"}, {"North", "Orange"}, {"West", "Bananas"}, {"West", "Lemon"}}, {"Region", "Fruit"}),
    Group = Table.Group(Source, {"Region"}, {{"NewCol", each _, type table}}),
    Select = Table.SelectRows(Group, each List.Contains([NewCol][Fruit], "Lemon"))
in
    Select
Regards,

Similar Messages

  • How to populate a table based on a row selection from another table.

    Hi, I just started to use ADF BC and Faces. Could someone help me with, or point me to a solution for, the following scenario?
    Using a search component, a table is displayed as the search result. If I select any row in the resulting table, I need to populate another table at the bottom of the same page from another view. These two tables are related by primary key. May I know how to populate a table based on a row selection from another table? Thanks
    ganesh

    I understand your requirement; the tutorial doesn't cover the Association between the views that lets you create a Master-Detail or, in DB parlance, a Parent-Child relationship.
    I will assume that we are dealing with two entities here: Department and Employees, where a particular Department has many Employees and hence a Parent-Child relationship.
    Firstly, you need to create an Association between the two Entities, Department and Employees. You can do that by right-clicking on the model's entity and then associating the two entities with the appropriate key, say DepartmentId.
    Once you have done that, you need to link the two entities in the View section with this Association you created. Then go to the AppModule and make sure that in the Available View Objects, 'EmployeesView' appears under 'DepartmentView' as "EmployeesView via <link you created>". Shuttle 'DepartmentView' to the Data Model on the right, and then shuttle "EmployeesView via <link you created>" to the Data Model under 'DepartmentView'.
    This will then be reflected in your Data Controls. After that, you simply drag this View onto your page as a Master-Detail form; when you run the page, any row selected in the Master table will display the corresponding data in the Detail table.
    Also, refer to this link: [Master-Detail|http://baigsorcl.blogspot.com/2010/03/creating-master-detail-form-in-adf.html]
    Hope this helps.

  • Source table has an xml column that I need to query before splitting into 2 tables.

    Hi
    I have a source table with an xml column that I need to query on the nodes and then, depending on what data is in each node, split it into 2 different tables. I don't think I can use a Conditional Split, so I am looking at just a straight Execute SQL Task.
    Has anyone got any ideas on any other way of doing this?
    Thanks for the help in advance.
    Dave

    Hi TheDish,
    Your goal can arguably be accomplished in pure T-SQL using XPath or XQuery.
    E.g. XQuery can retrieve the nodes as explained in http://sqljudo.wordpress.com/2013/12/02/xpath-for-the-sql-server-dba/
    Then you can use a WHERE clause to extract and insert the nodes that go to one table or the other.
    I don't see why SSIS is necessary, but you could run the same queries in two Data Sources, or in Execute SQL Tasks (that capture a result set).
    Arthur
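    For reference, a minimal T-SQL sketch of that approach (not from the thread): it assumes a hypothetical SourceTable(Id, Payload XML) in which an <Item type="..."> attribute decides which destination table each node belongs to.
    -- Shred the XML once per destination and route the nodes by their type attribute.
    -- All table, column and node names here are hypothetical.
    INSERT INTO TableA (Id, ItemValue)
    SELECT s.Id, n.value('.', 'varchar(100)')
    FROM   SourceTable AS s
    CROSS APPLY s.Payload.nodes('/Root/Item') AS x(n)
    WHERE  n.value('(@type)[1]', 'varchar(10)') = 'A';

    INSERT INTO TableB (Id, ItemValue)
    SELECT s.Id, n.value('.', 'varchar(100)')
    FROM   SourceTable AS s
    CROSS APPLY s.Payload.nodes('/Root/Item') AS x(n)
    WHERE  n.value('(@type)[1]', 'varchar(10)') = 'B';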

  • Finding a TABLE based on a COLUMN NAME...

    I need to find all tables that have a common column name.
    Upon searching the Forum, I found this nice nugget:
    How to find the table in a schema if I only know a particular column name
    The problem is, my USER_TAB_COLUMNS is EMPTY!
    If it had worked, then my statement would be:
    select * from USER_TAB_COLUMNS
    where COLUMN_NAME LIKE '%TEST%'
    and OWNER='ME';
    I get "no rows selected" (though I know they exist)
    if I do:
    select * from USER_TAB_COLUMNS
    I also get "no rows selected"
    Any ideas!
    Thanks!
    KSL.

    The user_xxx data dictionary views show all of the whatevers that are owned by the logged-in user. So, if you are logged in as user1, the user_tab_columns view will only show tables and views owned by user1.
    The all_xxx dictionary views show all of the whatevers that the logged-in user has access to. So if you are logged in as user1 and user2 gave you select privileges on tablea, then you would see tablea in all_tab_columns.
    The dba_xxx views show all objects in the database, but are generally only available to privileged users.
    If you have access to the dba_ views, try:
    SELECT owner, object_type
    FROM dba_objects
    WHERE UPPER(object_name) = 'ANL';
    It is possible that the table and/or its columns were created with double quotes and so are case-sensitive. It may also be a synonym pointing to one of the tables that you got from your query.
    If you are only using all_tab_columns, then it is possible that the user you are logged in as does not have privileges on the table anl and so would not see it in the view.
    A long shot, but are you sure you are in the right database?
    John
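    For reference, the same search can be run against ALL_TAB_COLUMNS, which (unlike USER_TAB_COLUMNS) does have an OWNER column; the '%TEST%' pattern and the 'ME' owner are the ones from the post:
    SELECT owner, table_name, column_name
    FROM   all_tab_columns
    WHERE  column_name LIKE '%TEST%'
    AND    owner = 'ME';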

  • Import flat file to multiple tables based on identifier column

    Hello,
    I am trying to set up a package that will import one pipe-delimited flat file (a utility bill) into multiple data tables based on the value of the first column.  I have been told it is similar in format to an EDI file, but there are some differences.
    The number of columns is consistent for records whose first column is the same.  Meaning a record that has '00' in the first column will always have 10 columns; a record that has '01' in the first column will always have 9 columns; etc.
    Each value in the first column represents a separate destination data table.  Meaning a record that has '00' in the first column should be output to table '00'; a record that has '01' in the first column should be output to table '01'; etc.  All destination tables reside on the same SQL Server.
    Identifier columns can repeat multiple times throughout the flat file.  Meaning a record that starts with '01' may be repeated multiple times in the same file.
    Sample Data:
    00|XXXXXXXX|XXX|XXXXXXXX|XXXXXX|XXXX|X|XXXXXXXXXX|XX|XXXXX
    01|XXXXXXXXXXX|XXX|XXXXXXXX|XXXXX|XXXXXXXXXXXXXXXXXXXX|XXXXXXXXXX|XXXXXXX|XXXXXXXXXXXXXX
    02|XXXXXXXXXXX|XXXXXXXX|XXXXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX
    04|XXXXXXXXXXX|XXXXXXXXXXXXX|XXX|XXXXXXXX
    05|XXXXXXXXXXX|XXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    01|XXXXXXXXXXX|XXX|XXXXXXXX|XXXXX|XXXXXXXXXXXXXXXXXXXX|XXXXXXXXXX|XXXXXXX|XXXXXXXXXXXXXX
    02|XXXXXXXXXXX|XXXXXXXX|XXXXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX
    04|XXXXXXXXXXX|XXXXXXXXXXXXX|XXX|XXXXXXXX
    Any help would be appreciated.

    Hi koldar.308,
    If there are few distinct values in the first column, we can use a Flat File Source to connect to that flat file, then use a Conditional Split Transformation to split the rows on the first column, and then load the data into multiple tables with OLE DB Destinations based on the outputs of the Conditional Split.
    After testing the issue in my environment, please refer to the following steps to achieve this requirement:
    Drag a Flat File Source and connect it to the flat file with a Flat File Connection Manager.
    Drag a Conditional Split Transformation and connect it to the Flat File Source.
    Double-click the Conditional Split Transformation and add one output per first-column value ('00', '01', '02', etc.).
    Drag the same number of OLE DB Destinations as there are Conditional Split outputs, and connect each one to the Conditional Split using one of its case outputs.
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • A table based on 400 columns

    Hi ,
    What is the best way to create a dummy table which has 400 columns, the column names being known?
    Thanks.

    Some of Oracle's limits are based on the checkbox-on-the-sales-brochure theory ... not on relational reality.
    I have a table I am working with right now that has 921 columns. It was designed by someone else at some distant point in the past and comes from a 1 PB data warehouse I am working with.
    I can not tell you what I think of the design in this forum and maintain my relationship with Oracle: Use your imagination and somewhere in the phrase you dream up use the phrase "full frontal."
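    For what it's worth, a minimal PL/SQL sketch that generates the 400-column DDL, assuming the known column names sit in a hypothetical staging table COL_LIST(COL_NAME) and that every dummy column can be VARCHAR2(100):
    DECLARE
      l_sql VARCHAR2(32767) := 'CREATE TABLE dummy_tab (';
    BEGIN
      -- COL_LIST is a hypothetical table holding the 400 known column names.
      FOR r IN (SELECT col_name FROM col_list ORDER BY col_name) LOOP
        l_sql := l_sql || r.col_name || ' VARCHAR2(100),';
      END LOOP;
      l_sql := RTRIM(l_sql, ',') || ')';
      EXECUTE IMMEDIATE l_sql;
    END;
    /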

  • Query to find the partitioned column(s) of a table.

    Guys,
    I have a partitioned table. I need to know which column(s) of the table have been selected for partitioning.
    But none of the views below provide the info:
    DBA_TABLES
    DBA_TAB_PARTITIONS
    DBA_TAB_SUBPARTITIONS
    Any thoughts, please?
    Regards,
    Bhagat

    Guys,
    I've an index which is partitioned.
    From the result set below, would someone help me understand the range of values that each partition would hold?
    SQL> SELECT INDEX_NAME,PARTITION_NAME,HIGH_VALUE FROM DBA_IND_PARTITIONS WHERE INDEX_NAME='FACT_605';
    INDEX_NAME                     PARTITION_NAME                 HIGH_VALUE
    FACT_605                       P_A8_2114                      'A8', 2114, MAXVALUE
    FACT_605                       P_G8_2114                      'G8', '2114', MAXVALUE
    FACT_605                       Z_LAST_PARTITION               MAXVALUE, MAXVALUE, MAXVALUE
    Thanks,
    Bhagat
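    For reference, the partitioning key columns themselves are exposed in the *_PART_KEY_COLUMNS data dictionary views (DBA_/ALL_/USER_), with *_SUBPART_KEY_COLUMNS for subpartition keys; a minimal query, with placeholder owner and object names:
    SELECT name, object_type, column_name, column_position
    FROM   dba_part_key_columns
    WHERE  owner = 'SCOTT'            -- placeholder schema
    AND    name  = 'MY_PART_TABLE'    -- placeholder table (or index) name
    ORDER BY column_position;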

  • Count Distinct based on another column in the same table

    Hello,
    My question in short: is it possible to add a new column to a view which holds the DISTINCT COUNT of the values of another column in the same view?
    For example, in the table below the column "Distinct Count of Occurence" shows how many distinct values a person has in the Occurence column. So AAA has 1 and 2, therefore it is 2 distinct values, etc.
    My issue is that I can retrieve the unique values with a count over Occurence, but I cannot add the new column that would attach those counts to the corresponding Persons in the above table.
    Is there an easy way to achieve this at the DWH level, or should it be done with MDX in the cube?
    Thanks

    Hi,
    Below is a solution that adds the count with a window function in the view; maybe this will help.
    CREATE TABLE #TMP
    (
        PERSON VARCHAR(10),
        OCCURENCE SMALLINT
    );
    --DROP TABLE #TMP
    INSERT INTO #TMP (PERSON, OCCURENCE)
    VALUES
    ('AAA','1'),
    ('AAA','2'),
    ('BBB','1'),
    ('BBB','1'),
    ('BBB','1'),
    ('CCC','1'),
    ('CCC','2'),
    ('CCC','3');
    --TRUNCATE TABLE #TMP
    WITH CTE
    AS
    (
        SELECT PERSON
              ,OCCURENCE
              -- DENSE_RANK (rather than ROW_NUMBER) so repeated values are not over-counted
              ,DENSE_RANK() OVER (PARTITION BY PERSON ORDER BY OCCURENCE) AS RN
        FROM #TMP
    )
    SELECT PERSON, MAX(RN) AS DISTINCT_OCCURENCES
    FROM CTE
    GROUP BY PERSON;
    Regards,
    Reshma
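    The question actually asks for the distinct count as an extra column on every row; a sketch of one common way to do that without grouping (using the same #TMP table as above, and assuming OCCURENCE has no NULLs):
    SELECT PERSON,
           OCCURENCE,
           -- forward DENSE_RANK + backward DENSE_RANK - 1 = distinct values per PERSON
           DENSE_RANK() OVER (PARTITION BY PERSON ORDER BY OCCURENCE ASC)
         + DENSE_RANK() OVER (PARTITION BY PERSON ORDER BY OCCURENCE DESC) - 1 AS DISTINCT_OCCURENCES
    FROM #TMP;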

  • How to find a specific word (from a list) in a sentence located in a different table, and build a column that contains the name

    Hi all, 
    I have 2 tables:
    1) Lookup table- "Names"- which contains ID# and product description (~100 names that can change).
    2) Data table - "Data" - which contains a column of sentences within which the description also appears. In addition, there is a quantity column.
    Example:
    Names table -
    ID   Name
    1    Glass
    2    Paper
    3    Alum'
    4    Metal for construction
    100  Hydro therapy products
    And Data table -
    Description                                                                  Qty
    Someone went to use the machine. 1.80 cm, Alum', for the rest of the day     6
    Metal for construction is the major component in our system                  7
    Kandoo, Delgado, Glass                                                       17
    Greece needs more paper, 200 ft.                                             45
    *there is no fixed position at which the lookup value can be found in the sentence*
    My purpose is to import the Data table while performing a calculation that checks, for each row, which lookup value fits, and then adds it to a new column "Name".
    Meaning, this table should be the new data after import:
    Description                                                                  Qty   Name
    Someone went to use the machine. 1.80 cm, Alum', for the rest of the day     6     Alum'
    Metal for construction is the major component in our system                  7     Metal for construction
    Kandoo, Delgado, Glass                                                       17    Glass
    Greece needs more Paper, 200 ft.                                             45    Paper
    Appreciate all support! 
    I lost hope and I want to avoid macros as much as possible.
    Thanks!
    עמית

    Hi Amit,
    I updated the file. The solution uses the following query formula:
    let
    Source = Excel.CurrentWorkbook(){[Name="DataTbl"]}[Content],
    #"Added Custom" = Table.AddColumn(Source, "Custom", each IndexTbl),
    #"Expand Custom" = Table.ExpandTableColumn(#"Added Custom", "Custom", {"Scope"}, {"Scope"}),
    #"Filtered Rows" = Table.SelectRows(#"Expand Custom", each Text.Contains([Description], [Scope])),
    #"Grouped Rows" = Table.Group(#"Filtered Rows", {"Topic", "Component Code", "Project", "Date", "TotalQty", "TotalQtyActual", "ID", "SOW #", " SOW#(lf)Rev", "Scope Driver", "Scope Source", "Scope Source #", "Cost Code", "Type", "Bdg", "System Code", "IMF Number", "IMF Description", "WG", "C or NC", "C/e", "LLE", "Dwg Ref", "Comments ", "Description", "Size", "Qty", "Unit", "Materials Cost/Unit ", "Sub-Total per Unit €", "Unit Rate ", "Total", "DIV Code", "SYS Code", "HelperForDynfilter"}, {{"Scope", each Text.Combine([Scope], " | "), type text}})
    in
    #"Grouped Rows"
    It is based on two queries: the first imports the table "IndexTbl", the second imports "DataTbl", which is then extended according to the expression above.
    Hope this helps.
    Gil

  • How To Create Table View With Same Column name But Different Table?

    Hi All,
    I have a problem creating a table view with the same column names but from different tables.
    The tables that I have:
    Table - PAC051MPROFORMA
    Column - mrn,visitid
    Table - PAC051TPROFORMA
    Column - mrn,visitid
    Table - PAC052MTRANSBILL
    Column - mrn,visitid
    Then I want to create a view over these tables. This is my SQL:
    CREATE VIEW pacviewproforma (mrn,visitid,mrn,visitid,mrn,visitid)
    As Select PAC051MPROFORMA.mrn,PAC051MPROFORMA.visitid,PAC051TPROFORMA.mrn,PAC051TPROFORMA.visitid,PAC052MTRANSBILL.mrn,PAC052MTRANSBILL.visitid
    where
    *(a.PAC051MPROFORMA.mrn=PAC051TPROFORMA.mrn)*
    and
    *(a.PAC051TPROFORMA.mrn=PAC052TRANSBILL.mrn)*
    That SQL returns this error: ORA-00957: duplicate column name
    Then I modified the SQL to:
    CREATE VIEW pacviewproforma (mrn,visitid)
    As Select PAC051MPROFORMA.mrn,PAC051MPROFORMA.visitid,PAC051TPROFORMA.mrn,PAC051TPROFORMA.visitid,PAC052MTRANSBILL.mrn,PAC052MTRANSBILL.visitid
    where
    *(a.PAC051MPROFORMA.mrn=PAC051TPROFORMA.mrn)*
    and
    *(a.PAC051TPROFORMA.mrn=PAC052TRANSBILL.mrn)*
    This time the error returned is: ORA-01730: invalid number of column names specified
    What should I do?
    Thanks...

    Hi,
    SQL> CREATE VIEW pacviewproforma (mrn,visitid,mrn,visitid,mrn,visitid)
      2  As Select
      3  PAC051MPROFORMA.mrn,
      4  PAC051MPROFORMA.visitid,
      5  PAC051TPROFORMA.mrn,
      6  PAC051TPROFORMA.visitid,
      7  PAC052MTRANSBILL.mrn,
      8  PAC052MTRANSBILL.visitid
      9  from PAC051MPROFORMA,PAC051TPROFORMA,PAC052MTRANSBILL
    10  where
    11  (PAC051MPROFORMA.mrn=PAC051TPROFORMA.mrn)
    12  and
    13  (PAC051TPROFORMA.mrn=PAC052MTRANSBILL.mrn);
    CREATE VIEW pacviewproforma (mrn,visitid,mrn,visitid,mrn,visitid)
    ERROR at line 1:
    ORA-00957: duplicate column name
    Please give different names to each column.
    Something like this..
    SQL> CREATE OR REPLACE VIEW pacviewproforma (MPROFORMA_mrn,MPROFORMA_visitid,TPROFORMA_mrn,TPROFORMA
    _visitid,MTRANSBILL_mrn,MTRANSBILL_visitid)
      2  As Select
      3  PAC051MPROFORMA.mrn,
      4  PAC051MPROFORMA.visitid,
      5  PAC051TPROFORMA.mrn,
      6  PAC051TPROFORMA.visitid,
      7  PAC052MTRANSBILL.mrn,
      8  PAC052MTRANSBILL.visitid
      9  from PAC051MPROFORMA,PAC051TPROFORMA,PAC052MTRANSBILL
    10  where
    11  (PAC051MPROFORMA.mrn=PAC051TPROFORMA.mrn)
    12  and
    13  (PAC051TPROFORMA.mrn=PAC052MTRANSBILL.mrn);
    View created.
    SQL> DESC  pacviewproforma;
    Name                                      Null?    Type
    MPROFORMA_MRN                                      NUMBER
    MPROFORMA_VISITID                                  NUMBER
    TPROFORMA_MRN                                      NUMBER
    TPROFORMA_VISITID                                  NUMBER
    MTRANSBILL_MRN                                     NUMBER
    MTRANSBILL_VISITID                                 NUMBER
    ORA-01730: invalid number of column names specified
    The list of column names you specify in CREATE VIEW must match the SELECT list of the view.
    Twinkle

  • Table.Join/Merge in Power Query takes extremly long time to process for big tables

    Hi,
    I tried to simply merge/inner join two big tables (one has 300,000+ rows after filtering and the other has 30,000+ rows after filtering) in PQ. However, for this simple join operation, PQ took at least 10 minutes to load the preview (I killed the Query Editor after 10 minutes of processing).
    Here's how I did the join: I first loaded the tables into the workbook, then did the filtering for each table, and finally used the Merge function to do the join on a common field.
    Did I do anything wrong here? Or is there any way to improve the load efficiency?
    P.S. No custom SQL was used during the process. I was hoping the so-called "Query Folding" could help speed up the process, but it seems it didn't work here.
    Thanks.
    Regards,
    Qilong

    Hi!
    You should import the source tables into Access. This will speed up Power Query's work several times over.

  • How to create a dynamic RTF report which creates dynamic columns based on dynamic column selection from a table?

    Hi All,
    Suppose I have a table whose structure changes frequently, on a daily basis.
    For example, DESC my_table gives the following column names on Day 1:
    SQL > desc my_table;
    Output
    Name
    Age
    Phone
    On Day 2, two more columns are added, viz, Address and Salary.
    SQL > desc my_table;
    Output
    Name
    Age
    Phone
    Address
    Salary
    Now I want to create a dynamic RTF report which would fetch data from ALL columns of my_table on a daily basis. For that I have defined a concurrent program with XML as the output type and have attached a data template/data definition to it, which takes XML as input and gives the final output of the concurrent program in an EXCEL layout. I am able to do this for a constant number of columns, but I don't know how to do it when the number of columns to be displayed changes dynamically.
    For Day 1 my XML file should be like this.
    <?xml version="1.0" encoding="UTF-8"?>
    <dataTemplate name="XYZ" description="iExpenses Report" Version="1.0">
    <dataQuery>
    <sqlStatement name="Q2">
    <![CDATA[
    SELECT Name
    ,Age
    ,Phone
    FROM my_table
    ]]>
    </sqlStatement>
    </dataQuery>
    <dataStructure>
    <group name="G_my_table" source="Q2">
      <element name="Name" value="Name" />
      <element name="Age" value="Age" />
      <element name="Phone" value="Phone" />
    </group>
    </dataStructure>
    </dataTemplate>
    And my Day 1, EXCEL output from RTF template should be like this.
    Name     Age     Phone
    Swapnill     23     12345
    For Day 2 my XML file should be like this, with 2 new columns selected in the SELECT clause.
    <?xml version="1.0" encoding="UTF-8"?>
    <dataTemplate name="XYZ" description="iExpenses Report" Version="1.0">
    <dataQuery>
    <sqlStatement name="Q2">
    <![CDATA[
    SELECT Name
    ,Age
    ,Phone
    ,Address
    ,Salary
    FROM my_table
    ]]>
    </sqlStatement>
    </dataQuery>
    <dataStructure>
    <group name="G_my_table" source="Q2">
      <element name="Name" value="Name" />
      <element name="Age" value="Age" />
      <element name="Phone" value="Phone" />
      <element name="Address" value="Address" />
      <element name="Salary" value="Salary" />
    </group>
    </dataStructure>
    </dataTemplate>
    And my Day 2, EXCEL output from RTF template should be like this.
    Name     Age     Phone     Address     Salary
    Swapnill     23     12345         Madrid     100000
    Now, I don't know how to do the following things.
    Make the XML dynamic: on Day 1 there must be 3 columns in the SELECT statement and on Day 2, 5 columns. I want to create one dynamic XML that does not need to be changed when new columns are added to my_table. I don't know how to create this query and also create the corresponding elements below it.
    Make the RTF template dynamic: on Day 1 there must be 3 columns in the EXCEL output and on Day 2, 5 columns. I want to create a dynamic RTF template which would show all the columns selected in the dynamic XML. I don't know how the RTF will create new XML tags and how it will know where to place them in the report. That is, I can create the RTF template on Day 1 by loading XML data for 3 columns and placing 3 XML tags in the template, but how will it create and place tags for the new columns on Day 2?
    Hope you got my requirement; it's a challenging one. Please let me know how I can implement the required solution using RTF dynamically, without any manual intervention.
    Regards,
    Swapnil K.
    Message was edited by: SwapnilK

    Hi All,
    I was able to fulfil the above requirement. Now I am stuck at the point below. Need your help!
    Is there any way to UPDATE the XML file attached to a Data Definition (XML Publisher > Data Definition) using a standard package, a procedure call, or maybe an API from the backend? I am creating the XML dynamically and I want to attach it to its Data Definition programmatically using SQL.
    Please let me know if there is any Oracle functionality to do this.
    If not, please let me know the standard directories on the application/database server where the XML files attached to Data Definitions are stored.
    For example, /$APPL_TOP/ar/1.0/sql or something similar.
    Regards,
    Swapnil K.

  • Insert values to one table based on a value inserted into another table

    Hi,
    I've got a form based off a report which creates a new project. I've added an additional process to this form to insert four new values into another table as soon as the new project is created and the PK for that project is generated. This was working last week (of course!) and now seems not to work at all. It's complaining that the PK I was getting from my first insert was null. Here is one of the statements in my process I'm trying to run:
    insert into week_group values(week_group_seq.nextval, (SELECT trunc(NEXT_DAY(SYSDATE, 'FRIDAY')) FROM dual), 0, '', :P45_PROJECT_SEQ, sysdate, :APP_USER);
    The complaint I get is that it's getting null where :P45_PROJECT_SEQ should be.
    Thoughts?
    Thanks,
    Jon

    Hi Andy,
    Thanks for the tip. Those two values didn't match; I updated them so they do, and I'm still getting a "cannot insert NULL..." error.
    When I turn on debug I see that I'm getting the PK and I see the value. Here's my debug output:
    0.24: ...Process "Get PK": PLSQL (AFTER_SUBMIT) declare function get_pk return varchar2 is begin for c1 in (select PROJECT_SEQ.nextval next_val from dual) loop return c1.next_val; end loop; end; begin :P45_PROJECT_SEQ := get_pk; end;
    0.25: ...Session State: Saved Item "P45_PROJECT_SEQ" New Value="252"
    0.25: ...Process "Process Row of PROJECT": DML_PROCESS_ROW (AFTER_SUBMIT) #OWNER#:PROJECT:P45_PROJECT_SEQ:PROJECT_SEQ|IUD
    0.26: ...Session State: Save "P45_PROJECT_SEQ" - saving same value: "252"
    0.26: ...Process "reset page": CLEAR_CACHE_FOR_PAGES (AFTER_SUBMIT) 45
    0.27: Nulling cache for application "120" page: 45
    0.27: ...Process "Add Week Groups": PLSQL (AFTER_SUBMIT) insert into week_group values(week_group_seq.nextval, (SELECT trunc(NEXT_DAY(SYSDATE, 'FRIDAY')) FROM dual), 0, '', :P45_PROJECT_SEQ, sysdate, :APP_USER); insert into week_group values(week_group_seq.nextval, (SELECT trunc(NEXT_DAY(SYSDATE, 'FRIDAY') +
    0.28: Encountered unhandled exception in process type PLSQL
    0.28: Show ERROR page...
    0.28: Performing rollback...
    I notice that when it runs my process "Add Week Groups" it's not displaying all of the SQL. But the SQL is fine, it's right here:
    insert into week_group values(week_group_seq.nextval, (SELECT trunc(NEXT_DAY(SYSDATE, 'FRIDAY')) FROM dual), 0, '', :P45_PROJECT_SEQ, sysdate, :APP_USER);
    Hmmm....what about the "reset page" action in the last of the 0.26 lines?
    Thanks,
    Jon
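    As a side note (not from the thread), listing the target columns explicitly makes the intent of the insert clearer and guards against column-order changes; the column names below are purely hypothetical, and in Oracle the empty string '' is stored as NULL anyway:
    INSERT INTO week_group (week_group_id, week_ending, status_flag, note, project_seq, created_on, created_by)
    VALUES (week_group_seq.NEXTVAL,
            TRUNC(NEXT_DAY(SYSDATE, 'FRIDAY')),
            0,
            NULL,
            :P45_PROJECT_SEQ,
            SYSDATE,
            :APP_USER);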

  • Sorting A Table Based on single Column

    Hi,
    I wrote the following piece of code to sort a Web Dynpro table:
    wdContext.nodeKMAccess().sortElements(
        new Comparator() {
            public int compare(Object x, Object y) {
                IPrivateDownloadKMHitsReportView.IKMAccessElement rx = (IPrivateDownloadKMHitsReportView.IKMAccessElement) x;
                IPrivateDownloadKMHitsReportView.IKMAccessElement ry = (IPrivateDownloadKMHitsReportView.IKMAccessElement) y;
                int iX = Integer.parseInt(rx.getHits());
                int iY = Integer.parseInt(ry.getHits());
                return (iY - iX);   // descending by Hits
            }
        });
    But it sorts like
    11
    10
    8
    6
    3
    1
    2
    It sorts correctly except for two values: instead of 2, 1 it gives 1, 2. Where is the problem? How can I solve this issue? Can anyone please help me fix this error?
    Regards,
    Divya

    Hi,
    try to change the comparator as follows:
    public int compare(Object obj1, Object obj2) {
        int i1 = ((Integer) obj1).intValue();
        int i2 = ((Integer) obj2).intValue();
        return Math.abs(i1) - Math.abs(i2);
    }
    Regards
    Ayyapparaj

  • Compare two tables based on two columns

    Hi,
    my tables look like this:
    Desc Table A (account)
    Account1_id
    Account2_id
    name,
    empid
    Table B (Bill )
    BillNo
    Advertiserid
    agencyid
    total_value
    I need to pick up the total value from table B where the unique advertiser-agency id combination is the same as the given account1_id - account2_id combination in table A, for each employee id.
    In other words my output should be like:
    Empid | Account1_id (same as advertiserid) | Account2_id (same as agencyid) | sum(total_value) for this advertiser-agency combination
    Objective: get the total value from table B for each unique account1-account2 combination (advertiser-agency, in other words).
    I am not sure if I should use a correlated subquery or how else to handle the situation. Right now I am just checking the two columns separately like this:
    select.......from a,b
    where b.advertiser_id = a.account1_id and b.agencyid = a.account2_id
    Is it correct to do so? I have a feeling that I am missing something if I join them separately like this. Any advice on this?
    I am using Oracle 10g. Hope I am clear. Please let me know. Thanks in advance.

    Here you go:
    SQL> WITH bill AS
      2  (
      3          SELECT 1000 AS billno, 101 AS advertiserid, 102 AS agencyid, 5000 AS total_value FROM DUAL UNION ALL
      4          SELECT 1001, 101, 103, 5000 FROM DUAL UNION ALL
      5          SELECT 1002, 101, 102, 1000 FROM DUAL
      6  ), account AS
      7  (
      8          SELECT 101 AS account1_id, 102 AS account2_id, 'John' AS name, 50 AS empid FROM DUAL UNION ALL
      9          SELECT 101, 103, 'James', 40 FROM DUAL UNION ALL
    10          SELECT 101, 105, 'Joe', 60 FROM DUAL
    11  )
    12  /* End Sample Data */
    13  SELECT a.name
    14       , a.empid
    15       , a.account1_id
    16       , a.account2_id
    17       , SUM(b.total_value)
    18  FROM   account a
    19  JOIN   bill    b ON  a.account1_id = b.advertiserid
    20                   AND a.account2_id = b.agencyid
    21  GROUP BY a.name
    22         , a.empid
    23         , a.account1_id
    24         , a.account2_id
    25  ;
    NAME                 EMPID          ACCOUNT1_ID          ACCOUNT2_ID   SUM(B.TOTAL_VALUE)
    John                    50                  101                  102                 6000
    James                   40                  101                  103                 5000

    Just did a check on my primary and standby using: Primary: SELECT max(sequence#) AS "PRIMARY" FROM v$log_history; PRIMARY 45527 Standby: SELECT max(sequence#) AS "STANDBY", applied FROM v$archived_log GROUP BY applied: STANDBY APP 45476 NO 45528 YES