How to join 2 tables with unequal rows without resulting in a cartesian join

Hello,
This is the first time I have ever posted in any forum so please tell me if I should be doing this better.
Basically I have 2 tables with an unequal number of rows. For demonstration purposes, assume these are my 2 tables:
Table 1:
BaseKey  Letters
A        A
A        B
A        C
B        A
B        B
Table 2:
BaseKey  Numbers
A        1
A        2
B        1
B        2
B        3
I need to join them so that the data would appear like this:
BaseKey  Letters  Numbers
A        A        1
A        B        2
A        C        null
B        A        1
B        B        2
B        null     3
Does anyone have any ideas how to do this using T-SQL without creating a cartesian join of 12 rows?
Thanks.

>> This is the first time I have ever posted in any forum so please tell me if I should be doing this better. <<
Please post DDL, so that people do not have to guess what the keys, constraints, Declarative Referential Integrity, data types, etc. in your schema are. Learn to follow ISO-11179 data element naming conventions and formatting rules. Temporal data should
use ISO-8601 formats. Code should be in Standard SQL as much as possible and not local dialect.
This is minimal polite behavior on SQL forums. What you did post is useless! Can you program from it? Neither can we. And we have to do all the extra typing for you.
CREATE TABLE Foo
(base_something  CHAR(1) NOT NULL,
 something_letter CHAR(1) NOT NULL,
 PRIMARY KEY (base_something, something_letter));
INSERT INTO Foo
VALUES ('A', 'A'),
('A', 'B'),
('A', 'C'),
('B', 'A'),
('B', 'B');
CREATE TABLE Bar
(base_something  CHAR(1) NOT NULL,
 something_digit CHAR(1) NOT NULL,
 PRIMARY KEY (base_something, something_digit));
INSERT INTO Bar
VALUES ('A', '1'),
('A', '2'),
('B', '1'),
('B', '2'),
('B', '3');
>> I need to join them so that the data would appear like this
base_something Letters Numbers <<
This looks like you are taking two decks of punch cards and merging them together, without any logical rules, just physical position in their decks relative to the base_something groups.
WITH Foo_Deck
AS
(SELECT base_something, something_letter,
       ROW_NUMBER()
        OVER (PARTITION BY base_something
               ORDER BY something_letter) AS card
   FROM Foo),
Bar_Deck
AS
(SELECT base_something, something_digit,
       ROW_NUMBER()
        OVER (PARTITION BY base_something
               ORDER BY something_digit) AS card
   FROM Bar)
-- a FULL OUTER JOIN keeps the (B, NULL, '3') card that has no letter to pair with,
-- matching the sample output in the question
SELECT COALESCE(F.base_something, B.base_something) AS base_something,
       F.something_letter, B.something_digit
  FROM Foo_Deck AS F
       FULL OUTER JOIN
       Bar_Deck AS B
       ON B.base_something = F.base_something
          AND B.card = F.card;
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

Similar Messages

  • How to create table with resizable row ?

    how to create table with resizable row ?

    I'd suggest you start here:
    http://java.sun.com/docs/books/tutorial/uiswing/components/table.html

  • How to create table with 1 row 1MB in size?

    Hello,
    I am doing some R&D and want to create a Table with 1 row, which is 1 MB in size.
    i.e. I want to create a row which is 1 MB in size.
    I am using a 11g DB.
    I do this in SQL*Plus:
    (1.) CREATE TABLE onembrow  (pk NUMBER PRIMARY KEY, onembcolumn CLOB);
    (2.) Since 1MB is 1024*1024 bytes (i.e. 1048576 bytes) and since in English 1 letter = 1 byte, I do this
    SQL> INSERT INTO onembrow VALUES (1, RPAD('A', 1048576, 'B'));
    1 row created.
    (3.) Now, after committing, I do an analyze table.
    SQL> ANALYZE TABLE onembrow COMPUTE STATISTICS;
    Table analyzed.
    (4.) Now, I check the actual size of the table using this query.
    select segment_name,segment_type,bytes/1024/1024 MB
    from user_segments where segment_type='TABLE' and segment_name='ONEMBROW';
    SEGMENT_NAME    SEGMENT_TYPE    MB
    ONEMBROW        TABLE           .0625
    Why is the size only .0625 MB, when it should be 1 MB?
    Here is the DB Block related parameters:
    SELECT * FROM v$parameter WHERE upper(name) LIKE '%BLOCK%';
      NUM NAME                                                                                   TYPE VALUE 
      478 db_block_buffers                                                                          3 0     
      482 db_block_checksum                                                                         2 TYPICAL
      484 db_block_size                                                                             3 8192  
      682 db_file_multiblock_read_count                                                             3 128   
      942 db_block_checking                                                                         2 FALSE 
    What am I doing wrong here???

    When testing it is necessary to do something that is a reasonably realistic model of a problem you might anticipate appearing in a production system - a row of 1 MB doesn't seem likely to be a useful source of information for "R&D on performance tuning".
    What's wrong with creating millions of rows?
    Here's a cut and paste from a windows system running 11.2.0.3
    SQL> set timing on
    SQL>
    SQL> drop table t1 purge;
    Table dropped.
    Elapsed: 00:00:00.04
    SQL>
    SQL> create table t1
      2  nologging
      3  as
      4  with generator as (
      5     select
      6             rownum id
      7     from dual
      8     connect by
      9             level <= 50
    10  ),
    11  ao as (
    12     select
    13             *
    14     from
    15             all_objects
    16     where   rownum <= 50000
    17  )
    18  select
    19     rownum          id,
    20     ao.*
    21  from
    22     generator       v1,
    23     ao
    24  ;
    Table created.
    Elapsed: 00:00:07.09
    7 seconds to generate 2.5M rows doesn't seem like a problem.  For a modelling example I have one script that generates 6.5M (carefully engineered) rows, with a couple of indexes and a foreign key or two, then collects stats (no histograms) in 3.5 minutes.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Now on Twitter: @jloracle
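    As a side note on the original .0625 MB measurement (my own reading, not from the thread): the size query above filters on segment_type = 'TABLE', so any CLOB data that Oracle stores out of line in a separate LOB segment would not be counted. A minimal sketch that lists every segment belonging to the table (assuming the ONEMBROW table from the question):
    -- List the table segment plus any LOB segments created for the CLOB column,
    -- so space used by out-of-line CLOB storage is visible as well.
    SELECT segment_name, segment_type, bytes/1024/1024 AS mb
      FROM user_segments
     WHERE segment_name = 'ONEMBROW'
        OR segment_name IN (SELECT segment_name
                              FROM user_lobs
                             WHERE table_name = 'ONEMBROW');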

  • How to create table with dynamic rows

    Hi Ppl,
    I have an array, let's say with length n. I want to create a table with 2 columns and the number of rows equal to the array length.
    Let's say the array length is 3 (array[0] = 1, array[1] = 2, array[2] = 3), so the table should have 2 columns and 3 rows.
    After creation of table I want to display each array value in each row.
    Can somebody help me in this asap.
    Thanks \
    Ashish

    Hi, thanks for the reply... actually this is one part of an RTF template. I have put values from XML into an array and now I want to create a table with the number of rows equal to the length of the array, so here the XML will not be useful. Could you please think of it without XML?
    result is like
    lets says below is the array
    array[0] = a
    array[1] = b
    array[2] = c
    array length = 3
    so there should be a table of 2 columns and 3 rows. The second column of the first row will show 'a', the second column of the second row will show 'b', and the second column of the third row will show 'c'.
    Hope this is useful.
    Thanks
    Ashish

  • How to insert a table with variable rows in smart form

    Hi all,
    How to insert a table with variable rows in smart form?
    Any help would be appreciated.
    Regards,
    Mahesh.

    Hi,
    Right click the mouse->create->table
    If you want 5 columns, you need to declare 5 cells in one line type of the table
    Click on Table -> Details, then do the following
    Line Type 1 2 3 4 5
    L1 2mm 3mm etc
    Here specify the width of the columns as many as you want..
    Then, in the header/main area of the table, click Create Table Line with row type L1; 5 cells will appear automatically. In each cell create a text element and display the variable to be printed there.

  • How to create table with row type in smart forms

    How to create a table with a row type in Smart Forms without a line type?
    Please explain the procedure to me.

    HI,
    A table type describes the structure and functional attributes of an internal table in ABAP. In ABAP programs you can reference a table type TTYP defined in the ABAP Dictionary with the command DATA <inttab> TYPE TTYP. An internal table <inttab> is created in the program with the attributes defined for TTYP in the ABAP Dictionary.
    A table type is defined by:
    its line type, that defines the structure and data type attributes of a line of the internal table
    the options for managing and accessing the data ( access mode) in the internal table
    the key ( key definition and key category) of the internal table
    The row type is defined by directly entering the data type, length and number of decimal places or by referencing a data element, structured type ( structure, table or view) or other table type. Or the row type can be a reference type.
    <b>for more info :</b> http://help.sap.com/saphelp_nw2004s/helpdata/en/fc/eb366d358411d1829f0000e829fbfe/content.htm
    Internal table
    Regards
    Sudheer

  • How to construct a table with each row having a button to select

    Hi,
    Imagine I have 5 orders and I want to select one order to view or update on another page.
    How do I construct a table with each row having a button to select it?
    I'm having a hard time doing this. How do I invoke a backing bean method from JavaScript?
    Thanks

    I'm trying to put a <h:commandButton> inside <h:dataTable> like this:
                    <h:dataTable id="books" value="#{BuscarLibros.booksSearched}" var="book">
                        <h:column>
                            <f:facet name="header">
                                <h:outputText value="#{messages.title}"/>
                            </f:facet>
                        </h:column>
                        <h:column>
                            <f:facet name="header">
                                <h:outputText value="#{messages.isbn}"/>
                            </f:facet>
                            <h:outputText value="#{book.isdn}"/>
                        </h:column>
                        <h:column>
                            <h:outputFormat value="#{book.fechaPublicacion}">
                                <f:convertDateTime pattern="ddd/MM/yyyy"/>
                            </h:outputFormat>
                            <h:commandButton action="#{BuscarLibros.prepareUpdateBook}" value="Hello WORLD"/>
                        </h:column>
                    </h:dataTable>
    but it does not call the method.
    If it is outside the table it gets called, but inside it does not. It's strange because Duke's Bookstore example from Sun Java EE does it like that.
    Any clues?

  • How to know the list of tables with no rows inside a schema?

    with reference to
    http://www.ss64.com/orad/USER_TABLES.html
    How to know the list of tables with no rows inside a schema?
    I tried this: select table_name from user_tables where num_rows=0;
    It found one table that is empty.
    So what's the query to return the list of tables in a schema which have no rows?
    thanks

    You can do that only if you have collected the stats properly. Otherwise it's going to show you wrong information.
    Check this out...
    SQL> drop table t
      2  /
    Table dropped.
    SQL> create table t
      2  as
      3  select level no from dual connect by level <=100
      4  /
    Table created.
    SQL> select table_name,num_rows from user_tables where table_name = 'T'
      2  /
    TABLE_NAME                       NUM_ROWS
    T
    SQL> begin
      2      dbms_stats.gather_table_stats(user,'T');
      3  end;
      4  /
    PL/SQL procedure successfully completed.
    SQL> select table_name,num_rows from user_tables where table_name = 'T'
      2  /
    TABLE_NAME                       NUM_ROWS
    T                                     100
    SQL>
    Thanks,
    Karthick.
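    Building on that, a minimal sketch (my own, under the assumption that freshly gathered statistics make USER_TABLES.NUM_ROWS trustworthy):
    -- Gather statistics for the whole schema first, then list the tables with no rows.
    BEGIN
        dbms_stats.gather_schema_stats(user);
    END;
    /
    SELECT table_name
      FROM user_tables
     WHERE num_rows = 0;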

  • Select on table with 1800 rows is slow

    I have a table with 1800 rows. Each entry has a geometry position and a geometry polygon around the position. I am using the polygon to detect which (other) entries are near the current entry.
    In the following test data and the subsequent query, I am filtering on 625 (of 1865) rows and then using the .STContains method to find other rows (the test data is fully matched by this query; in the live database the values are not as regular as in the test data).
    The query takes 6500 ms. In the live database, only 800 records are in the table so far, and it takes 2200 ms.
    select SlowQueryTable.id
    from SlowQueryTable
    inner join dbo.SlowQueryTable as SlowQueryTableSeen
    on SlowQueryTable.[box].STContains(SlowQueryTableSeen.position) = 1
    where SlowQueryTable.userId = 2
    (The query in the live system is even more complex, but this is the main part of it, and even simplified as it is, it just takes too long.)
    This script generates test data and runs the query:
    -- The number table is just needed to generate test data
    CREATE TABLE [dbo].[numbers](
    [number] [int] NOT NULL
    )
    go
    declare @t table (number int)
    insert into @t select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9
    insert into numbers
    select * from (
    select
    t1.number + t2.number*10 + t3.number*100 + t4.number*1000 as x
    from
    @t as t1,
    @t as t2,
    @t as t3,
    @t as t4
    ) as t1
    order by x
    go
    -- this is the table which has the slow query. The Columns [userId], [position] and [box] are the relevant ones
    CREATE TABLE [dbo].SlowQueryTable(
    [id] [int] IDENTITY(1,1) NOT NULL,
    [userId] [int] NOT NULL,
    [position] [geometry] NOT NULL,
    [box] [geometry] NULL,
    constraint SlowQueryTable_primary primary key clustered (id)
    )
    create nonclustered index SlowQueryTable_UserIdKey on [dbo].SlowQueryTable(userId);
    --insert testdata: three users with each 625 entries. Each entry per user has its unique position, and a rectangle (box) around it.
    -- In the database in question, the positions are a bit more random, often tens of entries have the same position. The slow query is nevertheless visible with these testdata
    declare @range int;
    set @range = 5;
    INSERT INTO [dbo].SlowQueryTable (userId,position,box)
    select
    users.number,
    geometry::STGeomFromText('POINT (' + convert(varchar(15), X) + ' ' + convert(varchar(15), Y) + ')',0),
    geometry::STPolyFromText('POLYGON ((' + convert(varchar(15), X - @range) + ' ' + convert(varchar(15), Y - @range) + ', '
    + convert(varchar(15), X + @range) + ' ' + convert(varchar(15), Y - @range) + ', '
    + convert(varchar(15), X + @range) + ' ' + convert(varchar(15), Y + @range) + ', '
    + convert(varchar(15), X - @range) + ' ' + convert(varchar(15), Y + @range) + ','
    + convert(varchar(15), X - @range) + ' ' + convert(varchar(15), Y - @range) + '))', 0)
    from (
    select
    (numberX.number * 40) + 4520 as X
    ,(numberY.number * 40) + 4520 as Y
    from numbers as numberX
    cross apply numbers as numberY
    where numberX.number < (1000 / 40)
    and numberY.number < (1000 / 40)) as positions
    cross apply numbers as users
    where users.number < 3
    CREATE SPATIAL INDEX [SlowQueryTable_position]
    ON [dbo].SlowQueryTable([position])
    USING GEOMETRY_GRID
    WITH (
    BOUNDING_BOX = ( 4500, 4500, 5500, 5500 ),
    GRIDS =(LEVEL_1 = HIGH,LEVEL_2 = HIGH,LEVEL_3 = HIGH,LEVEL_4 = HIGH),
    CELLS_PER_OBJECT = 64, PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    go
    ALTER INDEX [SlowQueryTable_position] ON [dbo].SlowQueryTable
    REBUILD;
    go
    CREATE SPATIAL INDEX [SlowQueryTable_box]
    ON [dbo].SlowQueryTable(box)
    USING GEOMETRY_GRID
    WITH ( BOUNDING_BOX = ( 4500, 4500, 5500, 5500 ) ,
    GRIDS =(LEVEL_1 = HIGH,LEVEL_2 = HIGH,LEVEL_3 = HIGH,LEVEL_4 = HIGH),
    CELLS_PER_OBJECT = 64, PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    go
    ALTER INDEX [SlowQueryTable_box] ON [dbo].SlowQueryTable
    REBUILD;
    go
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    -- this is finally the query. it takes about 6500 ms
    select SlowQueryTable.id
    into #t1
    from SlowQueryTable
    inner join dbo.SlowQueryTable as SlowQueryTableSeen
    on SlowQueryTable.[box].STContains(SlowQueryTableSeen.position) = 1
    --on SlowQueryTable.position.STDistance(SlowQueryTableSeen.position) < 5
    where SlowQueryTable.userId = 2
    drop table #t1
    drop table SlowQueryTable
    drop table numbers
    Using an explicit index hint does do the job, but then the query gets slow if i change the where clause:
    select SlowQueryTable.id
    into #t1
    from SlowQueryTable
    with (index([SlowQueryTable_box]))
    inner join dbo.SlowQueryTable as SlowQueryTableSeen
    on SlowQueryTable.[box].STContains(SlowQueryTableSeen.position) = 1
    where SlowQueryTable.userId = 2
    leads to 600ms, and changing the where clause
    where SlowQueryTable.id = 100
    slows it down again to 1200 ms. Filtering on ID gets massively slowed down when using an index hint on the spatial index.
    Since the table in the live system will grow to 10000+ rows, and the query is called often by users, I badly need a more efficient query.
    Do I have to create a different queries for each use-case, some with index hints and some without?

    I've run your example and can confirm your results. There's a couple of things that I noticed though.
    After looking at query plans, it's not a matter of "with spatial index" vs. "without spatial index". You have two spatial indexes, one on each column (position and box). When you don't hint the "box" spatial index, the query
    uses the "position" spatial index. Because of what they are indexing (points vs. polygons), the "box" spatial index requires a lot more IO. With some (non-spatial) predicates, the "box" spatial index gives better performance,
    with others the "position" one does. I've yet to figure out exactly why (short on time, I might get back to it in future), but you can examine query plans and use the spatial index diagnostic procs (e.g. sp_help_spatial_geometry_index_xml ) in
    addition to the diagnostics you're running to see why and if you can find a better performing plan/index.
    Bear this in mind. Given a choice of multiple spatial indexes, the SQL Server query optimizer is not able to choose (for the most part, IO etc. aside), which one is best. Also, there is usually only one choice of spatial query plan shape, in general. If
    your query is more complex than the one in your example, you might benefit by breaking it in two: one query to filter out all the rows and predicates that don't use a spatial index and one query that uses the spatial index on the subset. I've had good
    luck with this in other situations with complex queries involving spatial predicates. This method may not be applicable to a spatial query as simple as the one in your example, however.
    Hope this helps, Bob 
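    To make the two-step idea concrete, here is a rough sketch (mine, not Bob's exact method, using the tables from the question; whether it helps depends on the plans you actually get):
    -- Step 1: apply the cheap non-spatial predicate first and keep only the candidate rows.
    select id, [box]
    into #candidates
    from dbo.SlowQueryTable
    where userId = 2;
    -- Step 2: run the spatial predicate only against that subset, so the optimizer
    -- can use the spatial index on [position] for the probe side.
    select c.id
    from #candidates as c
    inner join dbo.SlowQueryTable as seen
        on c.[box].STContains(seen.position) = 1;
    drop table #candidates;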

  • To draw a table with multiple rows and multiple columns in a form

    Hi,
       How to draw a table with multiple rows and columns separated by lines in form printing?

    check this
    http://sap-img.com/ts003.htm
    Regards
    Prabhu

  • Join table with additional data

    Hi,
    I'm new to JPA (Toplink Essentials) and I'd like to know how to handle this situation:
    I have an ORDER table, an ITEMS table and a "join table" ORDER_ITEMS which holds references to ordered items with the quantity of each ordered item.
    Many thanks

    I did the mapping as shown in the listing. It works for reading, but for writing JPA returns an error. Can somebody help?
    Thanks
    [Microsoft][SQLServer 2000 Driver for JDBC][SQLServer]Column name 'idexpedice' appears more than once in the result column list.
    Error Code: 264
    Call: INSERT INTO vklad_expedice (IDEXPEDICE, IDVKLAD, VAHA, idexpedice, idvklad) VALUES (?, ?, ?, ?, ?)
    bind => [null, null, 0, 4411, 2]
    @Entity
    public class Expedice implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer idexpedice;
    @Temporal(TemporalType.DATE)
    private Date datum;
    @OneToMany(fetch = FetchType.EAGER, mappedBy="expedice", cascade = CascadeType.ALL)
    private List<ExpediceVklad> vklady;
    @Entity
    public class Vklad implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer idvklad;
    private String nazev;
    @Entity
    @Table(name = "vklad_expedice")
    @IdClass(ExpediceVkladId.class)
    public class ExpediceVklad implements Serializable {
    @Id
    private Integer idexpedice;
    @Id
    private Integer idvklad;
    private Integer vaha;
    @ManyToOne
    @JoinColumn(name = "idexpedice")
    private Expedice expedice;
    @ManyToOne
    @JoinColumn(name = "idvklad")
    private Vklad vklad;
    public class ExpediceVkladId {
    @Id
    private Integer idexpedice;
    @Id
    private Integer idvklad;
    SQL tables
    EXPEDICE
    idexpedice
    datum
    VKLAD
    idvklad
    nazev
    VKLAD_EXPEDICE (this is the join table with additional column)
    idexpedice
    idvklad
    vaha
    Edited by: user10933983 on 31.3.2009 13:47

  • How can I pair with Apple TV without going to set up on Apple TV?

    How can I pair with Apple TV without going to set up on Apple TV?

    Sign up for what? All you need is airplay enabled on ATV, then when iPad is on the same network you double tap home button, swipe (left to right motion) and you will bring up the volume/brightness controls, along with airplay. Tap on airplay (rectangle with triangle in it) and select ATV, toggle ON mirroring.
    The only way to mirror the iPad is wirelessly through ATV or by purchasing the adapter and connecting directly to your TV. Bluetooth via your Blu-Ray player won't help with this.
    http://support.apple.com/kb/TS4085

  • Import the table with 0 rows

    Hi
    I have a problem importing a dump that contains tables with 0 rows.
    When I exported from ORACLE 11.2 64-bit on SERVER 2008 I noticed that the log didn't confirm the tables with 0 rows.
    When I want to import into ORACLE 11.2 64-bit on another SERVER 2008 I get a lot of errors on these tables with 0 rows.
    In the log I get the same tables with at least 1 row, but none with 0 rows.
    I opened my dump in TEXTPAD and I see that it contains "CREATE ....." statements for these tables.
    I don't understand why this happens. I used a FULL dump by SYS; it didn't help.
    This is not the first time I have exported and imported dumps, and there were no errors before.
    I'm using the "EXP" and "IMP" commands and every time it's OK (if that's relevant).
    Why does this happen? Any solutions for this issue?
    Thanks

    I've found (I guess) the solution to this issue.
    Here are two links to the new feature that is called deferred segment creation.
    The reason for this behavior is the 11.2 new feature 'Deferred Segment Creation' – the creation of a table segment is deferred until the first row is inserted.
    As a result, empty tables are not listed in dba_segments and are not exported by the exp utility.
    http://www.nativeread.com/2010/04/09/11gr2-empty-tables-skipped-by-export-deferred-segment-creation/
    http://antognini.ch/2010/10/deferred-segment-creation-as-of-11-2-0-2/
    And this is i've found in official documentation from oracle
    Beginning in Oracle Database 11g Release 2, when creating a non-partitioned heap-organized table in a locally managed tablespace, table segment creation is deferred until the first row is inserted. In addition, creation of segments is deferred for any LOB columns of the table, any indexes created implicitly as part of table creation, and any indexes subsequently explicitly created on the table. The advantages of this space allocation method are the following: a significant amount of disk space can be saved for applications that create hundreds or thousands of tables upon installation, many of which might never be populated; application installation time is reduced. There is a small performance penalty when the first row is inserted, because the new segment must be created at that time.
    To enable deferred segment creation, compatibility must be set to '11.2.0' or higher. You can disable deferred segment creation by setting the initialization parameter DEFERRED_SEGMENT_CREATION to FALSE. The new clauses SEGMENT CREATION DEFERRED and SEGMENT CREATION IMMEDIATE are available for the CREATE TABLE statement. These clauses override the setting of the DEFERRED_SEGMENT_CREATION initialization parameter.
    Note that when you create a table with deferred segment creation (the default), the new table appears in the *_TABLES views, but no entry for it appears in the *_SEGMENTS views until you insert the first row. There is a new SEGMENT_CREATED column in *_TABLES, *_INDEXES, and *_LOBS that can be used to verify deferred segment creation.
    Note:
    The original Export utility does not export any table that was created with deferred segment creation and has not had a segment created for it. The most common way for a segment to be created is to store a row into the table, though other operations such as ALTER TABLE ALLOCATE EXTENTS will also create a segment. If a segment does exist for the table and the table is exported, the SEGMENT CREATION DEFERRED clause will not be included in the CREATE TABLE statement that is executed by the original Import utility.
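    If the goal is simply to make the original exp utility see those empty tables, a minimal sketch based on the options mentioned above (the table names are placeholders):
    -- Force a segment for an existing empty table so exp will export it.
    ALTER TABLE my_empty_table ALLOCATE EXTENT;
    -- Or create new tables with an immediate segment instead of a deferred one.
    CREATE TABLE my_new_table (id NUMBER) SEGMENT CREATION IMMEDIATE;
    -- Or disable the feature going forward.
    ALTER SYSTEM SET deferred_segment_creation = FALSE;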

  • Filtering a table with 19699 rows

    Hello
    I am new to Xcelsius and I am trying to apply a filter component to a table with 19699 rows. It is supposed to show 45 different classifications in the filter. It seems to be using only the first 500 rows to apply the filter. I changed the maximum number of rows in the preferences. What should I do?

    A component cannot handle 20,000 rows; you should filter these in the database by using queries when getting the data into the dashboard. If there is no database connection available, you can use pivot tables to create lists for the different dimensions and then use lookup formulas like MATCH and INDEX to display the data for your chart.
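    For the database-side filtering, a hedged sketch (the table and column names are invented for illustration; bind the classification to a dashboard prompt rather than loading all 19,699 rows into the component):
    SELECT classification, measure_value
      FROM sales_data
     WHERE classification = 'Retail';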

  • Editable table with multiple rows

    Hi All!
    We're trying to develop some application in VC 7.0. That application should read data from some R/3 tables (via standard and custom function modules), then display the data to the user and allow him/her to modify it (or add some data into table rows), and then save it back to R/3.
    What is the best way to do that?
    There's no problem with displaying data.
    But when I try to add something to the table (on the portal interface), I'm able to use only the first row of the table... Even if I fill all fields of that row, I'm not able to add data to the second row, etc.
    Second question: is it possible to display in one table the output of several BAPIs? For example, we have three BAPIs: one displays a user, the second displays that user's subordinates, and the third that user's manager. And we want one resulting table...
    And last: what is the best way to submit data from a table view in VC (portal) to an R/3 table? I understand that we should write some function module that puts the data into R/3. I'm asking what should be done in VC itself: some button, or action...
    Any help will be appreciated and rewarded :o)
    Regards,
    DK

    Here are some former postings:
    Editable table with multiple rows
    and
    Editable table with multiple rows
    Are you on the right SP-level?
    Can you also split up your posting: one question, one posting? This way you get faster answers, as people might just browse the headers.
