DELETE QUERY FOR A TABLE WITH MILLION ROWS

Hello,
I have a requirement where I have to compare 2 tables - both having around a million rows - and delete data based on a single column.
DELETE FROM TABLE_A WHERE COLUMN_A NOT IN
(SELECT COLUMN_A FROM TABLE_B)
COLUMN_A has an index defined on it in both tables. Still, the delete is taking a long time. What is the best way to achieve this? Any workaround?
thanks

How many rows are you deleting from this table? If the percentage is large then the better option is (a sketch of these steps follows below):
1) Create a new table holding the rows you want to keep, i.e. where COLUMN_A IN
(SELECT COLUMN_A FROM TABLE_B)
2) TRUNCATE table_A
3) Insert into table_A (select * from the new table)
4) If you have any constraints then maybe they can be disabled.
thanks
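
A minimal sketch of those steps, assuming the TABLE_A/TABLE_B names from the question and no dependent foreign keys (the name TABLE_A_KEEP is made up):
CREATE TABLE table_a_keep NOLOGGING AS
  SELECT * FROM table_a
   WHERE column_a IN (SELECT column_a FROM table_b);  -- the rows that survive the delete
TRUNCATE TABLE table_a;                               -- fast, but DDL: no rollback
INSERT /*+ APPEND */ INTO table_a
  SELECT * FROM table_a_keep;
COMMIT;
DROP TABLE table_a_keep;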

Similar Messages

  • Counting rows for db tables with 500 million+ entries

    Hi
    The SE16 transaction times out in the foreground when showing the number of entries for DB tables with 500 million+ entries. In the background this takes too long as well.
    I am writing a custom report to get the number of records of a table, using the OPEN CURSOR concept to determine the count.
    Is there any other efficient way to read the number of records from such huge tables?
        " u_str_param-p_tabn holds the table name; l_tab_cond is a dynamic WHERE condition
        OPEN CURSOR l_cursor FOR
          SELECT COUNT(*)
            FROM (u_str_param-p_tabn) WHERE (l_tab_cond).
        DO.
          " COUNT(*) returns a single row; PACKAGE SIZE applies only to
          " FETCH ... INTO TABLE, so it is dropped here.
          FETCH NEXT CURSOR l_cursor INTO l_new_count.
          IF sy-subrc NE 0.
            EXIT.
          ELSE.
            l_tot_cnt = l_tot_cnt + l_new_count. " l_tot_cnt holds the total after the loop
            CLEAR l_new_count.
          ENDIF.
        ENDDO.
        CLOSE CURSOR l_cursor.

    Hello,
    That is certainly a huge number of entries!
    Are there any key fields? If so, keep a counter over a key field (for example row_table) together with low and high variables, and select in a DO loop, widening the range by 200,000 entries on each pass. For example, if the key field is docnr:
    DO.
      low_row  = row_table.
      high_row = row_table + 200000.  " advance the range by 200,000 entries per pass
      SELECT docnr FROM my_table INTO TABLE itab  " my_table stands for your table
        WHERE docnr > low_row AND docnr < high_row.
      IF sy-subrc <> 0.
        EXIT.  " no more rows in range
      ENDIF.
      " count the lines of itab, add them to a running total, then FREE itab
      row_table = high_row.
    ENDDO.
    Good luck.
    Antonis

  • How to create table with 1 row 1MB in size?

    Hello,
    I am doing some R&D and want to create a Table with 1 row, which is 1 MB in size.
    i.e. I want to create a row which is 1 MB in size.
    I am using an 11g DB.
    I do this in SQL*Plus:
    (1.) CREATE TABLE onembrow  (pk NUMBER PRIMARY KEY, onembcolumn CLOB);
    (2.) Since 1MB is 1024*1024 bytes (i.e. 1048576 bytes) and since in English 1 letter = 1 byte, I do this
    SQL> INSERT INTO onembrow VALUES (1, RPAD('A', 1048576, 'B'));
    1 row created.
    (3.) Now, after committing, I do an analyze table.
    SQL> ANALYZE TABLE onembrow COMPUTE STATISTICS;
    Table analyzed.
    (4.) Now, I check the actual size of the table using this query.
    select segment_name,segment_type,bytes/1024/1024 MB
    from user_segments where segment_type='TABLE' and segment_name='ONEMBROW';
    SEGMENT_NAME    SEGMENT_TYPE    MB
    ONEMBROW        TABLE           .0625
    Why is the size only .0625 MB, when it should be 1 MB?
    Here are the DB block-related parameters:
    SELECT * FROM v$parameter WHERE upper(name) LIKE '%BLOCK%';
      NUM NAME                                                                                   TYPE VALUE 
      478 db_block_buffers                                                                          3 0     
      482 db_block_checksum                                                                         2 TYPICAL
      484 db_block_size                                                                             3 8192  
      682 db_file_multiblock_read_count                                                             3 128   
      942 db_block_checking                                                                         2 FALSE 
    What am I doing wrong here???

    When testing, it is necessary to do something that is a reasonably realistic model of a problem you might anticipate appearing in a production system - a single row of 1MB doesn't seem likely to be a useful source of information for "R&D on performance tuning".
    What's wrong with creating millions of rows?
    Here's a cut and paste from a windows system running 11.2.0.3
    SQL> set timing on
    SQL>
    SQL> drop table t1 purge;
    Table dropped.
    Elapsed: 00:00:00.04
    SQL>
    SQL> create table t1
      2  nologging
      3  as
      4  with generator as (
      5     select
      6             rownum id
      7     from dual
      8     connect by
      9             level <= 50
    10  ),
    11  ao as (
    12     select
    13             *
    14     from
    15             all_objects
    16     where   rownum <= 50000
    17  )
    18  select
    19     rownum          id,
    20     ao.*
    21  from
    22     generator       v1,
    23     ao
    24  ;
    Table created.
    Elapsed: 00:00:07.09
    7 seconds to generate 2.5M rows doesn't seem like a problem.  For a modelling example I have one script that generates 6.5M (carefully engineered) rows, with a couple of indexes and a foreign key or two, then collects stats (no histograms) in 3.5 minutes.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Now on Twitter: @jloracle

  • Query for Empty Tables.

    Hi All,
    Can someone tell me a query to find "empty tables", i.e. tables having no rows, in Oracle 8i?
    I tried Google, but the results are not satisfactory, maybe due to the 8i version.
    Please guide me.
    Regards.

    982164 wrote:
    Hi Hoek,
    Actually we upgraded our database to 11g. We imported our data from 8i to 11gr2 with the IMP utility successfully. It imported our data 100%, i.e. when we import the 8i dmp data into 11g, there is never a problem. The problem arises now when we further use the IMP/EXP utility in 11g.
    So everything in Oracle 8 is working the way you expect, and the fact that you also have an Oracle 8 database has nothing to do with this problem. Is that what you're saying?
    When we run the EXP utility in 11g, it doesn't export empty tables.
    If the problem is entirely in Oracle 11, then you should be able to use something like XMLQUERY to find the 0-row tables, if you really need to. (I think you don't.)
    EXP can export tables with 0 rows. I just tried it in Oracle 11, and part of the feedback I got was:
    . . exporting table                       ZIPCODES          0 rows exported
    . . exporting table                  ZONEORDER_TBL          5 rows exported
    . . exporting table                       ZONE_TBL          6 rows exported
    . . exporting table                            ZOO          2 rows exported
    . . exporting table                    Z_HIERARCHY          3 rows exported
    . . exporting table                         Z_TEST          3 rows exported
    . . exporting table                        Z_TEST4          0 rows exported
    If it's not exporting 0-row tables for you, find out why, and fix that problem. Post your exp control file.
    Someone told me to insert at least 1 row into the empty tables, then they will be exported.
    And then, after importing them, TRUNCATE those tables? That's a lot of work.
    That's why I am searching for the empty tables. Am I going the right way?
    I don't think so.
    Find and fix the problem with exp.
    If you can't get exp to work, then look into Data Pump.
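    If you do move to Data Pump, the export tool is expdp rather than exp; a minimal sketch (credentials, schema name and directory object are placeholders):
    expdp scott/tiger schemas=scott directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott.log
    Unlike the original exp, Data Pump also exports empty tables for which no segment has been created.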

  • Select on table with 1800 rows is slow

    I have a table with 1800 rows. Each entry has a geometry position and a geometry polygon around the position. I am using the polygon to detect which (other) entries are near the current entry.
    In the following test data and the subsequent query, I am filtering on 625 (of 1865) rows and then using the .STContains method to find other rows (the test data is fully found by this query; in the live database the values are not as regular as in the test data).
    The query takes 6500 ms. In the live database, only 800 records are (yet) in the table, and it takes 2200 ms.
    select SlowQueryTable.id
    from SlowQueryTable
    inner join dbo.SlowQueryTable as SlowQueryTableSeen
    on SlowQueryTable.[box].STContains(SlowQueryTableSeen.position) = 1
    where SlowQueryTable.userId = 2
    (The query in the live system is even more complex, but this is the main part of it, and even simplified as it is, it just takes too long.)
    This script generates test data and runs the query:
    -- The number table is just needed to generate test data
    CREATE TABLE [dbo].[numbers](
    [number] [int] NOT NULL
    )
    go
    declare @t table (number int)
    insert into @t select 0 union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9
    insert into numbers
    select * from
    (
    select
    t1.number + t2.number*10 + t3.number*100 + t4.number*1000 as x
    from
    @t as t1,
    @t as t2,
    @t as t3,
    @t as t4
    ) as t1
    order by x
    go
    -- this is the table which has the slow query. The Columns [userId], [position] and [box] are the relevant ones
    CREATE TABLE [dbo].SlowQueryTable(
    [id] [int] IDENTITY(1,1) NOT NULL,
    [userId] [int] NOT NULL,
    [position] [geometry] NOT NULL,
    [box] [geometry] NULL,
    constraint SlowQueryTable_primary primary key clustered (id)
    )
    go
    create nonclustered index SlowQueryTable_UserIdKey on [dbo].SlowQueryTable(userId);
    -- insert test data: three users, each with 625 entries. Each entry per user has its unique position, and a rectangle (box) around it.
    -- In the database in question, the positions are a bit more random; often tens of entries share the same position. The slow query is nevertheless visible with this test data.
    declare @range int;
    set @range = 5;
    INSERT INTO [dbo].SlowQueryTable (userId,position,box)
    select
    users.number,
    geometry::STGeomFromText('POINT (' + convert(varchar(15), X) + ' ' + convert(varchar(15), Y) + ')',0),
    geometry::STPolyFromText('POLYGON ((' + convert(varchar(15), X - @range) + ' ' + convert(varchar(15), Y - @range) + ', '
    + convert(varchar(15), X + @range) + ' ' + convert(varchar(15), Y - @range) + ', '
    + convert(varchar(15), X + @range) + ' ' + convert(varchar(15), Y + @range) + ', '
    + convert(varchar(15), X - @range) + ' ' + convert(varchar(15), Y + @range) + ','
    + convert(varchar(15), X - @range) + ' ' + convert(varchar(15), Y - @range) + '))', 0)
    from (
    select
    (numberX.number * 40) + 4520 as X
    ,(numberY.number * 40) + 4520 as Y
    from numbers as numberX
    cross apply numbers as numberY
    where numberX.number < (1000 / 40)
    and numberY.number < (1000 / 40)) as positions
    cross apply numbers as users
    where users.number < 3
    CREATE SPATIAL INDEX [SlowQueryTable_position]
    ON [dbo].SlowQueryTable([position])
    USING GEOMETRY_GRID
    WITH (
    BOUNDING_BOX = ( 4500, 4500, 5500, 5500 ),
    GRIDS =(LEVEL_1 = HIGH,LEVEL_2 = HIGH,LEVEL_3 = HIGH,LEVEL_4 = HIGH),
    CELLS_PER_OBJECT = 64, PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    go
    ALTER INDEX [SlowQueryTable_position] ON [dbo].SlowQueryTable
    REBUILD;
    go
    CREATE SPATIAL INDEX [SlowQueryTable_box]
    ON [dbo].SlowQueryTable(box)
    USING GEOMETRY_GRID
    WITH ( BOUNDING_BOX = ( 4500, 4500, 5500, 5500 ) ,
    GRIDS =(LEVEL_1 = HIGH,LEVEL_2 = HIGH,LEVEL_3 = HIGH,LEVEL_4 = HIGH),
    CELLS_PER_OBJECT = 64, PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    go
    ALTER INDEX [SlowQueryTable_box] ON [dbo].SlowQueryTable
    REBUILD;
    go
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    -- this is finally the query. it takes about 6500 ms
    select SlowQueryTable.id
    into #t1
    from SlowQueryTable
    inner join dbo.SlowQueryTable as SlowQueryTableSeen
    on SlowQueryTable.[box].STContains(SlowQueryTableSeen.position) = 1
    --on SlowQueryTable.position.STDistance(SlowQueryTableSeen.position) < 5
    where SlowQueryTable.userId = 2
    drop table #t1
    drop table SlowQueryTable
    drop table numbers
    Using an explicit index hint does the job, but then the query gets slow if I change the where clause:
    select SlowQueryTable.id
    into #t1
    from SlowQueryTable
    with (index([SlowQueryTable_box]))
    inner join dbo.SlowQueryTable as SlowQueryTableSeen
    on SlowQueryTable.[box].STContains(SlowQueryTableSeen.position) = 1
    where SlowQueryTable.userId = 2
    leads to 600 ms, but changing the where clause to
    where SlowQueryTable.id = 100
    slows it down again, to 1200 ms. Filtering on id gets massively slowed down when using an index hint on the spatial index.
    Since the table in the live system will grow to 10000+ rows, and the query is called often by users, I badly need a more efficient query.
    Do I have to create different queries for each use case, some with index hints and some without?

    I've run your example and can confirm your results. There are a couple of things that I noticed, though.
    After looking at the query plans, it's not a matter of "with spatial index" vs. "without spatial index". You have two spatial indexes, one on each column (position and box). When you don't hint the "box" spatial index, the query uses the "position" spatial index. Because of what they are indexing (points vs. polygons), the "box" spatial index requires a lot more IO. With some (non-spatial) predicates the "box" spatial index gives better performance, with others the "position" one does. I've yet to figure out exactly why (short on time, I might get back to it in future), but you can examine the query plans and use the spatial index diagnostic procs (e.g. sp_help_spatial_geometry_index_xml), in addition to the diagnostics you're running, to see why, and whether you can find a better performing plan/index.
    Bear this in mind: given a choice of multiple spatial indexes, the SQL Server query optimizer is not able to choose (for the most part, IO etc. aside) which one is best, and there is usually only one choice of spatial query plan shape. If your query is more complex than the one in your example, you might benefit from breaking it in two: one query to filter out all the rows and predicates that don't use a spatial index, and one query that uses the spatial index on the subset. I've had good luck with this in other situations with complex queries involving spatial predicates. This method may not be applicable to a spatial query as simple as the one in your example, however.
    Hope this helps, Bob
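    To illustrate the split Bob describes, here is a minimal sketch against the example tables above (whether it actually wins here would need checking against the plans and IO statistics):
    select s.id, s.[box]
    into #candidates
    from SlowQueryTable as s
    where s.userId = 2;   -- non-spatial filter first; can use SlowQueryTable_UserIdKey
    select c.id
    from #candidates as c
    inner join dbo.SlowQueryTable as SlowQueryTableSeen
    on c.[box].STContains(SlowQueryTableSeen.position) = 1;   -- spatial work on the reduced set
    drop table #candidates;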

  • What index is suitable for a table with no unique columns and no primary key

    alpha   beta   gamma   col1   col2   col3
    100     1      -1      a      b      c
    100     1      -2      d      e      f
    101     1      -2      t      t      y
    102     2      1       j      k      l
    Sample data is above; the datatypes are below:
    alpha datatype - string
    beta datatype - integer
    gamma datatype - integer
    col1, col2, col3 are all string datatypes.
    Note: no single column is unique; we would be using alpha, beta, gamma together to uniquely identify a record. As you can see, my sample data is in a table which doesn't have an index. I would like to have an index created covering these columns (alpha, beta, gamma). I believe that creating a clustered index with these as covering columns will be better.
    What would you recommend the index type should be in this case? Say the data volume is 1 million records and we always use the alpha, beta, gamma columns when we filter or query records.
    What index is suitable for a table with no unique columns and no primary key?
    Mudassar

    Many thanks for your explanation.
    When I tried the query below on my heap table, SQL Server suggested creating a NONCLUSTERED INDEX on [alpha], INCLUDING the columns [beta], [gamma], [col1], [col2], [col3]:
    SELECT [alpha]
          ,[beta]
          ,[gamma]
          ,[col1]
          ,[col2]
          ,[col3]
      FROM [TEST].[dbo].[Test]
    where   [alpha]='10100'
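    Written out as DDL, that suggestion would look something like this (a sketch; the index name is my own):
    CREATE NONCLUSTERED INDEX IX_Test_alpha
    ON [TEST].[dbo].[Test] ([alpha])
    INCLUDE ([beta], [gamma], [col1], [col2], [col3]);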
    My question is: why did it suggest a nonclustered index rather than a clustered one?
    Mudassar

  • Import the table with 0 rows

    Hi
    I have a problem importing a dump that contains tables with 0 rows.
    When I exported from ORACLE 11.2 64-bit on SERVER 2008, I noticed that the log didn't confirm the tables with 0 rows.
    When I import into ORACLE 11.2 64-bit on another SERVER 2008, I get a lot of errors on these tables with 0 rows.
    In the log I see the same tables with at least 1 row, but none with 0 rows.
    I opened my dump in TEXTPAD and I can see that it contains the "CREATE ....." statements for these tables.
    I don't understand why this happens. I used a FULL dump by SYS; it didn't help.
    This is not the first time I have exported and imported dumps; there were no errors before.
    I'm using the "EXP" and "IMP" commands, and every other time it's been OK (if that's relevant).
    Why does this happen? Any solutions for this issue?
    Thanks

    I've found (I guess) the solution to this issue.
    Here are two links about the new feature responsible, which is called deferred segment creation.
    The reason for this behavior is the 11.2 new feature 'Deferred Segment Creation' - the creation of a table segment is deferred until the first row is inserted.
    As a result, empty tables are not listed in dba_segments and are not exported by the exp utility.
    http://www.nativeread.com/2010/04/09/11gr2-empty-tables-skipped-by-export-deferred-segment-creation/
    http://antognini.ch/2010/10/deferred-segment-creation-as-of-11-2-0-2/
    And this is what I've found in the official Oracle documentation:
    Beginning in Oracle Database 11g Release 2, when creating a non-partitioned heap-organized table in a locally managed tablespace, table segment creation is deferred until the first row is inserted. In addition, creation of segments is deferred for any LOB columns of the table, any indexes created implicitly as part of table creation, and any indexes subsequently explicitly created on the table.
    The advantages of this space allocation method are the following: a significant amount of disk space can be saved for applications that create hundreds or thousands of tables upon installation, many of which might never be populated; and application installation time is reduced. There is a small performance penalty when the first row is inserted, because the new segment must be created at that time.
    To enable deferred segment creation, compatibility must be set to '11.2.0' or higher. You can disable deferred segment creation by setting the initialization parameter DEFERRED_SEGMENT_CREATION to FALSE. The new clauses SEGMENT CREATION DEFERRED and SEGMENT CREATION IMMEDIATE are available for the CREATE TABLE statement. These clauses override the setting of the DEFERRED_SEGMENT_CREATION initialization parameter.
    Note that when you create a table with deferred segment creation (the default), the new table appears in the _TABLES views, but no entry for it appears in the _SEGMENTS views until you insert the first row. There is a new SEGMENT_CREATED column in _TABLES, _INDEXES, and _LOBS that can be used to verify deferred segment creation.
    Note:
    The original Export utility does not export any table that was created with deferred segment creation and has not had a segment created for it. The most common way for a segment to be created is to store a row in the table, though other operations such as ALTER TABLE ALLOCATE EXTENT will also create a segment. If a segment does exist for the table and the table is exported, the SEGMENT CREATION DEFERRED clause will not be included in the CREATE TABLE statement that is executed by the original Import utility.
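    In practice, two hedged workarounds follow from this (mytable is a placeholder): force a segment for an existing empty table so the original exp picks it up, or disable the feature for tables created from then on.
    ALTER TABLE mytable ALLOCATE EXTENT;
    ALTER SYSTEM SET deferred_segment_creation = FALSE;
    Alternatively, Data Pump (expdp/impdp) exports empty tables regardless of whether a segment exists.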

  • Filtering a table with 19699 rows

    Hello
    I am new to Xcelsius and I am trying to apply a filter component to a table with 19699 rows. It is supposed to show 45 different classifications in the filter, but it seems to be using only the first 500 rows to apply the filter. I changed the maximum number of rows in the preferences. What should I do?

    A component cannot handle 20,000 rows; you should filter these in the database by using queries when getting the data into the dashboard. If there is no database connection available, you can use pivot tables to create lists for the different dimensions and then use lookup formulas like MATCH and INDEX to display the data for your chart.

  • Editable table with multiple rows

    Hi All!
    We're trying to develop an application in VC 7.0. The application should read data from some R/3 tables (via standard and custom function modules), then display the data to the user and allow him/her to modify it (or add some data into table rows), and then save it back to R/3.
    What is the best way to do that?
    There's no problem with displaying data.
    But when I try to add something to the table (on the portal interface), I'm able to use only the first row of the table... Even if I fill all fields of that row, I'm not able to add data to the second row, etc.
    Second question: is it possible to display the output of several BAPIs in one table? For example, we have three BAPIs: one displays a user, the second displays that user's subordinates, and the third that user's manager. And we want one resulting table...
    And last: what is the best way to submit data from a table view in VC (portal) to an R/3 table? I understand that we should write a function module that puts the data into R/3. I'm asking about what should be done in VC itself. Some button, or action...
    Any help will be appreciated and rewarded :o)
    Regards,
    DK

    Here are some former postings:
    Editable table with multiple rows
    and
    Editable table with multiple rows
    Are you on the right SP-level?
    Can you also split up your posting: one question, one posting? This way you get faster answers, as people might just browse the headers.

  • How to know the list of tables with no rows inside a schema?

    with reference to
    http://www.ss64.com/orad/USER_TABLES.html
    how do I get the list of tables with no rows inside a schema?
    I tried this: select table_name from user_tables where num_rows=0;
    and found one table that is empty.
    So what's the query to return the list of tables in a schema which have no rows?
    thanks

    You can do that only if you have collected the stats properly. Otherwise it's going to show you wrong information.
    Check this out...
    SQL> drop table t
      2  /
    Table dropped.
    SQL> create table t
      2  as
      3  select level no from dual connect by level <=100
      4  /
    Table created.
    SQL> select table_name,num_rows from user_tables where table_name = 'T'
      2  /
    TABLE_NAME                       NUM_ROWS
    T
    SQL> begin
      2      dbms_stats.gather_table_stats(user,'T');
      3  end;
      4  /
    PL/SQL procedure successfully completed.
    SQL> select table_name,num_rows from user_tables where table_name = 'T'
      2  /
    TABLE_NAME                       NUM_ROWS
    T                                     100
    SQL>
    Thanks,
    Karthick.
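    If the statistics may be stale, a hedged alternative is to count live with dynamic SQL, stopping at the first row so empty tables are confirmed cheaply:
    set serveroutput on
    declare
      l_cnt number;
    begin
      for t in (select table_name from user_tables) loop
        execute immediate
          'select count(*) from "' || t.table_name || '" where rownum = 1'
          into l_cnt;
        if l_cnt = 0 then
          dbms_output.put_line(t.table_name);  -- no rows found
        end if;
      end loop;
    end;
    /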

  • TO DRAW A TABLE WITH MULTIPLE ROWS AND MULTIPLE COLUMNS IN FORM

    Hi,
       How do I draw a table with multiple rows and columns separated by lines in form printing?

    check this
    http://sap-img.com/ts003.htm
    Regards
    Prabhu

  • How to insert a table with variable rows in smart form

    Hi all,
    How to insert a table with variable rows in smart form?
    Any help would be appreciated.
    Regards,
    Mahesh.

    Hi,
    Right-click the mouse -> Create -> Table.
    If you want 5 columns, you need to declare 5 cells in one line type of the table.
    Click on Table -> Details, then do the following:
    define a line type (e.g. L1) and specify the widths of the columns (2mm, 3mm, etc.) for as many columns as you want.
    Then in the header/main area of the table, click Create Table Line and choose row type L1; 5 cells will appear automatically. In each cell create a text element and display the variable to be printed there.

  • How to create table with resizable row ?

    how to create table with resizable row ?

    I'd suggest you start here:
    http://java.sun.com/docs/books/tutorial/uiswing/components/table.html

  • Conflict resolution for a table with LOB column ...

    Hi,
    I was hoping for some guidance or advice on how to handle conflict resolution for a table with a LOB column.
    Basically, I had intended to handle the conflict resolution using the MAXIMUM prebuilt update conflict handler. I also store the 'update' transaction time in the same table and was planning to use this as the resolution column to resolve the conflict.
    I see, however, that these prebuilt conflict handlers do not support LOB columns. I assume therefore that I need to code a custom handler to do this for me. I'm not sure exactly what my custom handler needs to do, though! Any guidance or links to similar examples would be very much appreciated.

    Hi,
    I have been unable to make any progress on this issue. I have made use of prebuilt update handlers with no problems before, but I just don't know how to resolve these conflicts for LOB columns using custom handlers. I have some questions which I hope make sense and are relevant:
    1. Does an apply process detect update conflicts on LOB columns?
    2. If I need to create a custom update/error handler to resolve this, should I create a prebuilt update handler for the non-LOB columns in the table and then a separate one for the LOB columns, OR is it best just to code a single custom handler for ALL columns?
    3. In my custom handler, I assume I will need to use the resolution column to decide whether or not to resolve the conflict in favour of the LCR, but how do I compare the new value in the LCR with that in the destination database? I mean, how do I access the current value in the destination database from the custom handler?
    4. Finally, if I need to resolve in favour of the LCR, do I need to call something specific for LOB-related columns compared to non-LOB columns?
    Any help with these would be very much appreciated or even if someone can direct me to documentation or other links that would be good too.
    Thanks again.
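    For what it's worth, registering the prebuilt MAXIMUM handler for just the non-LOB columns (question 2) would look roughly like this - a sketch with placeholder schema, table and column names:
    declare
      cols dbms_utility.name_array;
    begin
      cols(1) := 'some_col';      -- non-LOB columns only
      cols(2) := 'update_time';   -- the resolution column must appear in the list
      dbms_apply_adm.set_update_conflict_handler(
        object_name       => 'myschema.mytable',
        method_name       => 'MAXIMUM',
        resolution_column => 'update_time',
        column_list       => cols);
    end;
    /
    The LOB columns themselves would still need the custom handler discussed above.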

  • How to construct a table with each row having a button to select

    Hi,
    Imagine I have 5 orders and I want to select one order to view or update on another page.
    How do I construct a table with each row having a button to select it?
    I'm having a hard time doing this. How do I call a backing-bean method from JavaScript?
    Thanks

    I'm trying to put a <h:commandButton> inside <h:dataTable> like this:
                    <h:dataTable id="books" value="#{BuscarLibros.booksSearched}" var="book">
                        <h:column>
                            <f:facet name="header">
                                <h:outputText value="#{messages.title}"/>
                            </f:facet>
                        </h:column>
                        <h:column>
                            <f:facet name="header">
                                <h:outputText value="#{messages.isbn}"/>
                            </f:facet>
                            <h:outputText value="#{book.isdn}"/>
                        </h:column>
                        <h:column>
                            <h:outputFormat value="#{book.fechaPublicacion}">
                                <f:convertDateTime pattern="dd/MM/yyyy"/>
                            </h:outputFormat>
                            <h:commandButton action="#{BuscarLibros.prepareUpdateBook}" value="Hello WORLD"/>
                        </h:column>
                    </h:dataTable>
    but it does not call the method.
    If the button is outside the table it calls the method, but inside it does not. It's strange, because Duke's Bookstore example from Sun Java EE is like that.
    Any clues?
