Synchronize Local WF Tables is reverting partitions to NOLOGGING

On ATG.H RUP 4 and 5, there is a bug where the daily Synchronize WF Local Tables concurrent request set reverts partitions of WF_LOCAL_ROLES to NOLOGGING. If you restore or clone a database to a point in time after this request set has run since the hot backup was taken, you will not be able to log in due to:
ORA-01578: Oracle data block corrupted (file #66, block #106574)
ORA-26040: Data block was loaded using the NOLOGGING option
ORA-06512: at "APPS.WF_DIRECTORY", line 649
The block corresponds to applsys.wf_local_roles.
Note 433280.1 indicates this is fixed in ATG.H RUP6.
Partitions in Workflow Local Tables are Automatically Switched to NOLOGGING
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=433280.1
RUNNING SYNCHRONIZE WF LOCAL TABLES CHANGES PARTITION TO NOLOGGING.
https://metalink.oracle.com/metalink/plsql/showdoc?db=Bug&id=5942254
During a Backup, WF_LOCAL_ROLES Is Showing Corrupt Block
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=369535.1      
I worked around it by truncating wf_local_roles in the target, exporting the table from the source, importing it into the target instance, and then running the Synchronize WF Local Tables request set:
Unable To Login After Clone
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=418130.1
We altered all of the wf_local_roles partitions back to LOGGING in the source instance yesterday, and one had already reverted to NOLOGGING today. Pretty annoying...
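As a stopgap until RUP6, the reverted partitions can be detected and switched back with a scheduled script. A rough sketch (assumes it is run as a DBA; the second statement is repeated once per partition returned, with the placeholder partition name filled in):

```sql
-- List WF_LOCAL_ROLES partitions that have reverted to NOLOGGING
SELECT table_name, partition_name, logging
  FROM dba_tab_partitions
 WHERE table_owner = 'APPLSYS'
   AND table_name  = 'WF_LOCAL_ROLES'
   AND logging     = 'NO';

-- For each partition returned, switch it back (partition_name is a placeholder)
ALTER TABLE applsys.wf_local_roles MODIFY PARTITION partition_name LOGGING;
```

Note this only protects blocks written after the switch; anything already loaded NOLOGGING remains unrecoverable from earlier backups, so take a fresh hot backup afterwards.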

Thanks, Michael, for sharing that with us.
Fadi

Similar Messages

  • Cl_gui_alv_grid: exporting protected table data MT_OUTTAB to a local internal table

    Hi guys,
    this is my problem.
    I have an instance of cl_gui_alv_grid in the reference variable gos_alv. This instance
    is populated by the SAP GOS service.
    Now, I'd like to save the internal data table of gos_alv locally (it is populated
    automatically by the service call).
    The data table is called "mt_outtab" and is a standard protected component of the cl_gui_alv_grid class
    (you can see it in transaction SE24).
    I thought of creating a subclass of cl_gui_alv_grid with a public method to
    read the data, like this:
    field-symbols: <outtab> type standard table.
    *       CLASS lcl_gui_alv_grid DEFINITION
    class lcl_gui_alv_grid definition inheriting from cl_gui_alv_grid.
      public section.
        methods : get_tab_line.
    endclass.                    "lcl_gui_alv_grid DEFINITION
    *       CLASS lcl_gui_alv_grid IMPLEMENTATION
    class lcl_gui_alv_grid implementation.
      method get_tab_line.
    * mt_outtab is the data table held as a protected attribute
    * in class cl_gui_alv_grid.
        assign me->mt_outtab->* to <outtab>. "Original data
      endmethod.                    "get_tab_line
    endclass.                    "lcl_gui_alv_grid IMPLEMENTATION
    data : l_alv type ref to lcl_gui_alv_grid.
    But when I do a downcast like this:
    l_alv ?= gos_alv.
    I get a casting-error exception. So, how can I extract the protected data into a
    local internal table?
    I've already tried to debug the service that builds the ALV list to understand how the data table
    is populated, but it's too complex.
    Thank you
    Andrea

    Hi,
    no suggestions yet?
    Generally speaking, I cannot understand why I cannot downcast an instance to a subclass I declared from its superclass.
    Many examples show that you can create a subclass inheriting from a superclass, add methods and attributes, do something like subclass ?= superclass, and then call methods of the subclass.
    But when I try to do the same with a class derived from cl_gui_alv_grid, I always get a casting-type exception on the downcast instruction.
    Could you explain why?
    Thank you very much
    Andrea

  • Aggregate tables have many partitions per request

    We are having some performance issues with aggregate tables and
    DB partitions. We are on BW 3.5 SP15 and use Oracle DB 9.2.0.6. After
    some analysis, we can see that for many of our aggregates, there are
    sometimes as many as a hundred partitions in the aggregate's fact table.
    If we look at the infocube itself, there are only a few requests (for
    example, 10). However, we do delete and reload requests
    frequently. We understood that there should only be one partition per
    request in the aggregate (the infocube is NOT set up for partitioning by
    anything other than request).
    We suspect the high number of partitions is causing some performance
    issues. But we don't understand why they are being created.
    I have even tried deleting the aggregate (all aggregate F tables and
    partitions were dropped) and reloading, and we still see many more
    partitions than requests. (We also notice that many of the partitions
    have a very low record count - many with fewer than 10 records.)
    We'd like to understand what is causing this. Could line-item
    dimensions or high cardinality play a role?
    On a related topic -
    We have also seen an awful lot of empty partitions in both the infocube
    fact table and the aggregate fact table. I understand this is probably
    caused by the frequent deletion and reloading of requests, but I am
    surprised that the system does not do a better job of cleaning up these
    empty partitions automatically. (We are aware of program
    SAP_DROP_EMPTY_FPARTITIONS.)
    I am including some files with screen shots and
    partition displays to help illustrate the issue.
    Any help would be appreciated.
    Brad Daniels
    302-275-1980
    215-592-2219

    Ideally the aggregates should get compressed by themselves - there could be some change runs that have affected the compression.
    Check the following:
    1. See if compressing the cube and rolling up the aggregates will merge the partitions.
    2. What is the delta mode for the aggregates (are you loading deltas for the aggregates, or full loads)?
    3. Aggregates are partitioned according to the infocube, and since you are partitioning by request, the same is being done on the aggregates.
    Select another partitioning characteristic if possible, because it is generally recommended that request not be used for partitioning.
    Arun
    Assign points if it helps.

  • Modify HUGE HASH partition table to RANGE partition and HASH subpartition

    I have a table with 130,000,000 rows hash partitioned as below
    ----RANGE PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)(
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE));
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR, LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
    INSERT INTO TEST_PART
    VALUES ('2006',NULL,'CM');
    COMMIT;
    Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR values. I think it will be easy if I create a range partition on the YRMO_NBR field and then make the current hash partition a sub-partition.
    How do I change the current partition of the table from HASH partition to RANGE partition and a sub-partition (HASH) without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSITE PARTITION -- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Please advise.
    Thanks in advance

    Sorry for the confusion in the first part, where I had given a RANGE partition instead of a HASH partition. Please read as follows:
    I have a table with 130,000,000 rows hash partitioned as below
    ----HASH PARTITION--
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY HASH (C_NBR)
    PARTITIONS 2
    STORE IN (PCRD_MBR_MR_02, PCRD_MBR_MR_01);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Data: -
    INSERT INTO TEST_PART
    VALUES ('2000',200001,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200009,'CM');
    INSERT INTO TEST_PART
    VALUES ('2000',200010,'CM');
    INSERT INTO TEST_PART
    VALUES ('2006',NULL,'CM');
    COMMIT;
    Now, I need to keep this table from growing by deleting records that fall between a specific range of YRMO_NBR values. I think it will be easy if I create a range partition on the YRMO_NBR field and then make the current hash partition a sub-partition.
    How do I change the current partition of the table from hash partition to range partition and a sub-partition (hash) without losing the data and existing indexes?
    The table after restructuring should look like the one below
    COMPOSITE PARTITION -- RANGE PARTITION & HASH SUBPARTITION --
    CREATE TABLE TEST_PART(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009) SUBPARTITIONS 2,
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010) SUBPARTITIONS 2,
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011) SUBPARTITIONS 2,
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE) SUBPARTITIONS 2);
    CREATE INDEX TEST_PART_IX_001 ON TEST_PART(C_NBR,LINE_ID);
    Please advise.
    Thanks in advance
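    One way to get from the hash layout to the composite layout without losing data is online redefinition with DBMS_REDEFINITION; below is a rough sketch using the names from the post ('SCOTT' stands in for the real table owner, and the ROWID option is used because TEST_PART has no primary key - check the redefinition restrictions for your release first):

    ```sql
    -- 1. Create an empty interim table with the target composite layout
    CREATE TABLE TEST_PART_INTERIM(
    C_NBR CHAR(12),
    YRMO_NBR NUMBER(6),
    LINE_ID CHAR(2))
    PARTITION BY RANGE (YRMO_NBR)
    SUBPARTITION BY HASH (C_NBR) SUBPARTITIONS 2 (
    PARTITION TEST_PART_200009 VALUES LESS THAN(200009),
    PARTITION TEST_PART_200010 VALUES LESS THAN(200010),
    PARTITION TEST_PART_200011 VALUES LESS THAN(200011),
    PARTITION TEST_PART_MAX VALUES LESS THAN(MAXVALUE));

    -- 2. Copy the data online, clone indexes and constraints, then swap definitions
    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TEST_PART',
                                        DBMS_REDEFINITION.CONS_USE_ROWID);
      DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_INTERIM',
                                          options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'TEST_PART', 'TEST_PART_INTERIM',
                                              num_errors => l_errors);
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TEST_PART', 'TEST_PART_INTERIM');
    END;
    /
    ```

    After FINISH_REDEF_TABLE the name TEST_PART points at the composite table, and TEST_PART_INTERIM (now holding the old hash layout) can be dropped.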

  • Will there performance improvement over separate tables vs single table with multiple partitions?

    Will there be a performance improvement using separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big partitioned table? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually carry semantics - read: data stored in one table means something different from the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looked at on the storage-technology level, table partitions are practically the same as tables. Each partition has its own delta store and can be loaded into and displaced from memory independently of the others.
    Generally speaking, there shouldn't be many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share of your total runtime (which is unlikely), then partitioned tables could have a negative performance impact.
    Having said this: as with all performance-related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars

  • Regarding 'FREE' ABAP Keyword in the local internal table

    Hi Experts,
    //sorry for my english ;(
    Our ABAP development leader forces us to use the FREE ABAP keyword on local internal tables, as in the situation below,
    and I really don't get it.
    -below-
    FORM GET_DATA.
      DATA: LT_ITAB  TYPE TABLE OF SFLIGHT,
            LT_TABLE TYPE TABLE OF ZTABLE.
      SELECT * FROM ZTABLE INTO TABLE LT_TABLE
            FOR ALL ENTRIES IN LT_ITAB
        WHERE KKEY = LT_ITAB-CARRID.
    "// WHY DO I HAVE TO USE THIS CODE
        FREE LT_ITAB.
    ENDFORM.
    I know that the GC (garbage collector) will release the memory area of LT_ITAB,
    but why do I have to release the LT_ITAB memory area directly with FREE?
    Is it to get the memory back sooner, before the GC is called?
    thanks.

    Guys, why don't you read ABAP help?
    From ABAP help about FREE:
    For internal tables, FREE has the same effect as the REFRESH statement, though the entire memory area occupied by the table rows is released, and the initial memory area remains unoccupied
    About REFRESH:
    This statement sets an internal table itab to its initial value, meaning that it deletes all rows of the internal table. The memory space required for the table is freed up to the initial memory size INITIAL SIZE. For itab, you must specify an internal table.
    To delete all rows and free the entire memory space occupied by rows, you can use the statement FREE.
    About INITIAL SIZE:
    After the optional addition INITIAL SIZE, you can specify a number of rows n as a numeric literal or numeric constant to adjust the size of the first block in the memory that is reserved by the system for an internal table of the table type. Without this addition, if the number 0 is entered, or if the value of n exceeds a maximum value, the system automatically allocates an appropriate memory area.
    To summarize:
    Using FREE allows you to immediately free the initial space reserved by the kernel for an internal table.
    I am too lazy to look it up, but I don't think it ever exceeds a few KB.
    regards,
      Yuri

  • Is it possible for a partitioned table to create a partition by itself?

    Hi,
    I migrated a table to a range-partitioned table (partitioned by year) on the production system.
    But then I thought: after the new year, must I add a new partition for 2011 again?
    For example,
    when a new record comes in for year 2011 and there is no partition for 2011, the table should create a new partition for 2011 by itself.
    Must I add a new partition myself every year? That is a time-consuming job.
    Yes, I know about MAXVALUE, but I don't want to use it. I want this to be done automatically.
    regards,

    Hi,
    I haven't tried EXECUTE IMMEDIATE. It doesn't matter, because I haven't found a way to generate the partition name from a variable automatically.
    The DB version is 10.2.0.1.
    The table script is:
    CREATE TABLE INVOICE_PART1 (
    ID NUMBER(10) NOT NULL,
    PREPARED DATE NOT NULL,
    TOTAL NUMBER,
    FINAL VARCHAR2(1 BYTE),
    NOTE VARCHAR2(240 BYTE),
    CREATED DATE NOT NULL,
    CREATOR VARCHAR2(8 BYTE) NOT NULL)
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0)
    LOGGING
    PARTITION BY RANGE (CREATED) (
    PARTITION INV08 VALUES LESS THAN (TO_DATE(' 2010-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT),
    PARTITION INV09 VALUES LESS THAN (TO_DATE(' 2010-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT),
    PARTITION INV10 VALUES LESS THAN (TO_DATE(' 2010-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT),
    PARTITION INV VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE PROD
    PCTUSED 40
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
    FREELISTS 1
    FREELIST GROUPS 1
    BUFFER_POOL DEFAULT))
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING
    ENABLE ROW MOVEMENT;
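    On 10g there is no built-in automatic partition creation; the usual workaround is a scheduled job that adds the next period's partition with dynamic SQL. Since INVOICE_PART1 has a MAXVALUE partition (INV), new months have to be carved out of it with SPLIT PARTITION rather than ADD PARTITION. A rough sketch (the partition-naming scheme is illustrative; schedule it monthly via DBMS_JOB or DBMS_SCHEDULER):

    ```sql
    -- Carve next month's partition out of the catch-all MAXVALUE partition INV
    DECLARE
      l_boundary DATE := ADD_MONTHS(TRUNC(SYSDATE, 'MM'), 1);  -- first day of next month
    BEGIN
      EXECUTE IMMEDIATE
        'ALTER TABLE invoice_part1 SPLIT PARTITION inv AT '
        || '(TO_DATE(''' || TO_CHAR(l_boundary, 'YYYY-MM-DD') || ''', ''YYYY-MM-DD''))'
        || ' INTO (PARTITION inv_' || TO_CHAR(SYSDATE, 'YYYYMM') || ', PARTITION inv)';
    END;
    /
    ```

    Rows with CREATED below the boundary land in the newly named partition; everything above stays in INV, ready for the next month's split. (From 11g onwards, interval partitioning does this automatically.)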

  • How to make a table to be partitioned automatically in 10.1.0.2.0 ?

    Hi
    I am using Oracle database 10.1.0.2.0.
    I created a table. I have done partition on Date column.
    Initially I created partitions for 2007 year.
    Now I should create partitions for the year 2008. So every year, I have to add partitions manually.
    Could anyone help me: how can I automate this process?
    Thank you,
    Regards,
    Gowtham sen.

    I don't know how much it would affect things.
    In my case, I have around 100 tables which are partitioned using a date column.
    I am loading a table with 5 crore (50 million) records on average per load.
    If I used a trigger, it would have to verify whether the value in the column is a new value or not.
    So I am thinking it would affect performance.
    Thank you,
    Regards,
    Gowtham Sen.

  • Create local temp table

    I need to create a local temp table and am looking for ColdFusion information; the cffile action's write can only write text files, and pictures require creating files differently.
    I would like to know whether ColdFusion supports creating local temp tables on session start.
    If yes, can we get the client temp directory and access the data using that temp directory, without using a data source on the ColdFusion server?
    Your help and information is greatly appreciated,
    Regards,
    Iccsi,

    Thanks for the information and help,
    I use a jQuery combobox to let the user type a selection from a drop-down box, but the table has more than 10,000 records, which causes a performance issue. I would like to load the data to the client machine so the user can access it locally, to resolve the performance issue with the jQuery combo box,
    Thanks again for helping,
    Regards,
    Iccsi,

  • Alter range partition table to Interval partitioning table.

    Hi DBAs,
    I have a very big range-partitioned table.
    Recently we upgraded our database to 11gR2, which has a feature called interval partitioning.
    Now I want to modify that existing range-partitioned table to interval partitioning.
    Can we alter a range-partitioned table to an interval-partitioned table?
    I googled for the syntax but didn't find it; can anyone help me out on this?
    Thanks.

    If you ignore the "alter session set NLS_CALENDAR=PERSIAN;" during create/alter, everything else seems to work.
    When you set the "alter session..." during inserts, the rows get inserted into the correct partitions.
    The only thing is that when you look at HIGH_VALUE, you need to convert from the default GREGORIAN to PERSIAN.
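    For what it's worth, the syntax the original poster was after does exist in 11g: an existing range-partitioned table can be switched to interval partitioning in place with ALTER TABLE ... SET INTERVAL. A sketch, assuming a table range-partitioned on a DATE column (the table name is illustrative; note the table must not have a MAXVALUE partition - drop or split that one first):

    ```sql
    -- Switch an existing range-partitioned table to monthly interval partitioning;
    -- Oracle then creates new partitions automatically as rows arrive.
    ALTER TABLE my_range_tab SET INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'));

    -- To go back to plain range partitioning:
    ALTER TABLE my_range_tab SET INTERVAL ();
    ```

    Existing partitions are kept as-is; only values above the current highest boundary get automatically created interval partitions.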

  • Table Compression on Partitions

    Hi,
    Can anyone help with how to implement table compression on partitions?
    Thanks in advance

    Here are two examples for you.
    Example 1. This table has two partitions. It has compression at the table level; one partition is compressed and one is not compressed.
    SQL>create table test_compress1
    (t_id number(10),
    tname varchar2(30)) partition by range (t_id)
    (partition p0 values less than (50) compress
    ,partition p1 values less than (100) nocompress)
    compress ;
    Example 2. This table has two partitions. It has no compression at the table level; both partitions are compressed.
    SQL>create table test_compress2
    (t_id number(10),
    tname varchar2(30)) partition by range (t_id)
    (partition p0 values less than (50) compress
    ,partition p1 values less than (100) compress);
    You can play with different options, but make sure you read about the limitations in your SQL Reference manual before using compression for either tables or partitions.
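    One caveat worth adding to the examples above: marking a partition COMPRESS only affects rows loaded afterwards via direct-path operations; rows already sitting in the partition stay uncompressed. To compress existing data, the partition has to be rebuilt, roughly like this (the index name is hypothetical):

    ```sql
    -- Rebuild an already-populated partition with compression
    ALTER TABLE test_compress1 MOVE PARTITION p1 COMPRESS;

    -- MOVE marks local index partitions UNUSABLE, so rebuild them afterwards
    ALTER INDEX test_compress1_idx REBUILD PARTITION p1;
    ```

    The MOVE takes the partition offline briefly and needs enough free space for a full copy of the partition.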

  • Postifx user unknown in local recipient table

    Good morning --
    My fetchmail job has been failing to get mail to my mailbox with this error (presumably from Postfix):
    SMTP error: 450 4.1.1 <username@localhost>: Recipient address rejected: User unknown in local recipient table
    (I replaced the actual user name with "username")
    I'm not sure what to make of this. "username" definitely exists -- I just su'd into his account and ran the fetchmail job that gave me the error.
    The problem goes away if I stop and restart postfix, but it seems to come back pretty consistently (I haven't had a chance to figure out the precise timing).
    Any suggestions?

    Thanks, Mihalis.
    I do not use su -l -- just plain old su. And echo $USER returns the correct user (that is, the one I su'd into).
    I don't think the problem is fetchmail. It's the same result whether I run it from the prompt ("fetchmail -v") or from the user's cron.
    The problem resolves temporarily if I restart postfix, but it returns within a few cycles (the cron job runs every three minutes).
    The error message repeats itself for each mail item that fetchmail parses. Here's the last bit of a fetchmail run's output:
    fetchmail: SMTP> RSET
    fetchmail: SMTP< 250 2.0.0 Ok
    fetchmail: not flushed
    fetchmail: POP3> LIST 12
    fetchmail: POP3< +OK 12 4337
    fetchmail: POP3> RETR 12
    fetchmail: POP3< +OK 4337 octets follow.
    fetchmail: reading message [email protected]@mail.XX.com:12 of 12 (4337 octets)
    fetchmail: SMTP> MAIL FROM:<[email protected]> SIZE=4337
    fetchmail: SMTP< 250 2.1.0 Ok
    fetchmail: SMTP> RCPT TO:<XX@localhost>
    fetchmail: SMTP< 450 4.1.1 <XX@localhost>: Recipient address rejected: User unknown in local recipient table
    fetchmail: SMTP error: 450 4.1.1 <XX@localhost>: Recipient address rejected: User unknown in local recipient table
    fetchmail: SMTP> RSET
    fetchmail: SMTP< 250 2.0.0 Ok
    ...fetchmail: not flushed
    fetchmail: POP3> QUIT
    fetchmail: POP3< +OK Bye-bye.
    fetchmail: SMTP> QUIT
    fetchmail: SMTP< 221 2.0.0 Bye
    fetchmail: 6.3.8 querying mail.XX.com (protocol POP3) at Mon, 21 Jan 2008 18:58:12 -0500 (EST): poll completed
    fetchmail: normal termination, status 0

  • Fetchmail, Postfix user unknown in local recipient table

    Hello all --
    My fetchmail job has been failing to get mail to my mailbox with this error:
    SMTP error: 450 4.1.1 <username@localhost>: Recipient address rejected: User unknown in local recipient table
    (I replaced my actual user name with "username")
    The problem goes away temporarily if I stop and restart postfix, but it comes back almost immediately.
    I'm having a hard time finding any clues in postfix's log. I'm not too sure what to look for, and it's pretty voluminous!
    Any suggestions?

    AlanNYC wrote:
    If I turn off local recipient checking, will I actually get my mail?
    Yes, all email properly addressed should be delivered to you without problems.
    The line only affects improperly addressed email, in this case allowing them to be accepted instead of rejected.
    Since you are running SpamAssassin and an IMAP server, I suggest also using the line
    luser_relay = [email protected]
    which will send all improperly addressed mail to the address specified by "[email protected]". This is what I meant by a "catch-all" address.
    If you find postfix giving you problems after adding the lines, simply delete them or comment them out by adding a hash mark to the front of the line, e.g.
    #local_recipient_maps =
    Alternatively, you can simply make no changes and allow the log messages to accumulate. The messages mean that postfix is doing its job by rejecting email addressed to users that don't exist. The above steps allow you to receive mail addressed to [email protected], where "anything" is any string allowed in an email address.
    I assume you're testing your changes using a separate email account, but in case you're not: sign up for a free email account with any of a number of free email services (Gmail, Yahoo) and test your postfix install as you make changes using the free account.

  • Local variable table missing

    I have compiled my sources using IBM's Jikes compiler with the -g option (for debugging purposes). I am using Sun's JRE 1.2.2 and JPDA 1.0.
    When I run the debugger using JPDA and try to set a watch for local variables I get an error saying that the local variable table is missing. And it asks me to compile the source with the debug option (which I've already done but with the jikes compiler)
    Any ideas what is going wrong here?
    Thanks,
    Rahul.

    The simplest/quickest solution would be to recompile with Sun's javac before you run it through Sun's JRE.

  • Debugger issue - local variable table missing

    I have compiled my sources using IBM's Jikes compiler with the -g option (for debugging purposes). I am using Sun's JRE 1.2.2 and JPDA 1.0.
    When I run the debugger using JPDA and try to set a watch for local variables I get an error saying that the local variable table is missing. And it asks me to compile the source with the debug option (which I've already done but with the jikes compiler)
    Any ideas what is going wrong here?
    Thanks,
    Rahul.

    How about using javac?
