Corrupt table with cachedwithin bug

I'm suddenly getting this error in different queries when using the cachedwithin attribute:
java.lang.IllegalStateException
corrupt table
The query looks like this, but there are others:
<cfquery datasource="#request.dsn#" name="metals" cachedwithin=".125">
    select m.mid, m.metal
    from merch m
    where m.groupid in (select groupid from merch where mid = #val(url.MID)#)
</cfquery>
I have flushed the cache from the admin without success.
I have tried reformatting the query in the hope that it would purge the cache, but the new cache does the same.
This started happening after I uninstalled the ColdFusion beta and installed the public trial last night.
Any ideas?
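
For reference, a decimal cachedwithin value is a fraction of a day, so .125 caches the result for three hours. A sketch of the same query with an explicit timespan (createTimeSpan is standard CFML; note that older ColdFusion releases do not allow cfqueryparam together with cachedwithin, so the val() guard is kept here):

    <cfquery datasource="#request.dsn#" name="metals"
             cachedwithin="#createTimeSpan(0, 3, 0, 0)#">
        <!--- createTimeSpan(days, hours, minutes, seconds): 3 hours = 0.125 day --->
        select m.mid, m.metal
        from merch m
        where m.groupid in (select groupid from merch where mid = #val(url.MID)#)
    </cfquery>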

It started doing it again 4 hours after a restart, under light load: corrupt table null.
Here is the stack trace:
java.lang.IllegalStateException: corrupt table
    at coldfusion.util.LruCache.reap(LruCache.java:214)
    at coldfusion.util.LruCache.get(LruCache.java:190)
    at coldfusion.sql.Executive.getCachedQuery(Executive.java:1262)
    at coldfusion.tagext.sql.QueryTag.setupCachedQuery(QueryTag.java:708)
    at coldfusion.tagext.sql.QueryTag.doEndTag(QueryTag.java:517)
    at cfsub_cat2ecfm302901843.runPage(C:\inetpub\ultradiamonds.com\sub_cat.cfm:101)
    at coldfusion.runtime.CfJspPage.invoke(CfJspPage.java:192)
    at coldfusion.tagext.lang.IncludeTag.doStartTag(IncludeTag.java:366)
    at coldfusion.filter.CfincludeFilter.invoke(CfincludeFilter.java:65)
    at coldfusion.filter.ApplicationFilter.invoke(ApplicationFilter.java:279)
    at coldfusion.filter.RequestMonitorFilter.invoke(RequestMonitorFilter.java:48)
    at coldfusion.filter.MonitoringFilter.invoke(MonitoringFilter.java:40)
    at coldfusion.filter.PathFilter.invoke(PathFilter.java:86)
    at coldfusion.filter.ExceptionFilter.invoke(ExceptionFilter.java:70)
    at coldfusion.filter.BrowserDebugFilter.invoke(BrowserDebugFilter.java:74)
    at coldfusion.filter.ClientScopePersistenceFilter.invoke(ClientScopePersistenceFilter.java:28)
    at coldfusion.filter.BrowserFilter.invoke(BrowserFilter.java:38)
    at coldfusion.filter.NoCacheFilter.invoke(NoCacheFilter.java:46)
    at coldfusion.filter.GlobalsFilter.invoke(GlobalsFilter.java:38)
    at coldfusion.filter.DatasourceFilter.invoke(DatasourceFilter.java:22)
    at coldfusion.CfmServlet.service(CfmServlet.java:175)
    at coldfusion.bootstrap.BootstrapServlet.service(BootstrapServlet.java:89)
    at jrun.servlet.FilterChain.doFilter(FilterChain.java:86)
    at coldfusion.monitor.event.MonitoringServletFilter.doFilter(MonitoringServletFilter.java:42)
    at coldfusion.bootstrap.BootstrapFilter.doFilter(BootstrapFilter.java:46)
    at jrun.servlet.FilterChain.doFilter(FilterChain.java:94)
    at jrun.servlet.FilterChain.service(FilterChain.java:101)
    at jrun.servlet.ServletInvoker.invoke(ServletInvoker.java:106)
    at jrun.servlet.JRunInvokerChain.invokeNext(JRunInvokerChain.java:42)
    at jrun.servlet.JRunRequestDispatcher.invoke(JRunRequestDispatcher.java:284)
    at jrun.servlet.ServletEngineService.dispatch(ServletEngineService.java:543)
    at jrun.servlet.jrpp.JRunProxyService.invokeRunnable(JRunProxyService.java:203)
    at jrunx.scheduler.ThreadPool$DownstreamMetrics.invokeRunnable(ThreadPool.java:320)
    at jrunx.scheduler.ThreadPool$ThreadThrottle.invokeRunnable(ThreadPool.java:428)
    at jrunx.scheduler.ThreadPool$UpstreamMetrics.invokeRunnable(ThreadPool.java:266)
    at jrunx.scheduler.WorkerThread.run(WorkerThread.java:66)
type: java.lang.IllegalStateException

Similar Messages

  • Ora-20001 when creating a form on table with report (bug?)

    Having some trouble creating a "Form on table with report".
    1) I pick my table
    2) take most of the defaults on the page where you pick the report type (interactive) and the page number (I changed it to 950). next->
    3) Do not use tabs. Next->
    4) Select all columns for the report. THEN (here's the problem) set an optional where clause of system_role_name like 'ODPSPOPUP%'. Next->
    5) choose standard edit link. next->
    6) Specify a page of 951 for the form (leave others defaults). next->
    7) Set the form primary key (defined in table). next->
    8) use existing trigger. next->
    9) choose all columns for the form. next->
    10) Leave actions to insert, update, delete. next->
    11) Get to the summary page and click Finish
    Then I get an error page saying:
    ORA-20001: Unable to create query and update page. ORA-20001: Unable to create query and update page. ORA-00933: SQL command not properly ended
    If I go back to step 4 and erase my where clause the wizard completes successfully.
    Also, if I change the report type in step 2 from the default of "Interactive" to "Classic", the wizard completes successfully. However, upon running the report I get a query parse error. It looks like the where clause in the report SQL is: system_role_name like ''ODPSPOPUP%'' (two single quotes on each side).
    It looks as if you cannot specify a where clause with a quoted string. The wizard is expecting a bind variable.
    Workaround(s):
    1) Don't specify a where clause when report type = Interactive in "create form on table with report" wizard.
    or
    2) Specify a bogus where clause using bind variable syntax such as "system_role_name like :BOGUSVARIABLE". Then edit the report query once the wizard finishes and change the where clause to the constant string you wanted to use in the wizard (e.g. "system_role_name like 'MYSYSTEM%'")
    Apex: 3.2.0.00.27
    Database: Oracle Database 11g Enterprise Edition 11.1.0.7.0 64bit Production (Oracle EL5)

    Andy,
    It's a bug, all right. Thanks for the detailed problem description. We'll fix it when we can.
    Scott

  • BUG: Apex 4.0.1 form on table with report. Not passing ID correctly

    I'm trying to create a form on a table with report using the wizard. I opted for a classic report page.
    The edit link on the report page does not work. The link looks like this:
    http://ngdwpc3:7777/pls/apex/f?p=101:15:1800428764039812::::P15_ID:#ID#
    This is clearly wrong because one would expect a value to be passed and not a column name.
    Looking at the generated report page I get 60 columns. This seems to be a result of the setting:
    unchecked: Use Query-Specific Column Names and Validate Query
    checked: Use Generic Column Names (parse query at runtime only)
    The ID column in the table is mapped to COL01.
    Changing the value being passed in the link columns to #COL01# results in the correct behaviour.
    My guess is that the option "Use Query-Specific Column Names and Validate Query" is the correct setting for the generated report.
    Rene

    Yes, I guess this is a bug.
    1. When you specify a WHERE clause in the wizard, it adds extra quotes when appending the WHERE to the report SELECT.
    I think that's the root cause; all the others just follow as a consequence.
    a. Because the query cannot be parsed (two single quotes), "Use Generic Column Names (parse query at runtime only)" is automatically set.
    b. This leads to COL01 to COL60.
    c. Which in turn leads to the link not working.
    I could easily recreate the scenario.
    To fix it I did the following:
    1. Removed the extra quotes from the query.
    2. Selected "Use Query-Specific Column Names and Validate Query" and saved.
    It started working correctly.
    I suggest you edit your thread name to add BUG at the beginning so that it gets spotted by Oracle folks on the forum.
    Regards

  • Table QBE-Filter BUG in combination with Application Module Pooling ?

    Hi,
    I use JDEVADF_11.1.1.1.0_GENERIC_090615.0017.5407, Java 1.6.0_14, ADF BC and ADF Faces.
    I have one view object and one page with a panelCollection and a table with optional filtering (created via drag-and-drop from the data control).
    When I disable application module pooling on the AM configuration and run the application, I can execute the query-by-example filter, but when I delete the filter criteria and press Enter (re-query to select all), the table shows the old data.
    Image for the setting on the application module:
    http://img265.imageshack.us/img265/1374/filterdoesnotwork1.png
    Image for executing the query and re-querying for "all rows" (deleted filter criteria):
    http://img140.imageshack.us/img140/1963/filterdoesnotwork2.png
    When I enable application module pooling, all works fine. Is this an issue?
    kind regards

    Hi Frank,
    if I filter for employees whose first name starts with "D" and then for employees with "F" --> I see "No data to display.".
    So unchecking "Enable Application Module Pooling" on the application module doesn't work with the QBE filter?
    I can send a test case, but it is so simple you can create it with JDeveloper in 2 minutes ;)
    Martin

  • Funny bug in SQLDev 2.1EA - tables with same name...

    Hi,
    I opened two sessions in SQLDev 2.1EA, on two distinct instances.
    These sessions are open against the same user, and I am clicking on a table with the same name.
    This happens only when the "pin" icon is not set.
    Apparently SQLDev does not distinguish between the two connections when it comes to displaying object attributes like columns, data, indexes and so on.
    So, practically, if I click on object 1 (instance 1), click on the data tab, then click on object 1 (instance 2), I still see the data for instance 1, no matter if I ask to refresh.
    This doesn't happen if I open each object in a separate tab.
    This happens with SQLDev on Mac OS X.
    Flavio
    http://oraclequirks.blogspot.com

    Sorry to drop in, but I suppose many have this setup: the same users/connections on both development and production databases.
    However, I'm not able to reproduce it now on my 10g DBs. I did see the same problem a couple of versions ago (when I was on 9i), but AFAIK that got fixed. Maybe a regression anyway?
    Regards,
    K.

  • SSIS 2012 is intermittently failing with below "Invalid date format" while importing data from a source table into a Destination table with same exact schema.

    We migrated packages from SSIS 2008 to 2012. The package is working fine in all environments except one.
    SSIS 2012 is intermittently failing with the error below while importing data from a source table into a destination table with the same exact schema.
    Error: 2014-01-28 15:52:05.19
       Code: 0x80004005
       Source: xxxxxxxx SSIS.Pipeline
       Description: Unspecified error
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC0202009
       Source: Process xxxxxx Load TableName [48]
       Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005.
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 11.0"  Hresult: 0x80004005  Description: "Invalid date format".
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC020901C
       Source: Process xxxxxxxx Load TableName [48]
       Description: There was an error with Load TableName.Inputs[OLE DB Destination Input].Columns[Updated] on Load TableName.Inputs[OLE DB Destination Input]. The column status returned was: "Conversion failed because the data value overflowed
    the specified type.".
    End Error
    But when we reorder the "Updated" column in the destination table, the package imports the data successfully.
    This looks like a bug to me. Any suggestions?

    Hi Mohideen,
    Based on my research, the issue might be related to one of the following factors:
    Memory pressure. Check whether there is a memory shortage when the issue occurs. In addition, if the package runs in 32-bit runtime on the specific server, use the 64-bit runtime instead.
    A known issue with SQL Server Native Client. As a workaround, use the .NET data provider instead of SNAC.
    Hope this helps.
    Regards,
    Mike Yin
    If you have any feedback on our support, please click
    here
    Mike Yin
    TechNet Community Support

  • Table with more than 35 columns

    Hello All.
    How can one work with a table with more than 35 columns in JDev 9.0.3.3?
    My other question is related to this.
    Setting entity beans' properties from a session bean brought up the error, but when setting them from inside the EJB, the bug does not show up.
    Is this right?
    Thank you

    Thank you all for reply.
    Here's my problem:
    I have an AS400/DB2 database, a huge and old one.
    There are many COBOL programs used to communicate with this DB.
    My project is to transfer the database with the same structure and the same contents to a Linux/ORACLE system.
    I will not remake the COBOL programs. I will use the existing ones on the Linux system.
    So the tables of the new DB should be the same as the old ones.
    That's why I cannot make a relational DB. I have to make an exact migration.
    Unfortunately I have some tables with more than 5000 COLUMNS.
    Now my question is:
    Can I modify the parameters of the ORACLE DB to make it accept tables and views with more than 1000 columns? If not, is it possible to make a PL/SQL function that simulates a table? This function would insert/update/select data from many other small tables (<1000 columns each). I mean a method that makes the ORACLE DB act as if it had a table with a huge number of columns.
    I know it's crazy, but any ideas please.
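
    Not from the thread, but a sketch of the usual workaround: split the record vertically into 1-to-1 sub-tables that share the key (all names here are invented). Note that Oracle's 1000-column limit applies to views as well as tables, so a 5000-column record still cannot be exposed as a single relation; the COBOL layer would have to address the sub-tables directly.

        -- Hypothetical vertical split of one wide record:
        CREATE TABLE wide_part1 (
            rec_id   NUMBER PRIMARY KEY,
            col_0001 VARCHAR2(50)
            -- ... up to roughly 999 more columns
        );
        CREATE TABLE wide_part2 (
            rec_id   NUMBER PRIMARY KEY REFERENCES wide_part1 (rec_id),
            col_1000 VARCHAR2(50)
            -- ... the next slice of columns
        );
        -- A 1-to-1 join view presents both slices together, but a view is
        -- itself capped at 1000 columns, like any Oracle table:
        CREATE VIEW wide_record AS
        SELECT p1.rec_id, p1.col_0001, p2.col_1000
        FROM   wide_part1 p1
        JOIN   wide_part2 p2 ON p2.rec_id = p1.rec_id;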

  • ORA-00942 error on truncating a table with a XML Index

    Oracle Version: 11.2.0.1.0
    The truncate command fails with the error "ORA-00942: table or view does not exist" when run against a table with an XML index defined:
    SQL> CREATE TABLE XML_TEST
    2 (
    3 ID INTEGER,
    4 TESTXML SYS.XMLTYPE
    5 );
    Table created.
    SQL> truncate table XML_TEST;
    Table truncated.
    SQL> CREATE INDEX xmlindex ON XML_TEST(TESTXML)
    2 indextype IS xdb.xmlindex
    3 parameters ('PATH TABLE MY_PATH_TABLE');
    Index created.
    SQL> truncate table XML_TEST;
    truncate table XML_TEST
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> Drop Index xmlindex;
    Index dropped.
    SQL> truncate table XML_TEST;
    Table truncated.

    No, I don't think that explanation is correct. I don't think it has to do with user privileges. Besides, we don't adjust rowids on an import -- we recreate the index, just like a b-tree index import would.
    This should be working. It's most likely a bug in our (i.e. Text) import code -- SYS.XMLTYPE is a little strange because under the covers it's actually a function-based index.
    I will test it out and file a bug if I can reproduce the behavior on Solaris.

  • Problem with update of BLOB field in a table with compound primary key

    Hi,
    I've been developing an application in Application Express 3.1.2.00.02 that includes processing of BLOB data in one of the tables (ZPRAVA). Unfortunately, I've come across strange behaviour when I tried to update the value in a BLOB field for an existing record via a DML form process. Inserting a new record including the BLOB value is OK (the binary file uploads upon submitting the form without any problems). I haven't changed the DML process in any way. The form update process used to work perfectly before I included the BLOB field. Since then, I keep getting this error when trying to update the BLOB field:
    ORA-20505: Error in DML: p_rowid=3, p_alt_rowid=ID, p_rowid2=CZ000001, p_alt_rowid2=PR_ID. ORA-01008: not all variables bound
    Unable to process row of table ZPRAVA.
    OK
    Some time ago, I created another application where I used a similar form that operated on a BLOB field without problems. The only, but maybe very important, difference between the two cases is that the first, successful one is based on a table with a standard one-column primary key, whereas the second (problematic) one uses a table with a compound (composite) two-column PK (two varchar2 fields: ID, PR_ID).
    In both cases, I've followed this tutorial: [http://www.oracle.com/technology/obe/apex/apex31nf/apex31blob.htm]).
    Can anybody confirm my suspicion that Automatic Row Processing (DML) can be used for updating BLOB fields within tables with only single-column primary keys?
    Thanks in advance.
    Zdenek
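
    One way to test that suspicion is to bypass Automatic Row Processing for the BLOB column and update it from a manual PL/SQL page process. A minimal sketch, assuming a file-browse item P10_FILE and a BLOB column PRILOHA (both hypothetical; ZPRAVA, ID and PR_ID are from the post, and apex_application_files is the APEX 3.x upload view):

        DECLARE
            l_blob BLOB;
        BEGIN
            -- fetch the just-uploaded file from the APEX upload view
            SELECT blob_content
              INTO l_blob
              FROM apex_application_files
             WHERE name = :P10_FILE;         -- hypothetical file-browse item

            UPDATE zprava
               SET priloha = l_blob          -- hypothetical BLOB column
             WHERE id    = :P10_ID
               AND pr_id = :P10_PR_ID;       -- the compound key from the post
        END;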

    Is there a chance that the fix will be included in the next patch?
    No, this fix will be in the next full version, 3.2.
    Scott

  • MS Access Web App: corrupted table, cannot open in Access anymore

    **tldr: How can I delete a corrupted table that prevents me from opening my Web App in Access?**
    I used the Access desktop client to create a new table with approx. 20 lookup fields. When I tried saving the table, I received an error message about too many indices. So I set the index option to "no" for all the lookup fields and tried to close the "edit table view". However, I was not able to close the edit table view anymore. After trying for a while, I used the task manager to terminate Access.
    Now, when I try to open my app in Access by clicking the "customize in Access" button on the web, I receive several error messages:
     1. Operation failed: the table xxx contains too many indices. Delete some indices and try again. (this error message appears about 5 times)
     2. Microsoft Access cannot create the table
     3. A problem occurred when trying to access a property or method of the OLE object.
    Next, I'm at the Access start screen. My application does not open.
    So, is there any other way I can delete the corrupted table without opening it through the Access client? Maybe by directly accessing the SQL server? The database is configured to allow read/write connections, because I connected to the tables from an Access desktop database, but I'm not sure if I can delete a table or fields that way. Any help is greatly appreciated!
    [I translated the error messages from German, so they might be slightly different in the English version]

    Not sure, but you may have too many indexes created; see the article below:
    Indexes on a Table
    Hope it's not corrupted, though. If so, I hope you have a backup in place.
    Try to Compact & Repair, and also check the article below:
    Recovering from Corruption
    Hope this helps,
    Daniel van den Berg | Washington, USA | "Anticipate the difficult by managing the easy"
    Please vote an answer helpful if it helped. Please mark an answer as the answer when your question has been answered.
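
    Not from the thread, but to make the "directly accessing the SQL server" idea concrete: if a read/write connection to the app's SQL database is possible, the corrupted table could be dropped there. A hedged T-SQL sketch (the schema and table names are hypothetical; list the tables first to find the right one):

        -- list the schemas/tables the web app actually created
        SELECT TABLE_SCHEMA, TABLE_NAME
        FROM   INFORMATION_SCHEMA.TABLES;

        -- then drop the corrupted table (name hypothetical)
        DROP TABLE [Access].[MyCorruptTable];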

  • Select count from large fact tables with bitmap indexes on them

    Hi..
    I have several large fact tables with bitmap indexes on them, and when I do a select count(*) from these tables, I get a different result than when I do a select count(*), column_one from the table group by column_one. I don't have any null values in these columns. Is there a patch or a one-off fix that can rectify this?
    Thx

    You may have corruption in the index if the queries ...
    Select /*+ full(t) */ count(*) from my_table t
    ... and ...
    Select /*+ index_combine(t my_index) */ count(*) from my_table t;
    ... give different results.
    Look on Metalink for patches; in the meantime, drop and recreate the indexes, or make them unusable and then rebuild them.
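
    The make-unusable-then-rebuild step from the last sentence looks like this (my_index is the placeholder name used above):

        -- take the suspect bitmap index out of use, then rebuild it from the table
        ALTER INDEX my_index UNUSABLE;
        ALTER INDEX my_index REBUILD;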

  • Joining two fact tables with different dimensions into single logical table

    Hi Gurus,
    Here is what I am trying to accomplish in Oracle Business Intelligence 11.1.1.3.0:
    F1 (D1, D2 and D3)
    F2 (D1 and D2 and D4)
    And we want to build a report F1 F2 D1 D2 D3 D4 to have data for:
    F1 that match only for D1-D2-D3
    and data for
    F2 that match only D1-D2-D4
    all that in one row, so D3 and D4 are not common dimensions.
    I can only do:
    F3 (D1, D2)
    F4 (D1, D2 and D4)
    And report
    F3 F4 D1,D2,D4 (all that in one row, and only D4 is not a common dimension)
    Here is the very good example how to accomplish the scenario 1
    http://108obiee.blogspot.com/2009/08/joining-two-fact-tables-with-different.html
    But looks like it does not work in 11.1.1.3.0
    I get
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 14025] No fact table exists at the requested level of detail: [,,Clients,,Day,ROI,,,,EW_Names,,,,,,,,,,,,,,,,,]. (HY000)
    I am sure I set everything up correctly as advised in the blog, but it only works with one non-common dimension.
    Is it a bug in 11.1.1.3.0, or something else?
    Thanks,
    Kate

    Thanks for all your replies.
    Actually, I've tried the solutions you guys mentioned. Generally speaking, the result should be displayed. However, my scenario is a little bit tricky.
    Table Y's figures are not the aggregation of table X for the D dimension. Instead, table Y's figures include not only the D dimension total, but also others (others do not mean the A, B, C dimensions). For example, table Y stores all food's figures, while table X stores only drink's figures. The D dimension is only about drink's details. In my scenario, other foods' figures are not provided.
    So, even if I set the D dimension to all/total for table X, table X's result is still not the same as table Y's.
    Indeed, table Y does not have a column key to join to the D dimension's key. So, if I select the D dimension and table Y's measures at the same time in BI Answers, the result returns no data. Hence, I can't compare table X and table Y's results with the D dimension selected.
    Is there any solution to this problem?
    Edited by: TomChan on Jun 3, 2009 9:36 AM

  • MaxDB: Table with many LONG fields does not allow an INSERT: ...?

    Hi,
    I have a table with many LONG fields (28). So far, everything works fine.
    However, if I add another LONG field (29 LONG fields), I cannot insert a dataset anymore.
    Is there a MaxDB parameter or anything else I can change to make inserts possible again?
    Thanks in advance
    Michael
    appendix:
    - Create and Insert command and error message
    - MaxDB version and its parameters
    Create and Insert command and error message
    CREATE TABLE "DBA"."AZ_Z_TEST02"
         "ZTB_ID"               Integer    NOT NULL,
         "ZTB_NAMEOFREPORT"           Char (400) ASCII DEFAULT '',
         "ZTB_LONG_COMMENT"                LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_00"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_01"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_02"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_03"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_04"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_05"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_06"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_07"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_08"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_09"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_10"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_11"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_12"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_13"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_14"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_15"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_16"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_17"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_18"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_19"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_20"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_21"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_22"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_23"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_24"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_25"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_26"         LONG ASCII DEFAULT '',
         PRIMARY KEY ("ZTB_ID")
    The insert command
    INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
    works fine. If I add the LONG field
    "ZTB_LONG_TEXTBLOCK_27"         LONG ASCII DEFAULT '',
    the following error occurs:
        Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
        General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
        INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
    MaxDB version and its parameters
    All db params given by
    dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
    are
    KERNELVERSION                         KERNEL    7.5.0    BUILD 026-123-094-430
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    RESTART_SHUTDOWN                      MANUAL
    SERVERDBFOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      INTERNAL
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         10
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   LOG_001
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   64000
    DATA_VOLUME_NAME_0001                 DAT_0001
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0001                 64000
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    BACKUP_BLOCK_CNT                      8
    LOG_MIRRORED                          NO
    MAXVOLUMES                            22
    MULTIO_BLOCK_CNT                    4
    DELAYLOGWRITER                      0
    LOG_IO_QUEUE                          50
    RESTARTTIME                         600
    MAXCPU                                1
    MAXUSERTASKS                          50
    TRANSRGNS                           8
    TABRGNS                             8
    OMSREGIONS                          0
    OMSRGNS                             25
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        1
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    ROWRGNS                             8
    MINSERVER_DESC                      16
    MAXSERVERTASKS                        20
    _MAXTRANS                             288
    MAXLOCKS                              2880
    LOCKSUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       900
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    USEASYNC_IO                         YES
    IOPROCSPER_DEV                      1
    IOPROCSFOR_PRIO                     1
    USEIOPROCS_ONLY                     NO
    IOPROCSSWITCH                       2
    LRU_FOR_SCAN                          NO
    PAGESIZE                            8192
    PACKETSIZE                          36864
    MINREPLYSIZE                        4096
    MBLOCKDATA_SIZE                     32768
    MBLOCKQUAL_SIZE                     16384
    MBLOCKSTACK_SIZE                    16384
    MBLOCKSTRAT_SIZE                    8192
    WORKSTACKSIZE                       16384
    WORKDATASIZE                        8192
    CATCACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      1632
    INIT_ALLOCATORSIZE                    229376
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    TASKCLUSTER01                       tw;al;ut;2000sv,100bup;10ev,10gc;
    TASKCLUSTER02                       ti,100dw;30000us;
    TASKCLUSTER03                       compress
    MPRGN_QUEUE                         YES
    MPRGN_DIRTY_READ                    NO
    MPRGN_BUSY_WAIT                     NO
    MPDISP_LOOPS                        1
    MPDISP_PRIO                         NO
    XP_MP_RGN_LOOP                        0
    MP_RGN_LOOP                           0
    MPRGN_PRIO                          NO
    MAXRGN_REQUEST                        300
    PRIOBASE_U2U                        100
    PRIOBASE_IOC                        80
    PRIOBASE_RAV                        80
    PRIOBASE_REX                        40
    PRIOBASE_COM                        10
    PRIOFACTOR                          80
    DELAYCOMMIT                         NO
    SVP1_CONV_FLUSH                     NO
    MAXGARBAGECOLL                      0
    MAXTASKSTACK                        1024
    MAX_SERVERTASK_STACK                  100
    MAX_SPECIALTASK_STACK                 100
    DWIO_AREA_SIZE                      50
    DWIO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    FBMLOW_IO_RATE                      10
    CACHE_SIZE                            10000
    DWLRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    DATACACHE_RGNS                      8
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              11
    SEQUENCE_CACHE                        1
    IDXFILELIST_SIZE                    2048
    SERVERDESC_CACHE                    73
    SERVERCMD_CACHE                     21
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    READAHEADBLOBS                      25
    RUNDIRECTORY                          E:\_mp\u_v_dbs\EVERW_C5
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        1
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        0
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       648
    EXTERNAL_DUMP_REQUEST                 NO
    AKDUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    UTILITYPROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    BACKUPHISTFILE                      dbm.knl
    BACKUPMED_DEF                       dbm.mdf
    MAXMESSAGE_FILES                    0
    EVENTALIVE_CYCLE                    0
    _SHAREDDYNDATA                        10280
    _SHAREDDYNPOOL                        3607
    USE_MEM_ENHANCE                       NO
    MEM_ENHANCE_LIMIT                     0
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-13 13:47:17
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
    DIAGSEM                             1
    SHOW_MAX_STACK_USE                    NO
    LOG_SEGMENT_SIZE                      21333
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    0
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_JOIN_OUTER                   YES
    JOIN_OPERATOR_IMPLEMENTATION          IMPROVED
    JOIN_TABLEBUFFER                      128
    OPTIMIZE_FETCH_REVERSE                YES
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             NO
    SHAREDSQL_EXPECTEDSTATEMENTCOUNT      1500
    SHAREDSQL_COMMANDCACHESIZE            32768
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    HASHED_RESULTSET                      NO
    HASHED_RESULTSET_CACHESIZE            262144
    AUTO_RECREATE_BAD_INDEXES             NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FORBID_LOAD_BALANCING                 NO

    >
    Lars Breddemann wrote:
    > Hi Michael,
    >
    > this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
    > Really.
    >
    > Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
    > Anyhow, when I use
    >
    > insert into "AZ_Z_TEST02"  values (87,'','','','','','','','','','','','','','','',''
    >                                           ,'','','','','','','','','','','','','','','','')
    >
    > it works fine.
    It solves my problem. Thanks a lot. -- I can hardly believe that this is all that's needed to get around the bug. That may be the reason why I did not give it a try.
    >
    > Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what values the new tuple will have), you may want to change your code to this.
    >
    > Now to the other errors:
    > - 28 Long values per row?
    > What the heck is wrong with the data design here?
    > Honestly, you can save data up to 2 GB in a BLOB/CLOB.
    > Currently, your data design allows 56 GB per row.
    > Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
    >
    > - The "ZTB_NAMEOFREPORT" looks like something the users see -
    > still there is no unique constraint preventing that you get 10000 of reports with the same name...
    You are right, this table looks a bit strange. The story behind it is: each Crystal Report in the application has a few text blocks which are the same for all the e.g. persons the e.g. letter is created for. In principle, the text blocks could be added directly to the Crystal Report. However, as is often the case, these text blocks may change once in a while. Thus, I put the texts of the text blocks into this "strange" DB table (one row for each report, one field for each text block; the name of the report is given by "ztb_nameofreport"). And the application offers a menu by which these text blocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
    (These texts would blow up the SQL select command of the Crystal Report very much if they were integrated into that select command. Thus it is done another way: the texts are read before the Crystal Report is loaded, then the texts are "given" to the Crystal Report via its parameters, and finally the Crystal Report is loaded.)
    >
    > - MaxDB 7.5 Build 26?? Where have you been the last few years?
    > Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
    > With 7.6. I was not able to reproduce your issue at all.
    The customer still has Win98 clients. MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
    All Win98 clients may be replaced by WinXP clients in the near future. Then, an upgrade may be reasonable.
    >
    > - Are you really putting your data into the DBA schema? Don't do that, ever.
    > DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
    > Create a user/schema for your application data and put your tables into that.
    >
    > KR Lars
    In the first MaxDB version I used, schemas were not available. I haven't changed it since. Is there an easy way to "move an existing table into a new schema"?
    Michael
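
    Not from the thread, but since MaxDB has no single statement to move a table between schemas, the usual route is copy-and-drop. A sketch under that assumption (MYAPP is a hypothetical application schema; verify that CREATE TABLE ... AS SELECT is supported on your 7.x build):

        -- copy structure and contents into the new schema
        CREATE TABLE myapp.az_z_test02 AS SELECT * FROM dba.az_z_test02;
        -- recreate the primary key, defaults and privileges on the copy,
        -- verify the data, then drop the original
        DROP TABLE dba.az_z_test02;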

  • Oracle 11g imp erroneously tries to recreate existing tables with CLOBs?

    I have a shell script for loading database dumps from both Datapump and the older exp/imp.
    Often when loading dumps, I need to rename the schema owner and tablespace names (which is handled by REMAP_SCHEMA and REMAP_TABLESPACE in Datapump).
    However I have a whole bunch of dumps created with exp at this point and not that many Datapump dumps yet. As such the old style dumps are handled by the shell script in this way:
    1) A first pass imp is run using INDEXFILE to generate a file with the SQL to create tables and indexes. Options also include FROMUSER and TOUSER.
    2) A series of sed command edit the SQL file to change the tablespace names (which are schema owner specific in our case).
    3) The edited SQL file is run with sqlplus to create the tables and indexes.
    4) A second pass imp is run to load the table rows as well as triggers, stored procedures, views, etc. Options include FROMUSER, TOUSER, COMMIT=Y, IGNORE=Y, BUFFER, STATISTICS=NONE, CONSTRAINTS=N
    This shell script has been working great for loading exp dump files into Oracle 9 and Oracle 10 databases, but now that I'm trying to load these dumps into Oracle 11, it fails.
    The problem is in step 4: the imp program tries to create some of the tables that were already created with sqlplus in step 3. The problematic tables all seem to have CLOB columns in them. The table creation fails because it tries to use the tablespace names from the dump file, which do not exist in the destination database. And when the table creation fails, imp then decides not to load the rows for those tables.
    This seems like a bug in the Oracle 11 imp program. I don't understand why it thinks it needs to recreate tables that already exist when those tables have CLOB columns. Is there something different about CLOB columns in Oracle 11 that I should know about that might be confusing imp into thinking that it needs to create tables when they already exist? Maybe I need to do something to those tables in SQL so that imp does not think it needs to recreate them?
    I know that the tables with the CLOBs were created correctly because I was trying to find some way to workaround this. For step 4, I tried using DATA_ONLY=Y, in which case imp does not try to create the tables and just loads the table rows. Of course using DATA_ONLY, I don't get a lot of other things like triggers, view and stored procedures. I started to try to get around that by doing 3 passes with imp, so that I could pick up the missing pieces by using an imp pass with ROWS=N, but strangely that has the same problem of trying to recreate the existing tables.

    The only solution I've found so far as a workaround is rather convoluted.
    1. I took an export using datapump's expdp of SCHEMA1 (in 10g it will skip the table with the xmltype).
    2. I imported the data to my empty schema (SCHEMA2) using impdp. To avoid the error that the type already exists with another OID, I used the TRANSFORM=oid:n parameter e.g.
    impdp user/pwd dumpfile=noxmltable.dmp logfile=importallbutxmltable.log remap_schema=SCHEMA1:SCHEMA2 TRANSFORM=oid:n directory=MYDUMPDIR
    3. I then manually created my xmltype table in the SCHEMA2 and did a select into to load it (make sure you have the select privileges to do so):
    INSERT INTO SCHEMA2.XMLTABLE2 SELECT * FROM SCHEMA1.XMLTABLE1;
    4. I am still taking an export with exp of the xmltable as well even though I'm not sure I can do anything with it.
    Thanks!
    Edited by: stacyz on Jul 28, 2009 9:49 AM

  • Interactive Report - aggregate with hide - bug or feature?

    We are in the process of converting existing reports to Interactive Reports (IR), and adding a bunch of new ones. We ran into a roadblock when using IR with aggregates. To reproduce, I created a test table with region, state, county, city and population columns. I created an interactive report with a CONTROL BREAK on "Region" and an aggregate (SUM) on population. The result looked like this:

    Result #1:

    Region: West
    ************
    State  County         City            Population
    CA     Orange County  Irvine                 100
    CA     Orange County  Orange                 200
    CA     Los Angeles    Hollywood              300
    CA     Los Angeles    Universal City         400
                          Sum                  1,000
    When I clicked on the City column and chose to hide it, I expected the IR to re-summarize the result to look like this:

    Result #2:

    Region: West
    ************
    State  County         Population
    CA     Orange County         300
    CA     Los Angeles           700
           Sum                 1,000
    But what I saw was this:

    Result #3:

    Region: West
    ************
    State  County         Population
    CA     Orange County         100
    CA     Orange County         200
    CA     Los Angeles           300
    CA     Los Angeles           400
           Sum                 1,000
    The data in the above Result #3 is not presented in the way it should be for the users to comprehend.
    In the non-interactive version, we display checkboxes for the users to show and hide the columns. If the column "City" is set by the user to "Hide", then we use a condition on the column to hide the City column from the report, and use a query of the form "SELECT ..., CASE WHEN :P10_SHOW_CITY = 'YES' THEN city ELSE NULL END AS city, SUM(population) FROM ..." to summarize the data. This displays results like the one shown in Result #2; the full query is sketched below.
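
    Spelled out against the test table from the script below, the re-summarizing query is a sketch like this (P10_SHOW_CITY is the page item mentioned above):

        SELECT region, state, county,
               CASE WHEN :P10_SHOW_CITY = 'YES' THEN city END AS city,
               SUM(population) AS population
        FROM   test_ir
        GROUP  BY region, state, county,
               CASE WHEN :P10_SHOW_CITY = 'YES' THEN city END;

    When the item is 'NO', the CASE yields NULL for every row, so the city rows collapse and the sums roll up per county, which is exactly Result #2.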
    1. Shouldn't IR re-summarize the data to display results like I have shown in Result #2 when a column is hidden? We are able to achieve this with the non-interactive version. Is this a bug in IR?
    2. Is there a way to get an interactive report to display results like the one I have shown in Result #2?
    3. Is there a way to retrieve the columns that are set to "HIDE" in IR so that we can modify our SQL to re-summarize?
    Script to reproduce:
    create table test_ir(
        region varchar2(50),
        state  varchar2(50),
        county varchar2(50),
        city   varchar2(50),
        population number(4));

    insert into test_ir(region, state, county, city, population) values
    ('West', 'CA', 'Orange County', 'Irvine', 100);
    insert into test_ir(region, state, county, city, population) values
    ('West', 'CA', 'Orange County', 'Orange', 200);
    insert into test_ir(region, state, county, city, population) values
    ('West', 'CA', 'Los Angeles', 'Hollywood', 300);
    insert into test_ir(region, state, county, city, population) values
    ('West', 'CA', 'Los Angeles', 'Universal City', 400);

    select region, state, county, city, population from test_ir;
    Thanks.
    Ravi

    Hi Ravi,
    When you add a control break, we do not re-generate the query to do a group by; we just do control breaks. In the current version of APEX, to achieve what you are trying to see, you need to create an interactive report with a group by clause. If you want to use a dynamic query, the solution I suggest will not work.
    The good news is that in APEX 4.0 we are working to include a group by view in Interactive Reports. This will let you select group by columns and aggregate functions to get the Result #2 you are trying to get.
    If you wish to see how this works, we will be able to show you if you come to APEXposed 2008, October 29 - 30, at the Chicago O'Hare Wyndham.
    Christina
