Load cache group with parallel error, 907

Hello Chris,
We have hit another problem. When we create a cache group and then load the data with PARALLEL 8, we get a unique constraint violation. We checked the data but could not find anything wrong with it, so we loaded the data again without the PARALLEL parameter and it worked fine; all the data loaded in. Then we unloaded and loaded with PARALLEL 8 again, and the unique constraint violation appeared again. What happened?
Thank you.
The script is:
create readonly cache group FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE
autorefresh
mode incremental
interval 29000 milliseconds
/* state on */
from
FSZW_OCECS.SP_SUBSCRIBER_RELATION (
SUBS_RELATION_ID TT_BIGINT NOT NULL,
PRIVID VARCHAR2(32 BYTE) INLINE NOT NULL,
SUBSID TT_BIGINT,
SWITCH_FLAG VARCHAR2(2 BYTE) INLINE,
DISCOUNT_CODE VARCHAR2(8 BYTE) INLINE NOT NULL,
DISCOUNT_SERIAL TT_INTEGER,
START_DATE DATE NOT NULL,
END_DATE DATE,
MOBILENO VARCHAR2(15 BYTE) INLINE NOT NULL,
APPLY_DATE DATE,
primary key (SUBS_RELATION_ID))
where NODEID = '334' or NODEID IS NULL,
FSZW_OCECS.SP_SUBSCRIBER_ATTRINFO (
SUB_ATTACH_ID TT_BIGINT NOT NULL,
SUBS_RELATION_ID TT_BIGINT,
SUB_ATTACH_INFO VARCHAR2(16 BYTE) INLINE NOT NULL,
SUB_ATTACH_TYPE VARCHAR2(2 BYTE) INLINE,
primary key (SUB_ATTACH_ID),
foreign key (SUBS_RELATION_ID)
references FSZW_OCECS.SP_SUBSCRIBER_RELATION (SUBS_RELATION_ID));
Command> load cache group SP_SUBSCRIBER_RELATION_CACHE commit every 25600 rows PARALLEL 8;
5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<907>, error_message: [TimesTen]TT0907: Unique constraint (SP_SUBSCRIBER_ATTRINFO) violated at Rowid <0x0000000091341e88>
5037: An error occurred while loading FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE:Load failed ([TimesTen]TT0907: Unique constraint (SP_SUBSCRIBER_ATTRINFO) violated at Rowid <0x0000000091341e88>
Command> load cache group FSZW_OCECS.SP_SUBSCRIBER_RELATION_CACHE commit every 25600 rows;
5746074 cache instances affected.

This looks like a bug to me but I haven't been able to find a known candidate. Are you able to log an SR and provide a testcase so we can reproduce it here and verify if it is a new bug? Thanks.

Similar Messages

  • Error loading Cache group but Cache group created without error

    Hi
    I have created a cache group, but when I load that cache group I get the following error:
    Command> load cache group SecondCache commit every 1 rows;
    5056: The cache operation fails: error_type=<Oracle Error>, error_code=<972>, error_message:ORA-00972: identifier is too long
    5037: An error occurred while load TESTUSER.SECONDCACHE:Load failed (ORA-00972: identifier too long)
    The command failed.
    Please help.
    Looking forward to your reply.
    /Ahmad

    Hi Chris!
    Thanks for the quick response. I have solved my problem to some extent but want to share.
    Actually I had a column named #, which is also the primary key. When I changed that column name from # to some other name made of ordinary characters, the cache group loaded successfully.
    Is there any way in TimesTen to load columns named #?
    I read in the TimesTen documentation that it allows column names such as #, which is presumably why it can create the cache group, but the load fails and I do not know the reason.
    The code for creating cache group is as follows:
    create cache group MEASCache from testuser."MEAS"(
    "UPDATED" number not null,
    "UNOCCUPIEDRECORD" number not null,
    "VALUECURRENT" number not null,
    "EQSFREF" number not null,
    "IMPLEMENTED" number not null,
    "FORMAT" number not null,
    "#" number not null,
    primary key("#"))
    When I change the # column name to something like IDENTITY it works fine.
    /Ahmad
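    For reference, a minimal sketch of the rename workaround Ahmad describes, assuming the Oracle base table can be altered and using a hypothetical replacement name MEAS_ID:
    -- In Oracle (9i and later): rename the problem column
    ALTER TABLE testuser."MEAS" RENAME COLUMN "#" TO MEAS_ID;
    -- Then recreate the cache group against the renamed column
    create cache group MEASCache from testuser."MEAS"(
    "UPDATED" number not null,
    "UNOCCUPIEDRECORD" number not null,
    "VALUECURRENT" number not null,
    "EQSFREF" number not null,
    "IMPLEMENTED" number not null,
    "FORMAT" number not null,
    MEAS_ID number not null,
    primary key(MEAS_ID));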

  • How to load cache group?

    Dear ChrisJenkins,
    My project uses TimesTen. There is a table (in a read-only cache group) in TimesTen.
    ex :
    create table A as (id number, content varchar(20));
    insert into A values (1, 'a1');
    insert into A values (2, 'a2');
    insert into A values (n, 'an');
    commit;
    The table (A) has loaded 10 rows ('a1' --> 'a10'). If I execute the following SQL:
    "Load cache group A where id >=2 and id <=11"
    how will TimesTen execute it?
    My guess is:
    TimesTen does not load rows (id=2-->10) because they are already in memory;
    TimesTen only loads the row (id=11) because it is not in memory.
    Is that true?
    Thanks, rgds
    TuanTA

    In your example you are using a regular table not a readonly cache group table. If you are using a readonly cache group then the table would be created like this:
    CREATE READONLY CACHE GROUP CG_A
    AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS STATE PAUSED
    FROM
    ORACLEOWNER.A ( ID NUMBER, CONTENT VARCHAR(20));
    This assumes that the table ORACLEOWNER.A already exists in Oracle with the same schema. The table in TimesTen will start off empty. Also, you cannot insert, delete or update the rows in this table directly in TimesTen (that is why it is called a READONLY cache group); if you try you will get an error. All data for this table has to originate in Oracle. Let's say that in Oracle you now do the following:
    insert into A values (1, 'a1');
    insert into A values (2, 'a2');
    insert into A values (10, 'a10');
    commit;
    Still the table in TimesTen is empty. We can load the table with the data from Oracle using:
    LOAD CACHE GROUP CG_A COMMIT EVERY 256 ROWS;
    Now the table in TimesTen has the same rows as the table in Oracle. Also, the LOAD operation changes the AUTOREFRESH state from PAUSED to ON. You still cannot directly insert/update/delete on this table in TimesTen, but any data changes arising from DML executed on the Oracle table will be captured and propagated to TimesTen by the AUTOREFRESH mechanism. If you now did, in Oracle:
    UPDATE A SET CONTENT = 'NEW' WHERE ID = 3;
    INSERT INTO A VALUES (11, 'a11');
    COMMIT;
    Then, after the next autorefresh cycle (every 10 seconds in this example), the table in TimesTen would contain:
    1, 'a1'
    2, 'a2'
    3, 'NEW'
    4, 'a4'
    5, 'a5'
    6, 'a6'
    7, 'a7'
    8, 'a8'
    9, 'a9'
    10, 'a10'
    11, 'a11'
    So, your question does not apply to READONLY cache groups...
    If you used a USERMANAGED cache group then your question could apply (as long as the cache group was not using AUTOREFRESH and the table had not been marked READONLY). In that case a LOAD CACHE GROUP command will only load qualifying rows that do not already exist in the cache table in TimesTen. If rows with the same primary key exist in Oracle they are not loaded, even if the other columns have different values to those in TimesTen. Contrast this with REFRESH CACHE GROUP, which will replace all matching rows in TimesTen with the rows from Oracle.
    Chris
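    A minimal sketch of that contrast (hypothetical USERMANAGED cache group named CG_UM, with no AUTOREFRESH):
    LOAD CACHE GROUP CG_UM COMMIT EVERY 256 ROWS;    -- loads only qualifying rows whose primary key is not yet cached
    REFRESH CACHE GROUP CG_UM COMMIT EVERY 256 ROWS; -- replaces already-cached rows with the current Oracle values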

  • Cpu usage high when loading cache group

    Hi,
    What are the possible reasons for high CPU usage when loading a read-only cache group with a big root table (~1 million records)? I have tried setting Logging=0 (without the cache agent), 1 and 2, but it doesn't help. Is there any other tuning configuration required to avoid high CPU consumption?
    ttVersion: TimesTen Release 6.0.2 (32 bit Solaris)
    Any help would be highly appreciated. Thanks in advance.

    High CPU usage is not necessarily a problem as long as the CPU is being used to do useful work. In that case high CPU usage shows that things are being processed taking maximum advantage of the available CPU power. The single most common mistake is to not properly size the primary key hash index in TimesTen. Whenever you create a table with a PK in TimesTen (whether it is part of a cache group or just a standalone table) you must always specify the size of the PK hash index using the UNIQUE HASH ON (pk columns) PAGES = n clause (see the documentation). n should be set to the maximum number of rows expected in the table / 256. The default is sized for a table of just 4000 rows! If you try to load 1M rows into such a table we will be wasting a lot of CPU time serially scanning the (very long) hash chains in each bucket for every row inserted...
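    A minimal sketch (hypothetical table; syntax as in current TimesTen releases) of that sizing rule for a table expected to hold up to 1 million rows: 1,000,000 / 256 is about 3907, rounded up here to 4000:
    CREATE TABLE bigtab (
      id  NUMBER NOT NULL,
      val VARCHAR2(20),
      PRIMARY KEY (id))
    UNIQUE HASH ON (id) PAGES = 4000;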

  • AWT cache group with CacheAwtParallelism

    I have a question.
    TTversion : TimesTen Release 11.2.2.3.0 (64 bit Linux/x86_64) (tt112230:53376) 2012-05-24T09:20:08Z
    We are testing an AWT cache group (with CacheAwtParallelism=4).
    An application (a single process) generates DML against TimesTen (DSN=TEST).
    At this point, is the DML applied to Oracle through 4 parallel streams?
    [TEST]
    Driver=/home/TimesTen/tt112230/lib/libtten.so
    DataStore=/home/TimesTen/DataStore/TEST/test
    PermSize=1024
    TempSize=512
    PLSQL=1
    DatabaseCharacterSet=KO16MSWIN949
    ConnectionCharacterSet=KO16MSWIN949
    OracleNetServiceName=ORACLE
    OraclePWD=tiger
    CachegridEnable=0
    LogBufMB=512
    LogFileSize=1024
    RecoveryThreads=8
    LogBufParallelism=8
    CacheAwtParallelism=4
    ReplicationParallelism=4
    ReplicationApplyOrdering=0
    UID=scott
    PWD=tiger
    Thank you very much.
    GooGyum

    Let me try and elaborate a little on 'parallel AWT' (and parallel replication). AWT uses the TimesTen replication infrastructure to capture changes made to AWT cached tables and propagate those changes to the Oracle DB. The replication infrastructure captures changes to tables by mining the TimesTen transaction (redo) logs. The replication/AWT capture/propagate/apply processing is completely decoupled from application transaction execution.
    In TimesTen releases earlier than 11.2.2, the replication infrastructure was completely single threaded in terms of capture/propagate/apply. This means that if you have a TimesTen datastore with several application processes, each with multiple threads, all executing DML against TimesTen, there is just a single replication thread capturing all these changes, propagating them to the target and applying them there. This was clearly a performance bottleneck in some situations. In 11.2.2 the replication infrastructure has been parallelised to improve performance. This is a very difficult task as we still need to guarantee 'correctness' in all scenarios. The implementation tracks both operation and commit order dependencies at the source (i.e. where the transactions are executed) and encodes this dependency information into the replication stream. Changes are captured, propagated and applied in parallel, and on the apply side the dependency information is used to ensure that non-dependent transactions can be applied in parallel (still subject to commit order enforcement) while dependent transactions are always applied in a serial fashion. So, depending on the actual workload you may see significant performance improvements using parallel replication / parallel AWT.
    Note that parallelism is applied between transactions; there is no parallelism for the operations within an individual transaction.
    In the case mentioned, CacheAwtParallelism=4, this means that up to 4 threads will be used to apply transactions in parallel to Oracle. The actual degree of parallelism obtained is subject to inter-transactional dependencies in the workload and adjusts dynamically in real-time.
    Chris

  • Import with ORACLE error 907 encountered

    Hi All,
    I did an export from one of our Oracle 9i databases (9.2.0.5.0). But when I imported into another 9i database (9.2.0.7.0) there was an error creating a table:
    IMP-00017: following statement failed with ORACLE error 907:
    " ALTER TABLE "T_DWH_PRODUCT" MODIFY ("PROCESS_LINE" DEFAULT 'n/a'"
    IMP-00003: ORACLE error 907 encountered
    ORA-00907: missing right parenthesis
    Does anyone have the same experience they can share? Thanks.
    Tarman.

    Hi,
    Can you check whether the export log file shows "Export terminated successfully without warnings" or not.
    Using the show=y parameter, get the CREATE script for the T_DWH_PRODUCT table.
    Create the table in the database and import the table again using ignore=y.
    Best Regards
    RajaBaskart

  • Load xml - xsl with OWB error "ORA-20011:....Start of root element expected

    Hi
    I need to load an XML document into a table in Oracle Warehouse Builder.
    I've followed the steps as given in the user guide and I get the following error:
    ORA-20011: Error occurred while loading source XML document into the target database object PURCHASE_ORDERS.
    Base exception: Start of root element expected.
    ORA-06512: at "DC_DWH.WB_XML_LOAD_F", line 12
    ORA-06512: at "DC_DWH.WB_XML_LOAD", line 4
    ORA-06512: at line 7
    The steps are:
    DECLARE
    CONTROL_INFO VARCHAR2(200);
    BEGIN
    CONTROL_INFO := '<OWBXMLRuntime> <XMLSource> <file>c:\xml_test\y.xml</file> </XMLSource> <targets> <target XSLFile="c:\xml_test\y.xsl" dateFormat="yyyy.MM.dd">PURCHASE_ORDERS</target> </targets></OWBXMLRuntime>';
    DC_DWH.Wb_Xml_Load ( CONTROL_INFO );
    COMMIT;
    END;
    where:
    ------------- y.xml ----------------
    <purchaseOrder>
    <id>103123-4</id>
    <orderDate>2000-10-20</orderDate>
    <shipTo country="US">
    <name>Alice Smith</name>
    <street>123 Maple Street</street>
    <city>Mill Valley</city>
    <state>CA</state>
    <zip>90952</zip>
    </shipTo>
    <comment>Hurry, my lawn is going wild!</comment>
    <items>
    <item>
    <partNum>872-AA</partNum>
    <productName>Lawnmower</productName>
    <quantity>1</quantity>
    <USPrice>148.95</USPrice>
    <comment>Confirm this is electric</comment>
    </item>
    <item>
    <partNum>845-ED</partNum>
    <productName>Baby Monitor</productName>
    <quantity>1</quantity>
    <USPrice>39.98</USPrice>
    <shipDate>1999-05-21</shipDate>
    </item>
    </items>
    </purchaseOrder>
    -----------------y.xsl -------------------
    <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:fo="http://www.w3.org/1999/XSL/Format" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:fn="http://www.w3.org/2004/07/xpath-functions" xmlns:xdt="http://www.w3.org/2004/07/xpath-datatypes">
    <SHIPTO_NAME>
    <xsl:value-of select="purchaseOrder/shipTo/name"/>
    </SHIPTO_NAME>
    <SHIPTO_STREET>
    <xsl:value-of select="purchaseOrder/shipTo/street"/>
    </SHIPTO_STREET>
    <SHIPTO_CITY>
    <xsl:value-of select="purchaseOrder/shipTo/city"/>
    </SHIPTO_CITY>
    </xsl:stylesheet>
    Any help is appreciated

    Hello,
    The error occurs because your XSL file has an incorrect structure. You have to transform the incoming XML into the so-called (by Oracle) canonical form, i.e.:
    <ROWSET>
    <ROW>
    <FIELD_1>value-for-field-1</FIELD_1>
    <FIELD_N>value-for-field-N</FIELD_N>
    </ROW>
    </ROWSET>
    So, assuming your table has three fields: SHIPTO_NAME, SHIPTO_STREET, SHIPTO_CITY, your XSL file should look like the following:
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
    <xsl:template match="/">
    <ROWSET>
    <xsl:apply-templates select="purchaseOrder"/>
    </ROWSET>
    </xsl:template>
    <xsl:template match="purchaseOrder">
    <ROW>
    <SHIPTO_NAME><xsl:value-of select="shipTo/name"/></SHIPTO_NAME>
    <SHIPTO_STREET><xsl:value-of select="shipTo/street"/></SHIPTO_STREET>
    <SHIPTO_CITY><xsl:value-of select="shipTo/city"/></SHIPTO_CITY>
    </ROW>
    </xsl:template>
    </xsl:stylesheet>
    Hope this will help.

  • Recovery loads then stops with an error message

    I bought an HP Pavillion A1600n. When I bought the PC the tech department made me a set of recovery discs before I left the store. Eventually I had a problem with the computer and needed to run restore. The restore discs did not work. I called HP and ordered a set directly from them. Cost me $35.00. Tried them and they didn't work. Gave up. Have been using my netbook for over a year (also an HP). I decided I was tired of using a small screen, so I went looking for a new computer. Husband suggested I try the recovery discs one more time. So I did. First I set everything back to factory defaults and put in the discs. Recovery started right up! Great, I thought, because I really LOVED this computer. It gets all the way to the end, restarts my computer, goes into the first setup screen, then gives me an error message.
    SW Build ID different from ID on Recovery Media. Please contact the Customer Support Center to order a replacement set of discs for your PC.
    I got on the HP website and looked through Recovery Troubleshooting; it said to make sure that the build number on the discs is EXACTLY the same as the build number of the PC. Well, it's off by one number. So I called HP tech support twice. They have no record of my order for the discs and told me that the difference in the Build ID numbers doesn't matter. What? According to setup and their website it does.
    Like I said, I LOVED this computer. Really frustrated that I get conflicting information and it's a perfectly good computer that just needs a restore. Now I guess I will have to replace it, and I'm thinking NOT with an HP, since online troubleshooting and call-in tech support disagree.

    Hello nbtrouble, you are correct about the Build ID numbers. The HP Recovery Restore disc set must have exactly the same build numbers as the ones listed on your system.
    There may be another way to restore your system without these HP Recovery Restore Disks, if the hard drive is still working properly and the HP Factory restore partition D: is still intact and uncorrupted.
    Here is a link that has some information on how to do this.
    Look in the Recovery during startup section, for the information on how to enter the HP Recovery Restore Utility during startup.
    Please click the White Kudos star on the left, to say thanks.
    Please mark Accept As Solution if it solves your problem.

  • IMDB Cache group load and long running transaction

    Hello,
    We are investigating the use of IMDB Cache to cache a number of large Oracle tables. When loading the cache I have noticed logs accumulating and I am not quite sure why this should be. I have a read-only cache group consisting of 3 tables with approximately 88 million rows, 74 million rows and 570 million rows in each table. To load the cache group I run the following -
    LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows -
    Command> call ttLogHolds ;
    < 0, 12161024, Long-Running Transaction      , 1.1310 >
    < 170, 30025728, Checkpoint                    , Entity.ds0 >
    < 315, 29945856, Checkpoint                    , Entity.ds1 >
    3 rows found.
    I read this as saying that logs from 0 to current must be kept for the long-running transaction. From what I can see, the long-running transaction is the cache group load. Is this expected? I was expecting the commits in the load cache group to allow the logs to be deleted. I am able to query the contents of the tables at various times during the load, so I can see that the commits are taking place.
    Thanks
    Mark

    Hello,
    I couldn't recall whether I had changed the Autocommit settings when I ran the load so I tried a couple more runs. From what I could see the value of autocommit did not influence how the logs were treated. For example -
    1. Autocommit left as the default -
    Connection successful: DSN=Entity;UID=cacheadm;DataStore=/prod100/oradata/ENTITY/Entity;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so;LogDir=/prod100/oradata/ENTITY;PermSize=66000;TempSize=2000;TypeMode=0;OracleNetServiceName=TRAQPP.world;
    (Default setting AutoCommit=1)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    Logholds shows a long running transaction -
    Command> call ttlogholds ;
    < 0, 11915264, Long-Running Transaction      , 1.79 >
    < 474, 29114368, Checkpoint                    , Entity.ds0 >
    < 540, 1968128, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    2011-01-19 14:10:03.135
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: timestenorad
    28427   0x16fd6910            7.26     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69211971680          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69211971680          TRAQDBA.AADNA
                                                       Command   69211971680          S     69211971680         
                                  8.10029  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.10582  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.10477  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.10332  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.10546  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.10261  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.10637  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.10669  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.10111  Active      Database  0x01312d0001312d00   IX    0                   
    Program File Name: ttIsqlCmd
    29317   0xde257d0             1.79     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211584104          SYS.TABLES
                                                       Command   69211584104          S     69211584104         
    11 outstanding transactions found
    And the commands were
    < 69211971680, 2048, 1, 1, 0, 0, 1392, CACHEADM                       , load cache group CACHEADM.ER_RO_CG commit every 1000 rows parallel 10 _tt_bulkFetch 4096 _tt_bulkInsert 1000 >
    < 69211584104, 2048, 1, 1, 0, 0, 1400, CACHEADM                       , LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 >
    Running the load again with autocommit off -
    Command> AutoCommit
    autocommit = 1 (ON)
    Command> AutoCommit 0
    Command> AutoCommit
    autocommit = 0 (OFF)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    Logholds shows a long running transaction
    Command>  call ttlogholds ;
    < 1081, 6617088, Long-Running Transaction      , 2.50157 >
    < 1622, 10377216, Checkpoint                    , Entity.ds0 >
    < 1668, 55009280, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    er.oracle$ ttXactAdmin entity                                             
    2011-01-20 07:23:54.125
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: ttIsqlCmd
    2368    0x12bb37d0            2.50157  Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211634216          SYS.TABLES
                                                       Command   69211634216          S     69211634216         
    Program File Name: timestenorad
    28427   0x2abb580af2a0        7.2358   Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69212120320          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69212120320          TRAQDBA.AADNA
                                                       Command   69212120320          S     69212120320         
                                  8.24870  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.26055  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.25659  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.25469  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.25694  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.25465  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.25841  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.26288  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.24924  Active      Database  0x01312d0001312d00   IX    0                   
    11 outstanding transactions found
    What I did notice was that TimesTen runs three queries against the Oracle server: the first to select from the parent table, the second to join the parent to the first child and the third to join the parent to the second child. ttLogHolds seems to show a long-running transaction once the second query starts. For example, I was monitoring the load of the parent table, checking ttLogHolds to watch for a long-running transaction. As shown below, a long-running transaction entry appeared around 09:01:41 -
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:37 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39278592, Checkpoint                    , Entity.ds1 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    2 rows found.
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:41 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39290880, Long-Running Transaction      , 2.50167 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    < 2929, 65347584, Checkpoint                    , Entity.ds1 >
    3 rows found.
    This roughly matches the time the query that selects the rows for the first child table started in Oracle
    traqdba@TRAQPP> select sm.sql_id,sql_exec_start,sql_fulltext
      2  from v$sql_monitor sm, v$sql s
      3  where sm.sql_id = 'd6fmfrymgs5dn'
      4  and sm.sql_id = s.sql_id ;
    SQL_ID        SQL_EXEC_START       SQL_FULLTEXT
    d6fmfrymgs5dn 20/JAN/2011 08:59:27 SELECT "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_
                                       MAP"."EVIDENCE_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."EVIDENCE_VALUE", "TRAQDBA"
                                       ."ENT_TO_EVIDENCE_MAP"."CREATED_DATE_TIME" FROM "TRAQDBA"."ENT_TO_EVIDENCE_MAP",
                                        "TRAQDBA"."AADNA" WHERE "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY" = "TRAQDBA
                                       "."AADNA"."ADR_ADDRESS_NAME_KEY"
    Elapsed: 00:00:00.00
    Thanks
    Mark

  • About cache group

    A program can insert data into a TimesTen table and run normally, but not when that table is in a cache group with Oracle.
    I have a weird problem: a program can insert data into a table in TimesTen when there is no cache group with Oracle.
    However, it cannot do this while it is connected to Oracle using a cache group. Any idea why this happens?
    error message:
    *** ERROR in tt_main.c, line 90:
    *** [TimesTen][TimesTen 7.0.3.0.0 ODBC Driver][TimesTen]TT5102: Cannot load backend library 'libclntsh.so' for Cache Connect.
    OS error message 'ld.so.1: test_C: ???: libclntsh.so: ????: ???????'. -- file "bdbOciFuncs.c", lineno 257,
    procedure "loadSharedLibrary()"
    *** ODBC Error/Warning = S1000, Additional Error/Warning = 5102

    I think I can exclude the above possibilities, as I have checked all the settings above.
    We can use SQL statements as input, and inserts and queries can be done at both ends.
    It is only the program that does not work. My "connection string" is the following:
    connstr=====DSN=UTEL7;UID=utel7;PWD=utel7;AutoCreate=0;OverWrite=0;Authenticate=1
    Maybe it is a wrong property, a permission, or a switch parameter? Please give some suggestions.
    Thank you very much.
    The create cache group command is:
    Create Asynchronous Writethrough Cache Group utel7_load
    From
    utel7.load(col0 binary_float, col1 binary_float ......
    My odbc.ini is the following:
    # Copyright (C) 1999, 2007, Oracle. All rights reserved.
    # The following are the default values for connection attributes.
    # In the Data Sources defined below, if the attribute is not explicitly
    # set in its entry, TimesTen 7.0 uses the defaults as
    # specified below. For more information on these connection attributes,
    # see the accompanying documentation.
    # Lines in this file beginning with # or ; are treated as comments.
    # In attribute=_value_ lines, the value consists of everything
    # after the = to the end of the line, with leading and trailing white
    # space removed.
    # Authenticate=1 (client/server only)
    # AutoCreate=1
    # CkptFrequency (if Logging == 1 then 600 else 0)
    # CkptLogVolume=0
    # CkptRate=0 (0 = rate not limited)
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # Connections=64
    # DatabaseCharacterSet (no default)
    # Diagnostics=1
    # DurableCommits=0
    # ForceConnect=0
    # GroupRestrict (none by default)
    # Isolation=1 (1 = read-committed)
    # LockLevel=0 (0 = row-level locking)
    # LockWait=10 (seconds)
    # Logging=1 (1 = write log to disk)
    # LogAutoTruncate=1
    # LogBuffSize=65536 (measured in KB)
    # LogDir (same as checkpoint directory by default)
    # LogFileSize=64 (measured in MB)
    # LogFlushMethod=0
    # LogPurge=1
    # MatchLogOpts=0
    # MemoryLock=0 (HP-UX, Linux, and Solaris platforms only)
    # NLS_LENGTH_SEMANTICS=BYTE
    # NLS_NCHAR_CONV_EXCP=0
    # NLS_SORT=BINARY
    # OverWrite=0
    # PermSize=2 (measured in MB; default is 2 on 32-bit, 4 on 64-bit)
    # PermWarnThreshold=90
    # Preallocate=0
    # PrivateCommands=0
    # PWD (no default)
    # PWDCrypt (no default)
    # RecoveryThreads=1
    # SQLQueryTimeout=0 (seconds)
    # Temporary=0 (data store is permanent by default)
    # TempSize (measured in MB; default is derived from PermSize,
    # but is always at least 6MB)
    # TempWarnThreshold=90
    # TypeMode=0 (0 = Oracle types)
    # UID (operating system user ID)
    # WaitForConnect=1
    # Oracle Loading Attributes
    # OracleID (no default)
    # OraclePWD (no default)
    # PassThrough=0 (0 = SQL not passed through to Oracle)
    # RACCallback=1
    # TransparentLoad=0 (0 = do not load data)
    # Client Connection Attributes
    # ConnectionCharacterSet (if DatabaseCharacterSet == TIMESTEN8
    # then TIMESTEN8 else US7ASCII)
    # ConnectionName (process argv[0])
    # PWD (no default)
    # PWDCrypt (no default)
    # TTC_Server (no default)
    # TTC_Server_DSN (no default)
    # TTC_Timeout=60
    # UID (operating system user ID)
    [ODBC Data Sources]
    TT_tt70=TimesTen 7.0 Driver
    TpcbData_tt70=TimesTen 7.0 Driver
    TptbmDataRepSrc_tt70=TimesTen 7.0 Driver
    TptbmDataRepDst_tt70=TimesTen 7.0 Driver
    TptbmData_tt70=TimesTen 7.0 Driver
    BulkInsData_tt70=TimesTen 7.0 Driver
    WiscData_tt70=TimesTen 7.0 Driver
    RunData_tt70=TimesTen 7.0 Driver
    CacheData_tt70=TimesTen 7.0 Driver
    Utel7=TimesTen 7.0 Driver
    TpcbDataCS_tt70=TimesTen 7.0 Client Driver
    TptbmDataCS_tt70=TimesTen 7.0 Client Driver
    BulkInsDataCS_tt70=TimesTen 7.0 Client Driver
    WiscDataCS_tt70=TimesTen 7.0 Client Driver
    RunDataCS_tt70=TimesTen 7.0 Client Driver
    # Instance-Specific System Data Store
    # A predefined instance-specific data store reserved for system use.
    # It provides a well-known data store for use when a connection
    # is required to execute commands.
    [TT_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/TT_tt70
    DatabaseCharacterSet=US7ASCII
    # Data source for TPCB
    # This data store is created on connect; if it doesn't already exist.
    # (AutoCreate=1 and Overwrite=0). For performance reasons, database-
    # level locking is used. However, logging is turned on. The initial
    # size is set to 16MB.
    [TpcbData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TpcbData
    DatabaseCharacterSet=US7ASCII
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Data source for TPTBM demo
    # This data store is created everytime the benchmark is run.
    # Overwrite should always be 0 for this benchmark. All other
    # attributes may be varied and performance under those conditions
    # evaluated. The initial size is set to 20MB and durable commits are
    # turned off.
    [TptbmData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmData
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Source data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the source data store.
    [TptbmDataRepSrc_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmDataRepSrc_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Destination data source for TPTBM demo in replication mode
    # This data store is created everytime the replication benchmark demo
    # is run. This datastore is set up for the destination data store.
    [TptbmDataRepDst_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/TptbmDataRepDst_tt70
    DatabaseCharacterSet=US7ASCII
    PermSize=20
    Overwrite=0
    Authenticate=0
    # Data source for BULKINSERT demo
    # This data store is created on connect; if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0).
    [BulkInsData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/BulkInsData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=32
    WaitForConnect=0
    Authenticate=0
    # Data source for WISCBM demo
    # This data store is created on connect if it doesn't already exist
    # (AutoCreate=1 and Overwrite=0). For performance reasons,
    # database-level locking is used. However, logging is turned on.
    [WiscData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/WiscData
    DatabaseCharacterSet=US7ASCII
    LockLevel=1
    PermSize=16
    WaitForConnect=0
    Authenticate=0
    # Default Data source for TTISQL demo and utility
    # Use default options.
    [RunData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/RunData
    DatabaseCharacterSet=US7ASCII
    Authenticate=0
    # Sample Data source for the xlaSimple demo
    # see manual for discussion of this demo
    [Sample_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/Sample
    DatabaseCharacterSet=US7ASCII
    TempSize=16
    PermSize=16
    Authenticate=0
    # Sample data source using OracleId.
    [CacheData_tt70]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/info/DemoDataStore/CacheData
    DatabaseCharacterSet=US7ASCII
    OracleId=MyData
    PermSize=16
    # New data source definitions can be added below. Here is my datastore!!!
    [Utel7]
    Driver=/oracle/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/oracle/timesten/TimesTen/tt70/tt70_data/utel7
    DatabaseCharacterSet=ZHS16GBK
    Uid=utel7
    Authenticate=0
    OracleID=db3
    OraclePWD=utel7
    PermSize=6000
    Connections=20
    #permsize*20%
    TempSize=400
    CkptFrequency=600
    CkptLogVolume=256
    LogBuffSize=256000
    LogFileSize=256

  • Drop cache group in timesten 11.2.1

    Hello,
    I am trying to drop an asynchronous cache group in TimesTen. I follow the steps below to do so:
    a) I use the connection string with the DSN, UID, PWD, OracleID, OraclePWD specified
    b) If replication policy is 'always', change it to 'manual'
    c) Stop replication
    d) Drop the AWT cache group (drop cache group cachegroupname;)
    e) Create the modified AWT
    f) Start replication
    g) Set replication policy back to 'always'
    After step (d), I get the following error:
    Command> drop cache group cachegroupname;
    5219: Temporary Oracle connection failure error in OCIServerAttach(): ORA-12541: TNS:no listener rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "servicename", uid = "inputuid", pwd is hidden, TNS_ADMIN = "/opt/TT/linux/info", ORACLE_HOME= "/opt/TT/linux/ttoracle_home/instantclient_11_1"
    5109: Cache Connect general error: BDB connection not open.
    The command failed.
    Command>
    Does the error suggest that Cache Connect has a problem? Should I restart the TimesTen daemon and try again? Please let me know what the real problem is.
    Let me know if you need information.
    Thanks,
    V

    The SQL*Plus problem is simply because you don't have all the correct directories listed in LD_LIBRARY_PATH. It's likely that your .profile (or equivalent) was setting those based on ORACLE_HOME, and if this is now unset that could be the problem. Check that LD_LIBRARY_PATH is set properly and this problem will go away.
    The character set issue is potentially more problematic. It is mandatory that the database character set used by TimesTen exactly matches that of the Oracle DB when TimesTen is being used as a cache. If the character sets truly are different then this is very serious and you need to rectify it, as many things will fail otherwise. You either need to switch the Oracle DB back to US7ASCII (this is probably a big job) or you need to change the TimesTen character set to WE8MSWIN1252.
    To accomplish the latter you would:
    1. Take a backup of the TT datastore using ttBackup (just for safety).
    2. For any non-cache tables (i.e. TT only tables), unload data to flat files using ttBulkCp -o ...
    3. Save the schema for the datastore using ttSchema.
    4. Stop cache and replication agents.
    5. Ensure datastore is unloaded from memory and then destroy the datastore (ttDestroy)
    6. Edit sys.odbc.ini to change Datastore character set.
    7. Connect to datastore as instance administrator (to create datastore). Create all necessary users and grant required privileges.
    8. Set the cache userid/password (call ttCacheUidPwdSet(...,...))
    9. Start the cache agent.
    10. Run the SQL script generated by ttSchema to re-create all database objects (tables and cache groups etc.)
    11. Re-populate all non-cache tables from the flat files using ttBulkCp -i
    12. Re-load all cache groups using LOAD CACHE GROUP ...
    13. Restart the replication agent.
    That's pretty much it (hopefully I have not missed out any vital step).
    Chris
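    For steps 8, 9 and 12, a minimal ttIsql sketch (hypothetical credentials and cache group name; ttCacheUidPwdSet and ttCacheStart are the standard built-in procedures):
    Command> call ttCacheUidPwdSet('cacheadm','cachepwd');
    Command> call ttCacheStart;
    Command> LOAD CACHE GROUP myuser.mycg COMMIT EVERY 256 ROWS;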

  • Load Cache and UnLoad Cache Problem

    Hi,
    I have inserted 150K rows in TimesTen and they have been replicated successfully from TimesTen to my Oracle DB. I checked the number of rows in TimesTen and Oracle; both show the same 150K rows.
    AT Oracle End
    The count starts from 2 for me, so with 153599 rows I get values 2 to 153600.
    SQL> Select Count(*) from oratt.test_rep;
    COUNT(*)
    153599
    SQL> Select Col108 from oratt.test_rep where Col108=153600;
    COL108
    153600
    SQL> Update oratt.test_rep set Col108=Col108+1 where Col108=153600;
    1 row updated.
    SQL> Select Col108 from oratt.test_rep where Col108=153600;
    no rows selected
    SQL> Select Col108 from oratt.test_rep where Col108=153601;
    COL108
    153601
    AT TimesTen End
    Command> UNLOAD CACHE GROUP CACHEADMIN.TESTCACHE;
    Command> LOAD CACHE GROUP CACHEADMIN.TESTCACHE COMMIT every 1000 Rows;
    153599 cache instances affected.
    Command> Select Col108 from oratt.test_rep where Col108=153600;
    < 153600 >
    1 row found.
    Command> Select Col108 from oratt.test_rep where Col108=153601;
    5213: Bad Oracle login error in OCISessionBegin(): ORA-01017: invalid username/password; logon denied rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "MYDB", uid = "Userid", pwd is hidden, TNS_ADMIN = "", ORACLE_HOME= ""
    5109: Cache Connect general error: BDB connection not open.
    0 rows found.
    The command failed.
    Command> cachegroups;
    Cache Group CACHEADMIN.TESTCACHE:
    Cache Group Type: Asynchronous Writethrough (Dynamic)
    Autorefresh: No
    Aging: LRU on
    Root Table: ORATT.TEST_REP
    Table Type: Propagate
    Why am I getting this error? I have updated my row in Oracle but it is not LOADED in TimesTen; the old value is still there in TimesTen.
    Thanks!

    This is a dynamic cache group, so when you run a dynamic load capable statement such as Select Col108 from oratt.test_rep where Col108=153600; (presumably Col108 is a key column?) then if there are no matching rows in TimesTen, TimesTen will attempt to go to Oracle to fetch the row(s). These rows will then be inserted into the TimesTen cache (for future access) as well as being returned to the application. The error occurs because your ttIsql session does not have the correct credentials for Oracle (maybe you omitted the OraclePWD= attribute when you connected with ttIsql?).
    If you do not want/need this dynamic load behaviour then you should create the cache group as a non-dynamic cache group.
    With regard to your question about bi-directional cache groups, no we do not support those. If you do change data in the Oracle table which is cached by executing DML against it directly in Oracle then those changes may get overwritten by later changes propagated from TimesTen. If your workload is partitioned so that different sets of rows are updated in Oracle versus TimesTen then that is okay of course. Any updates made in Oracle will not automatically be propagated to TimesTen. You can manually refresh the cache group to pick up any new data if you want to.
    Chris
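    For example, a minimal sketch (hypothetical DSN and credentials) of connecting in ttIsql with the Oracle password supplied, so that dynamic loads can reach Oracle:
    Command> connect "DSN=TEST;UID=oratt;PWD=ttpwd;OraclePWD=orapwd";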

  • Unloading a large cache group

    Hi,
    We have a read only cache group consisting of three tables. I am able to load this cache group in approximately 40 minutes using parallelism on the Oracle tables and on the load cache group statement. The cache group has just over 93 million rows. We have a requirement where we need to update a number of rows in one of the Oracle tables (approximately 6 million Oracle rows). The approach I had planned to take was -
    1. Alter the cache group to set the AUTOREFRESH state to OFF.
    2. Unload the cache group.
    3. Perform the update on the Oracle table
    4. Alter the cache group to set the AUTOREFRESH state to PAUSED.
    5. Load the cache group.
    I tested this in our pre-production environment, which has similar sizes to production, and I found the unload of the cache group took just under 4 hours to complete. While it was running I issued a number of ttXactAdmin commands against the datastore, and it seemed that most of the time the process had a TransStatus of "Committing". When I ran strace against the process I could see a lot of reading happening against the log files. Is this behaviour correct? i.e. should it take this long to unload a cache group? Is there a better way to perform a mass update like this on the Oracle base table?
    Thanks
    Mark

    Hi,
    With the current implementation of TimesTen, committing or rolling back very large transactions is very slow and results in a lot of disk I/O, as TimesTen works through all the log records for the transaction on disk in order to reclaim space (the reclaim phase of commit and rollback processing). The trick is to keep transactions relatively small (a few thousand rows at most). For 'smaller' transactions TimesTen does not need to go to disk and commit/rollback is much faster.
    The best way to unload a very large number of rows is to repeatedly execute the sequence:
    UNLOAD CACHE GROUP mycg WHERE rownum <= 10000;
    commit;
    in a loop until it indicates that no rows were unloaded. If you are using TimesTen 11.2.1 then this logic could easily be incorporated into a PL/SQL procedure for ease of use.
    Chris
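    A minimal PL/SQL sketch of that loop (hypothetical cache group name; assumes TimesTen 11.2.1 or later, that UNLOAD CACHE GROUP can be issued via EXECUTE IMMEDIATE, and that SQL%ROWCOUNT reports the affected cache instances):
    DECLARE
      n     PLS_INTEGER;
      iters PLS_INTEGER := 0;
    BEGIN
      LOOP
        -- Unload at most 10000 cache instances per transaction to keep commits cheap
        EXECUTE IMMEDIATE 'UNLOAD CACHE GROUP mycg WHERE rownum <= 10000';
        n := SQL%ROWCOUNT;  -- assumed to report the number of unloaded instances
        COMMIT;
        iters := iters + 1;
        EXIT WHEN n = 0 OR iters > 100000;  -- stop once a pass unloads nothing
      END LOOP;
    END;
    /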

  • Aggregate query on global cache group table

    Hi,
    I set up two global cache nodes. As we know, a global cache group is dynamic.
    As I understand it, the cache group can be dynamically loaded by primary key or foreign key.
    There are three records in the Oracle cached table; one record is loaded in node A and the other two records in node B.
    Oracle:
    1 Java
    2 C
    3 Python
    Node A:
    1 Java
    Node B:
    2 C
    3 Python
    If I select count(*) in Node A or Node B, the results are 1 and 2 respectively.
    The questions are:
    How can I get the real count of 3?
    Is it reasonable to run this query on a global cache group table?
    One idea is to create another read-only node for aggregation queries, but it seems weird.
    Thanks very much.
    Regards,
    Nesta

    Do you mean something like
    UPDATE sometable SET somecol = somevalue;
    where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
    This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated then you could simply execute multiple individual updates. If the number of rows is large or you do not know all the keys in advance then maybe you would adopt the approach of ensuring that all relevant rows are in the local cache grid node already via LOAD CACHE GROUP ... WHERE ... Alternatively, if you do not need Grid functionality you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
    I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
    Chris
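    For example, a minimal sketch (hypothetical cache group name and predicate) of the pre-load approach Chris mentions:
    LOAD CACHE GROUP mycg WHERE somecol = somevalue COMMIT EVERY 256 ROWS;
    -- the qualifying rows are now in the local grid node and can be updated there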

  • Suggestions required for Read-only cache group in timesten IMDB cache

    Hi
    In IMDB Cache, the underlying Oracle RAC database has two schemas ("KAEP" and "AAEP", with the same structure and the same object names), and we want to create a read-only cache group with an AS (active standby) pair in TimesTen.
    Schema                                              
        KAEP  
    Table  
        Abc1
        Abc2
        Abc3                                    
    Schema
        AAEP
    Table
        Abc1
        Abc2
        Abc3
    Can a read-only cache group be created using a UNION ALL query?
    The result set of the cache group should contain the records from both schemas; will that be possible in a TimesTen read-only cache group?
    Will there be any performance issue?

    You cannot create a cache group that uses UNION ALL. The only 'query' capability in a cache group definition is to use predicates in the WHERE clause, and these must be simple filter predicates on the tables in the cache group.
    Your best approach is to create separate cache groups for these tables in TimesTen and then define one or more VIEWS using UNION ALL in TimesTen in order to present the tables in the way that you want.
    Chris
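    A minimal sketch (hypothetical names, one table per schema) of that approach: one read-only cache group per schema, plus a TimesTen view that unions them:
    CREATE READONLY CACHE GROUP cg_kaep_abc1
    AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS STATE PAUSED
    FROM KAEP.ABC1 (ID NUMBER NOT NULL, VAL VARCHAR2(20), PRIMARY KEY (ID));
    CREATE READONLY CACHE GROUP cg_aaep_abc1
    AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS STATE PAUSED
    FROM AAEP.ABC1 (ID NUMBER NOT NULL, VAL VARCHAR2(20), PRIMARY KEY (ID));
    CREATE VIEW all_abc1 AS
      SELECT id, val FROM KAEP.ABC1
      UNION ALL
      SELECT id, val FROM AAEP.ABC1;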
