Cache groups and # symbol

Hi!
I have an Oracle table (user kvstr):
create table CHANGES (
CHNG_ID NUMBER(10) not null,
DAY DATE default SYSDATE,
VOLUME_# NUMBER(4,2),
NAVI_USER VARCHAR2(30) default USER,
NAVI_DATE DATE default SYSDATE
);
Then I create a TimesTen 7.0.1 (WIN32) cache group:
CREATE USERMANAGED CACHE GROUP "CGRP_CHANGES"
FROM
"KVSTR"."CHANGES" (
"CHNG_ID" NUMBER(10) NOT NULL,
"DAY" DATE,
"VOLUME_#" NUMBER(4,2),
PRIMARY KEY("CHNG_ID"),
READONLY
)
WHERE (TRUNC("KVSTR"."CHANGES"."DAY") = TRUNC(SYSDATE));
The cache group is created normally, but when I refresh the cache group an error occurs:
Command> REFRESH CACHE GROUP ttsys.cgrp_changes COMMIT EVERY 1000 ROWS;
5056: The cache operation fails: error_type=<Oracle Error>, error_code=<1740>, error_message: ORA-01740: missing double quote in identifier
5039: An error occurred while refreshing TTSYS.CGRP_CHANGES: Refresh failed (ORA-01740: missing double quote in identifier
The command failed.
What's wrong?

Hi!
I was also facing the same problem, but without the # the problem goes away. I guess TimesTen does not support # in column names, even though the docs say it is supported.
Regards
/Ahmad
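
For what it's worth, a minimal sketch of that workaround, assuming the VOLUME_# column can simply be left out of the cache group (the rest of the definition is taken from the original post):
CREATE USERMANAGED CACHE GROUP "CGRP_CHANGES"
FROM
"KVSTR"."CHANGES" (
"CHNG_ID" NUMBER(10) NOT NULL,
"DAY" DATE,
PRIMARY KEY("CHNG_ID"),
READONLY
)
WHERE (TRUNC("KVSTR"."CHANGES"."DAY") = TRUNC(SYSDATE));
If the column is needed in the cache, renaming it in Oracle to drop the # is the other obvious route.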

Similar Messages

  • Cache groups and table join

    Hi,
    Is there any limitation regarding an SQL query doing a JOIN of tables from multiple cache groups?
    Thanks

    No limitations. From a query/DML perspective, cache group tables are just like any other table.
    Chris
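
    As a quick illustration (table and column names here are hypothetical; the join is just ordinary SQL even though the tables belong to different cache groups):
    SELECT o.order_id, c.cust_name
    FROM oratt.orders o, oratt.customers c   -- cached via two different cache groups
    WHERE o.cust_id = c.cust_id;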

  • Please recommend solutions for  Cache Connect and ?

    ---> Solution I
    2 servers, each creating a Cache Connect to the RDBMS
    ---> Solution II
    1 server creates a Cache Connect to the RDBMS and forms an active standby pair with another server

    Hi,
    If you only need READONLY caching in TimesTen and all updates will be made in Oracle then you have two main options:
    Multiple READONLY Caches
    For this you have one or more separate TimesTen caches, each with a READONLY cache group defined against the Oracle DBMS. Each cache can cache different tables/data, or they can cache the same tables/data as required.
    This architecture is very flexible (adding or removing TimesTen servers is very simple) and very scalable. It also provides very good HA; if one cache is down, applications can just access a different cache.
    However, due to the asynchronous, time-based nature of the refresh from Oracle to TimesTen, at any moment in time the data in all the caches may not be 100% consistent with each other or with Oracle.
    By this I mean the following:
    - Assume that you have 2 (or more) READONLY caches caching the same data from Oracle, with an AUTOREFRESH interval of T1
    - At some time, T2, you update, in Oracle, one of the rows cached by the caches.
    - At some later time, T3, you query the updated row via both caches
    If (T3 - T2) < T1 then the values returned by your query may differ between the caches (depending on where exactly they are in the autorefresh interval when the update is done).
    Active/Standby pair using 2-SAFE replication with READONLY cache group and optional read-only subscribers
    With this architecture you define a TimesTen Active/Standby replicated pair using 2-safe replication and containing the READONLY cache group. 'Scale out' is accomplished in one of three ways:
    1. Adding further A/S pairs with a READONLY cache group
    2. Adding read-only subscriber datastores to the original A/S pair
    3. A mixture of (1) and (2)
    The main advantages of this architecture are as follows:
    1. When 2-Safe is used within the A/S pair, queries to either cache will always return consistent results (i.e. the consistency issue that I described for the first scenario does not exist in this configuration). However, there can still be inconsistencies in results between the A/S pair and any read-only subscribers (since the replication to them is asynchronous), but given the high performance of TimesTen replication the latency between a change appearing at the A/S pair and at the read-only subscribers will typically be a few ms rather than potentially several seconds for the multiple-cache scenario.
    2. The loading on the central Oracle DBMS arising from AUTOREFRESH processing is reduced compared to the multiple-cache scenario. The difference in loading between this solution and the multiple-cache solution grows as more TimesTen servers are deployed.
    It should be noted that the operational management of this solution is a little more complex than for the first scenario, since the A/S pair must be monitored and a 'failover' triggered if there is some failure within the pair.
    Hope that helps a little.
    Chris
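
    For reference, a READONLY cache group with a time-based autorefresh interval (the T1 in the example above) looks roughly like this; the table and columns are hypothetical:
    CREATE READONLY CACHE GROUP ro_cg
    AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS   -- this is the T1 interval
    FROM
    oratt.accounts (
      acct_id NUMBER NOT NULL,
      balance NUMBER,
      PRIMARY KEY (acct_id)
    );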

  • More than one root table ,how to design cache group ?

    hi,
    Each cache group can have only one root table and many child tables. My relational model is:
    A (id number, name ..., primary key id)
    B (id number, ..., primary key id)
    A_B_rel (aid number, bid number, foreign key aid references a(id),
    foreign key bid references b(id))
    My select statement is "select ... from a, b, a_b_rel where ....".
    I want to cache these three tables; how should I create the cache groups?
    My design is three AWT cache groups: cache group A for A, cache group B for B, cache group AB for A_B_rel.
    Is there a better solution?

    As you have discovered, you cannot put all three of these tables into one cache group. For READONLY cache groups the solution is simple, put two of the tables (say A and A_B) in one cache group and the other table (B) in a different cache group and make sure that both use the same AUTOREFRESH interval.
    For your case, using AWT cache groups, the situation is a bit more complicated. You must cache the tables as two different cache groups as mentioned above, but you cannot define a foreign key relationship in TimesTen between tables in different cache groups. Hence you will need to add logic to your application to check and enforce the 'missing' foreign key relationship (B + A_B in this example) to ensure that you do not inadvertently insert data that would violate the FK relationship defined in Oracle. Otherwise you could insert invalid data in TimesTen and this would then fail to propagate to Oracle.
    Chris
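
    A rough sketch of that split, using the tables from the question (column types are assumed, and the tables must of course already exist in Oracle):
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP cg_a_ab
    FROM
    a (
      id NUMBER NOT NULL,
      name VARCHAR2(30),
      PRIMARY KEY (id)
    ),
    a_b_rel (
      aid NUMBER NOT NULL,
      bid NUMBER NOT NULL,
      PRIMARY KEY (aid, bid),
      FOREIGN KEY (aid) REFERENCES a (id)
    );
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP cg_b
    FROM
    b (
      id NUMBER NOT NULL,
      PRIMARY KEY (id)
    );
    The FK from a_b_rel.bid to b.id cannot be declared here, so the application has to enforce it, as Chris describes above.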

  • How to query data from grid cache group after created global AWT group

    It is me again.
    As I mentioned in my previous posts, I am in the process of setting up an IMDB grid environment, and I am now at the stage of creating cache groups. I created a global AWT cache group on one node (cachealone2), but I cannot query this global cache group from the other node (cachealone1).
    Thanks Chris and J, I have successfully set up the IMDB grid environment and have two nodes in this grid, as below:
    Command> call ttGridNodeStatus;
    < MYGRID, 1, 1, T, igs_imdb02, MYGRID_cachealone1_1, 10.214.10.176, 5001, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    < MYGRID, 2, 1, T, igsimdb01, MYGRID_cachealone2_2, 10.214.10.119, 5002, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    2 rows found.
    I then created the global AWT cache group on cachealone2:
    Command> cachegroups;
    Cache Group CACHEUSER.SUBSCRIBER_ACCOUNTS:
    Cache Group Type: Asynchronous Writethrough global (Dynamic)
    Autorefresh: No
    Aging: LRU on
    Root Table: ORATT.SUBSCRIBER
    Table Type: Propagate
    1 cache group found.
    Command> SELECT * FROM oratt.subscriber;
    0 rows found.
    However, I cannot query this from the other node, cachealone1:
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber;
    2206: Table ORATT.SUBSCRIBER not found
    This is the example from the Oracle docs; I am not sure what I have missed. Thanks for your help.

    Sounds like you have not created the global AWT cache group in the second datastore. There is a multi-step process needed to roll out a cache grid, and various things must be done on each node in the correct order. Have you done that?
    Try checking out the QuickStart example here:
    http://download.oracle.com/otn_hosted_doc/timesten/1121/quickstart/index.html
    Chris
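
    As a hedged sketch of the usual missing step: once cachealone1 is attached to the grid and its cache admin user/password are set there too, the same global cache group has to be created on that node as well (the column list below is assumed, not taken from the post; check the exact CREATE CACHE GROUP syntax against your release's SQL reference):
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP cacheuser.subscriber_accounts
    FROM
    oratt.subscriber (
      subscriberid NUMBER NOT NULL,
      name         VARCHAR2(100),
      PRIMARY KEY (subscriberid)
    )
    AGING LRU ON;
    Only after that will SELECT * FROM oratt.subscriber work on cachealone1, and being dynamic it can then fault rows in on primary-key lookups.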

  • Table contains GL account and Symbolic account

    Hi All ,
    Is there any table, function module or BAPI with which we can find the G/L account (expense account) for any symbolic account?
    The expense account is maintained against the symbolic account through configuration, using the path (Financial Accounting (New) -> Travel Management -> Travel Expenses -> Transfer Accounting -> Conversion of symbolic account to expense account). How can I get this account number if I know the symbolic account? Where is this data stored?
    Thanks In advance.
    Regards,
    Sijin K P

    here we go:
    Payroll > Payroll: your country > Posting to Financial Accounting > Activities in the HR-System > Employee grouping and symbolic accounts > Define symbolic accounts

  • Unable to create Cache Group from Cache Administrator

    Folks,
    I am attempting to create a cache group from the Cache Administrator.
    I have set all the data source properties and am able to login to the data source but when I attempt to create a cache group i.e. I specify the name & type of cache group, I get this message in red at the bottom saying "Gathering table information, please wait" and... that's it. Nothing happens!
    I am able to move the cursor etc. but the cache group is not defined.
    Anybody have any suggestions as to what I'm doing wrong? Any help would be appreciated!
    keshava

    You cannot have multiple root tables within one cache group. The requirements for putting tables together into one cache group are very strict; there must be one top level table (the root table) and there can optionally be multiple child tables. The child tables must be related via foreign keys either to the root table or to a child table higher in the hierarchy.
    The solution for your case is to put one of the root tables and the child table into one cache group and the other root table into a separate cache group. If you do that you need to take care of a few things:
    1. You cannot define any foreign keys between tables in different cache groups in TimesTen (the keys can exist in Oracle), so the application must enforce the referential integrity itself for those cases.
    2. If you load data into one cache group (using LOAD CACHE GROUP or 'load on demand') then TimesTen will not automatically load the corresponding data into the other cache group (since it does not know about the relationship). The application will need to load the data into the other cache group explicitly (a short example is sketched below).
    There are no issues regarding transactional consistency when changes are pushed to Oracle. TimesTen correctly maintains and enforces transactional consistency regardless of how tables are arranged in cache groups.
    Chris
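
    To illustrate point 2, a sketch of loading both cache groups with matching predicates so that related data ends up in both (cache group names, table names and the predicate are hypothetical):
    LOAD CACHE GROUP cg_orders_items WHERE (oratt.orders.region = 'EMEA') COMMIT EVERY 1000 ROWS;
    LOAD CACHE GROUP cg_products WHERE (oratt.products.region = 'EMEA') COMMIT EVERY 1000 ROWS;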

  • RAC degradation by cache group

    Hi:
    For about 3 weeks now I have been seeing my Oracle RAC database degrade.
    According to Enterprise Manager, we have found that the TimesTen process has many commit enqueues. On the Oracle RAC there is an application which is performing close to 200 inserts per second. At some point in time the TimesTen enqueues cause this application's inserts to enqueue on commits as well, so my Oracle application gets delays on commits.
    The cache groups from TimesTen to Oracle are declared as READONLY using AUTOREFRESH, with a refresh interval of 1 minute. I have 6 cache groups using this configuration; in fact I have 3 databases using those cache groups, plus replicas for each of them.
    Has anyone had this kind of issue?

    It's just a numerical designation for that particular protocol version of Cache Connect.

  • SMP 2.3.4 purge cache group not working

    Hi to all,
    We have developed a native Android app on SMP 2.3.4. The app uses on-demand cache groups and, apart from some master data synchronizations, it uses several create operations to write data to an Oracle database.
    The problem is that, from the SCC, it is not possible to purge the Logically Deleted Row Count. The entries are successfully transferred from Active Row Count to Logically Deleted Row Count, but purging does nothing.
    At SMP 2.1.3 the same scenario works: logically deleted rows can be removed from the CDB by purging. The problem is that users send data to the CDB and then to the backend by submitting create operations, and the data cannot be deleted from the CDB even when it is marked as logically deleted.
    The only way to purge the logically deleted data, is to delete the package users from the SCC tab package users.
    Any suggestions? Any workaround? Is it safe to suggest to the customer deleting the package users and then purging the data?
    Thanks

    Hi,
    I always do a sync from the device, so the entries are transferred to the logically deleted column. The problem is that purging does not have any effect. According to note 1879977:
    The rows in the SUP Cache Database with the column LOGICAL_DEL set to 1 should only be removed if the Last Modified Date (LMD) for the row in question is older than the oldest synchronization time in the system for a specific Mobile Business Object (MBO). As the oldest synchronization time is not taken into consideration during the purge or during the automatic Synchronization Cache Cleanup task, the rows in question are getting deleted.
    To my understanding, the above can never be true, because the app works as follows: the user enters the application, synchronizes to download all the necessary data to the device, and then performs some create operations. Afterwards, the user synchronizes again in order to send the data back to the backend. So the LMD will always be newer than the oldest sync time (if I understood correctly, the oldest sync time is the first sync time).
    As a result, all the data stays in the CDB as logically deleted, affecting the CDB size. Attaching a screenshot.
    At SUP 2.1.3, logically deleted entries can certainly be deleted (a version before note 1879977). Is there any safe workaround to delete unused entries from the CDB?
    One last question: is the data created on the device side (e.g. data from create operations) automatically deleted from the device's local DB when it is successfully transferred to the backend?
    Thanks
    EDIT: I enabled 'partition by requester and device identity' on the on-demand cache groups that have CREATE operations, and I also added myDB.synchronize(); at the end of all the synchronizations, and now my data is somehow automatically purged after reaching the backend! For example, when sending data back, the cache group automatically goes from 100 entries to 0 without purging!

  • Aggregate query on global cache group table

    Hi,
    I set up two global cache nodes. As we know, a global cache group is dynamic.
    As I understand it, the cache group can be dynamically loaded by primary key or foreign key.
    There are three records in the Oracle cached table; one record is loaded in node A and the other two records in node B.
    Oracle:
    1 Java
    2 C
    3 Python
    Node A:
    1 Java
    Node B:
    2 C
    3 Python
    If I run select count(*) on node A or node B, the result is 1 and 2 respectively.
    The questions are:
    How can I get the real count of 3?
    Is it reasonable to do this kind of query on a global cache group table?
    One idea I have is to create another read-only node for aggregation queries, but that seems weird.
    Thanks very much.
    Regards,
    Nesta
    Edited by: user12240056 on Dec 2, 2009 12:54 AM

    Do you mean something like
    UPDATE sometable SET somecol = somevalue;
    where you are updating all rows (or where you may use a WHERE clause that matches many rows and is not an equality)?
    This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated then you could simply execute multiple individual updates. If the number of rows is large or you do not know all the keys in advance then maybe you would adopt the approach of ensuring that all relevant rows are in the local cache grid node already via LOAD CACHE GROUP ... WHERE ... (sketched below). Alternatively, if you do not need Grid functionality you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
    I would not try and use JTA to update rows in multiple grid nodes in one transaction; it will be slow and you would have to know which rows are located in which nodes...
    Chris
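
    A sketch of that pre-load approach (cache group name, table and predicate are all hypothetical):
    LOAD CACHE GROUP cacheuser.lang_cg WHERE (oratt.languages.category = 'SCRIPTING') COMMIT EVERY 1000 ROWS;
    UPDATE oratt.languages SET supported = 'Y' WHERE category = 'SCRIPTING';
    The LOAD pulls every matching row into the local grid node first, so the subsequent multi-row UPDATE then runs entirely against local data.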

  • A qustion about cache group

    hello, chris:
    Now we have a situation: we have cache groups and replication, but the replication scheme does not include the cache groups. When we modify the oraclePwd value we need to call "call ttCacheUidPwdSet(***,***);" again, and then the error "The operation cannot be executed while the Replication Agent for this datastore is running." appears. How can we avoid this situation? We don't want to restart the rep agent, because during the restart the application sees some timeouts. Thank you...
    The cache group type is READONLY.
    Edited by: user578558 on 2009-1-15 7:42 PM

    There is no way to avoid this situation. Many operations, including setting the cache userid/password, require that the replication agent be stopped while they are executed. This should only be an issue for the application if you are using RETURN RECEIPT or RETURN TWOSAFE replication. In that case, when the repagent is stopped the application may receive a return service timeout warning (8170). However, the impact of this can be minimised by ensuring that your replication configuration includes appropriate STORE clauses with RETURN SERVICES OFF WHEN REPLICATION STOPPED (see the sketch below). However, even with this clause the application may receive one warning when the repagent is stopped. Applications that use RETURN RECEIPT or RETURN TWOSAFE must be coded to expect 8170 warnings and to react accordingly.
    Chris
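
    A rough sketch of such a STORE clause inside a replication scheme (datastore and host names are hypothetical; adapt the element definition to your actual scheme):
    CREATE REPLICATION rep1
      ELEMENT e1 DATASTORE
        MASTER ds1 ON "host1"
        SUBSCRIBER ds2 ON "host2"
        RETURN TWOSAFE
      STORE ds1 ON "host1"
        RETURN SERVICES OFF WHEN REPLICATION STOPPED
      STORE ds2 ON "host2"
        RETURN SERVICES OFF WHEN REPLICATION STOPPED;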

  • Connect to cache group

    hi ..
    I have 2 TimesTen DSNs. If I add a new cache group in the first DSN, I can see the contents of the new cache group's table.
    But in the second DSN, if I add a new cache group and query the new cache group's table, TimesTen returns zero rows.
    Why, and how can I resolve this problem?
    thanks

    Can you please provide:
    1. Details of exact TimesTen version being used
    2. DSN definitions for both datastores
    3. The steps you are performing at each datastore (in detail).
    4. The result (including any errors etc.) after each step.
    Thanks,
    Chris

  • Grouping Brushes and Symbols

    Please add the possibility to group brushes and symbols. Thanks.

    Place the file in a document. Once it is placed in the document, drag the art into the Symbols panel; do the same for the Brushes panel, where you will have a choice of what kind of brush you wish to make. You can also drag it into the Swatches panel to make a pattern fill. You can save the swatches, brushes and symbols as libraries for use in the future.

  • Synchronous writethrough and  Asynchronous writethrough cache group

    Hi!
    My question is: can we use the PassThrough feature with synchronous or asynchronous writethrough?
    And at which level: PassThrough=0, 1, 2 or 3?
    Please help.
    regards
    USman

    Yes, PassThrough can be used with AWT and SWT cache groups. Any value is allowed but the only values that make sense are 0, 1 and 3. For AWT and SWT, PassThrough=2 is the same as PassThrough=1.
    Chris
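
    For reference, PassThrough is a connection attribute, so it can be set in the DSN definition or in the connection string, for example (DSN and user names here are hypothetical):
    Command> connect "DSN=cachedb;UID=cacheuser;PassThrough=1";
    With PassThrough=1, statements that TimesTen cannot handle (for example, ones referencing tables that exist only in Oracle) are passed through to Oracle.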

  • IMDB Cache group load and long running transaction

    Hello,
    We are investigating the use of IMDB Cache to cache a number of large Oracle tables. When loading the cache I have noticed logs accumulating and I am not quite sure why this should be. I have a read-only cache group consisting of 3 tables with approximately 88 million rows, 74 million rows and 570 million rows in each table. To load the cache group I run the following -
    LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows -
    Command> call ttLogHolds ;
    < 0, 12161024, Long-Running Transaction      , 1.1310 >
    < 170, 30025728, Checkpoint                    , Entity.ds0 >
    < 315, 29945856, Checkpoint                    , Entity.ds1 >
    3 rows found.
    I read this as saying that logs from 0 to current must be kept for the long-running transaction. From what I can see, the long-running transaction is the cache group load. Is this expected? I was expecting the commit in the load cache group to allow the logs to be deleted. I am able to query the contents of the tables at various times during the load, so I can see that the commit is taking place.
    Thanks
    Mark

    Hello,
    I couldn't recall whether I had changed the Autocommit settings when I ran the load so I tried a couple more runs. From what I could see the value of autocommit did not influence how the logs were treated. For example -
    1. Autocommit left as the default -
    Connection successful: DSN=Entity;UID=cacheadm;DataStore=/prod100/oradata/ENTITY/Entity;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so;LogDir=/prod100/oradata/ENTITY;PermSize=66000;TempSize=2000;TypeMode=0;OracleNetServiceName=TRAQPP.world;
    (Default setting AutoCommit=1)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows a long-running transaction -
    Command> call ttlogholds ;
    < 0, 11915264, Long-Running Transaction      , 1.79 >
    < 474, 29114368, Checkpoint                    , Entity.ds0 >
    < 540, 1968128, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    2011-01-19 14:10:03.135
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: timestenorad
    28427   0x16fd6910            7.26     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69211971680          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69211971680          TRAQDBA.AADNA
                                                       Command   69211971680          S     69211971680         
                                  8.10029  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.10582  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.10477  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.10332  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.10546  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.10261  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.10637  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.10669  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.10111  Active      Database  0x01312d0001312d00   IX    0                   
    Program File Name: ttIsqlCmd
    29317   0xde257d0             1.79     Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211584104          SYS.TABLES
                                                       Command   69211584104          S     69211584104         
    11 outstanding transactions found
    And the commands were
    < 69211971680, 2048, 1, 1, 0, 0, 1392, CACHEADM                       , load cache group CACHEADM.ER_RO_CG commit every 1000 rows parallel 10 _tt_bulkFetch 4096 _tt_bulkInsert 1000 >
    < 69211584104, 2048, 1, 1, 0, 0, 1400, CACHEADM                       , LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 >
    Running the load again with autocommit off -
    Command> AutoCommit
    autocommit = 1 (ON)
    Command> AutoCommit 0
    Command> AutoCommit
    autocommit = 0 (OFF)
    Command> LOAD CACHE GROUP er_ro_cg COMMIT EVERY 1000 ROWS PARALLEL 10 ;
    ttLogHolds shows a long-running transaction -
    Command>  call ttlogholds ;
    < 1081, 6617088, Long-Running Transaction      , 2.50157 >
    < 1622, 10377216, Checkpoint                    , Entity.ds0 >
    < 1668, 55009280, Checkpoint                    , Entity.ds1 >
    3 rows found.
    And ttXactAdmin shows only the load running -
    er.oracle$ ttXactAdmin entity                                             
    2011-01-20 07:23:54.125
    /prod100/oradata/ENTITY/Entity
    TimesTen Release 11.2.1.6.1
    Outstanding locks
    PID     Context            TransID     TransStatus Resource  ResourceID           Mode  SqlCmdID             Name
    Program File Name: ttIsqlCmd
    2368    0x12bb37d0            2.50157  Active      Database  0x01312d0001312d00   IX    0                   
                                                       Row       BMUFVUAAAAKAAAAPD0   S     69211634216          SYS.TABLES
                                                       Command   69211634216          S     69211634216         
    Program File Name: timestenorad
    28427   0x2abb580af2a0        7.2358   Active      Database  0x01312d0001312d00   IX    0                   
                                                       Table     718080               W     69212120320          TRAQDBA.ENT_TO_EVIDENCE_MAP
                                                       Table     718064               W     69212120320          TRAQDBA.AADNA
                                                       Command   69212120320          S     69212120320         
                                  8.24870  Active      Database  0x01312d0001312d00   IX    0                   
                                  9.26055  Active      Database  0x01312d0001312d00   IX    0                   
                                 10.25659  Active      Database  0x01312d0001312d00   IX    0                   
                                 11.25469  Active      Database  0x01312d0001312d00   IX    0                   
                                 12.25694  Active      Database  0x01312d0001312d00   IX    0                   
                                 13.25465  Active      Database  0x01312d0001312d00   IX    0                   
                                 14.25841  Active      Database  0x01312d0001312d00   IX    0                   
                                 15.26288  Active      Database  0x01312d0001312d00   IX    0                   
                                 16.24924  Active      Database  0x01312d0001312d00   IX    0                   
    11 outstanding transactions found
    What I did notice was that TimesTen runs three queries against the Oracle server: the first selects from the parent table, the second joins the parent to the first child, and the third joins the parent to the second child. ttLogHolds seems to show a long-running transaction once the second query starts. For example, I was monitoring the load of the parent table, checking ttLogHolds to watch for a long-running transaction. As shown below, a long-running transaction entry appeared around 09:01:41 -
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:37 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39278592, Checkpoint                    , Entity.ds1 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    2 rows found.
    Command> select sysdate from dual ;
    < 2011-01-20 09:01:41 >
    1 row found.
    Command> call ttlogholds ;
    < 2427, 39290880, Long-Running Transaction      , 2.50167 >
    < 2580, 22136832, Checkpoint                    , Entity.ds0 >
    < 2929, 65347584, Checkpoint                    , Entity.ds1 >
    3 rows found.
    This roughly matches the time at which the query that selects the rows for the first child table started in Oracle:
    traqdba@TRAQPP> select sm.sql_id,sql_exec_start,sql_fulltext
      2  from v$sql_monitor sm, v$sql s
      3  where sm.sql_id = 'd6fmfrymgs5dn'
      4  and sm.sql_id = s.sql_id ;
    SQL_ID        SQL_EXEC_START       SQL_FULLTEXT
    d6fmfrymgs5dn 20/JAN/2011 08:59:27 SELECT "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_
                                       MAP"."EVIDENCE_KEY", "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."EVIDENCE_VALUE", "TRAQDBA"
                                       ."ENT_TO_EVIDENCE_MAP"."CREATED_DATE_TIME" FROM "TRAQDBA"."ENT_TO_EVIDENCE_MAP",
                                        "TRAQDBA"."AADNA" WHERE "TRAQDBA"."ENT_TO_EVIDENCE_MAP"."ENTITY_KEY" = "TRAQDBA
                                       "."AADNA"."ADR_ADDRESS_NAME_KEY"
    Elapsed: 00:00:00.00
    Thanks
    Mark
