How does INSERT work on a global cache group?

Hi all, I'm doing some tests to see how many transactions per second TimesTen can process.
With a normal "direct" configuration I reached 5200 transactions per second on my machine (an ordinary Windows workstation).
Now I'm using global cache groups because we need more than one datasource, and they have to stay in sync with each other.
From what I read in the guide, global cache groups are perfect for this purpose.
After configuring the two environments with different TimesTen databases (those machines are Sun servers, much better than my workstation :P), I tried a simple test
of inserts on a single node.
But I reached only 1500 transactions per second at most.
The 5200 value when testing on my workstation was with a normal dynamic cache group, not a global one, so I was wondering whether this performance issue is related to how the INSERT statement works on a global cache group.
Some questions:
1) Before the insert is done on Oracle, does the cache group run a query against the other global cache group to avoid conflicts on the primary key?
2) Is any operation performed from one global cache to the others when a statement is sent?
The two global caches are otherwise working well, locking and changing the owner of a cache instance, so no problems detected at the moment in "how they have to work" :).
The problem is only that we need the global cache to be much faster :P, at least the 5200 transactions per second I reached on my workstation.
Thanks in advance for any suggestions.
P.S.: I don't know much about the server configuration (OS Solaris, some version), but they are good machines :).

Okay, the rows here are quite large so you need to do some tuning. In the ODBC (DSN) parameters I see that you are using the default log buffer and log file sizes. These are totally inadequate for this kind of workload. You should increase both to a larger value. For this kind of workload typical values would be in the range of 256 MB to 1024 MB for both the log buffer and the log file size. If you are using 32-bit TimesTen you may be constrained on how large you can make these, since the log buffer is part of the overall datastore memory allocation, which on 32-bit platforms is quite limited. On 64-bit TimesTen you have no such restriction (as long as the machine has enough memory). Here is an example of the directives you would use to set both to 1 GB. The key one is the log buffer size, but it is important that LogFileSize is >= LogBufMB.
[my_ds]
LogBufMB=1024
LogFileSize=1024
For this change to take effect you need to shutdown (unload from memory) and restart (load back into memory) the datastore.
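For example, if your RAM policy allows manual control, something like the following with the ttAdmin utility should work (my_ds is the DSN from the snippet above; check the ttAdmin documentation for your RAM policy):
ttAdmin -ramUnload my_ds
ttAdmin -ramLoad my_ds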
Secondly, it's hard to be sure from your example code, but it looks like maybe you are preparing the INSERT each time you execute it? If that is the case, this is very expensive and unnecessary. You only need to prepare once and then you can execute many times, as follows:
insPs = connection.prepareStatement("Insert into test.transactions (ID_ ,NUMBE,SHORT_CODE,REQUEST_TIME) Values (?,?,?,?)");
for (int i = 1; i < 1000000; i++) {
    insPs.setString(1, "" + getSequence());
    insPs.setString(2, "TEST_CODE");
    insPs.setString(3, "TT Insert test");
    insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
    insPs.execute();
}
connection.commit();
This should improve performance noticeably. If you can get away with only committing every 'N' inserts you will see a further uplift. For example:
int COMMIT_INTVL = 100;
for (int i = 1; i < 1000000; i++) {
    insPs.setString(1, "" + getSequence());
    insPs.setString(2, "TEST_CODE");
    insPs.setString(3, "TT Insert test");
    insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
    insPs.execute();
    if ((i % COMMIT_INTVL) == 0)
        connection.commit();
}
connection.commit();
And lastly, the fastest way of all is to use JDBC batch operations; see the JDBC documentation about batch operations. That will improve insert performance still more.
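In case it helps, here is a minimal sketch of the batch approach. It is hedged: it reuses the test.transactions table and getSequence() helper from your own code, and BATCH_SIZE is just an illustrative value to tune.
insPs = connection.prepareStatement("Insert into test.transactions (ID_ ,NUMBE,SHORT_CODE,REQUEST_TIME) Values (?,?,?,?)");
int BATCH_SIZE = 256;
for (int i = 1; i < 1000000; i++) {
    insPs.setString(1, "" + getSequence());
    insPs.setString(2, "TEST_CODE");
    insPs.setString(3, "TT Insert test");
    insPs.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
    insPs.addBatch();              // queue the row instead of executing it immediately
    if ((i % BATCH_SIZE) == 0) {
        insPs.executeBatch();      // execute all queued inserts in one call
        connection.commit();
    }
}
insPs.executeBatch();              // flush any remaining queued rows
connection.commit();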
Finally, a word of caution. Although you will probably be able to easily achieve more than 5000 inserts per second into TimesTen, TimesTen may not be able to push the data to Oracle at this rate; the rate of push to Oracle is likely to be significantly slower. Thus if you are executing a continuous high-volume insert workload into TimesTen, two things will happen: (a) the datastore will become full and unable to accept any more inserts until you explicitly remove some data, and (b) a backlog will build up (in the TT transaction logs on disk) of data waiting to be pushed to Oracle.
This kind of setup is not really suited to support sustained high insert levels; you need to look at the maximum that can be sustained for the whole application -> TimesTen -> Oracle pathway. Of course, if the workload is 'bursty' then this may not be an issue at all.
Chris

Similar Messages

  • Aggregate query on global cache group table

    Hi,
    I set up two global cache nodes. As we know, a global cache group is dynamic.
    The cache group can be dynamically loaded by primary key or foreign key, as I understand it.
    There are three records in the Oracle cache table; one record is loaded in node A and the other two records in node B.
    Oracle:
    1 Java
    2 C
    3 Python
    Node A:
    1 Java
    Node B:
    2 C
    3 Python
    If I select count(*) in node A or node B, the result is 1 and 2 respectively.
    The questions are:
    How can I get the real count, 3?
    Is it reasonable to run this query on a global cache group table?
    One idea I have is to create another read-only node for aggregation queries, but it seems weird.
    Thanks very much.
    Regards,
    Nesta

    Do you mean something like
    UPDATE sometable SET somecol = somevalue;
    where you are updating all rows (or where you use a WHERE clause that matches many rows and is not an equality)?
    This is not something you can do in one step with a GLOBAL DYNAMIC cache group. If the number of rows that would be affected is small and you know the keys of every row that must be updated, then you could simply execute multiple individual updates. If the number of rows is large or you do not know all the keys in advance, then maybe you could adopt the approach of ensuring that all relevant rows are already in the local cache grid node via LOAD CACHE GROUP ... WHERE ... Alternatively, if you do not need grid functionality, you could consider using a single cache with a non-dynamic (explicitly loaded) cache group and just pre-load all the data.
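    As a hedged sketch only (the cache group, table and column names here are made up for illustration), that approach would look something like:
    LOAD CACHE GROUP cg_lang WHERE id BETWEEN 1 AND 3 COMMIT EVERY 256 ROWS;
    UPDATE lang SET name = 'NEW' WHERE id BETWEEN 1 AND 3;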
    I would not try to use JTA to update rows in multiple grid nodes in one transaction; it would be slow and you would have to know which rows are located in which nodes...
    Chris

  • Can anyone explain how this works (VLANs and bridge groups)

    Can someone please explain how this works... I have started to have problems but nothing has changed. My problem is that VLANs 1 and 1000 are getting blocked on the switchport where the root bridge is attached.
    ROOT BRIDGE:
    ssid state
    station-role root bridge
    rts threshold 4000
    concatenation
    interface Dot11Radio0.1
    encapsulation dot1Q 1 native
    no ip route-cache
    no snmp trap link-status
    bridge-group 1
    bridge-group 1 spanning-disabled
    interface Dot11Radio0.911
    encapsulation dot1Q 911
    no ip route-cache
    no snmp trap link-status
    bridge-group 5
    interface Dot11Radio0.1000
    encapsulation dot1Q 1000
    no ip route-cache
    no snmp trap link-status
    bridge-group 2
    bridge-group 2 spanning-disabled
    interface Dot11Radio0.2001
    encapsulation dot1Q 2001
    no ip route-cache
    no snmp trap link-status
    bridge-group 253
    bridge-group 253 spanning-disabled
    interface Dot11Radio0.2120
    encapsulation dot1Q 2120
    no ip route-cache
    no snmp trap link-status
    bridge-group 7
    interface Dot11Radio0.2330
    encapsulation dot1Q 2330
    no ip route-cache
    no snmp trap link-status
    bridge-group 3
    bridge-group 3 spanning-disabled
    interface Dot11Radio0.2336
    encapsulation dot1Q 2336
    no ip route-cache
    no snmp trap link-status
    bridge-group 4
    interface Dot11Radio0.2350
    encapsulation dot1Q 2350
    no ip route-cache
    no snmp trap link-status
    bridge-group 6
    interface Dot11Radio0.2901
    encapsulation dot1Q 2901
    no ip route-cache
    no snmp trap link-status
    bridge-group 255
    bridge-group 255 spanning-disabled
    interface Dot11Radio0.2902
    encapsulation dot1Q 2902
    no ip route-cache
    no snmp trap link-status
    bridge-group 254
    bridge-group 254 spanning-disabled
    interface FastEthernet0
    no ip address
    no ip route-cache
    interface FastEthernet0.1
    encapsulation dot1Q 1
    no ip route-cache
    no snmp trap link-status
    interface FastEthernet0.911
    encapsulation dot1Q 911
    no ip route-cache
    no snmp trap link-status
    bridge-group 5
    interface FastEthernet0.1000
    encapsulation dot1Q 1000 native
    ip address 10.0.32.10 255.255.255.0
    no ip route-cache
    no snmp trap link-status
    bridge-group 1
    interface FastEthernet0.2001
    encapsulation dot1Q 2001
    no ip route-cache
    no snmp trap link-status
    bridge-group 253
    bridge-group 253 spanning-disabled
    interface FastEthernet0.2120
    encapsulation dot1Q 2120
    no ip route-cache
    no snmp trap link-status
    bridge-group 7
    interface FastEthernet0.2330
    encapsulation dot1Q 2330
    no ip route-cache
    no snmp trap link-status
    bridge-group 3
    interface FastEthernet0.2336
    encapsulation dot1Q 2336
    no ip route-cache
    no snmp trap link-status
    bridge-group 4
    interface FastEthernet0.2350
    description 81 River Rd - Labor
    encapsulation dot1Q 2350
    no ip route-cache
    no snmp trap link-status
    bridge-group 6
    interface FastEthernet0.2901
    encapsulation dot1Q 2901
    no ip route-cache
    no snmp trap link-status
    bridge-group 255
    bridge-group 255 spanning-disabled
    interface FastEthernet0.2902
    encapsulation dot1Q 2902
    no ip route-cache
    no snmp trap link-status
    bridge-group 254
    bridge-group 254 spanning-disabled
    interface BVI1
    ip address 10.0.32.10 255.255.255.0
    no ip route-cache
    ip default-gateway 10.0.32.1

    NON-ROOT BRIDGE#2:
    ssid state
    station-role non-root bridge
    rts threshold 4000
    concatenation
    infrastructure-client
    interface Dot11Radio0.1
    encapsulation dot1Q 1 native
    no ip route-cache
    bridge-group 1
    bridge-group 1 spanning-disabled
    interface Dot11Radio0.1000
    encapsulation dot1Q 1000
    no ip route-cache
    bridge-group 254
    bridge-group 254 spanning-disabled
    interface Dot11Radio0.2001
    encapsulation dot1Q 2001
    no ip route-cache
    bridge-group 252
    bridge-group 252 spanning-disabled
    interface Dot11Radio0.2336
    encapsulation dot1Q 2336
    no ip route-cache
    bridge-group 251
    bridge-group 251 spanning-disabled
    interface Dot11Radio0.2901
    encapsulation dot1Q 2901
    no ip route-cache
    bridge-group 253
    bridge-group 253 spanning-disabled
    interface Dot11Radio0.2902
    encapsulation dot1Q 2902
    no ip route-cache
    bridge-group 255
    bridge-group 255 spanning-disabled
    interface FastEthernet0
    no ip address
    no ip route-cache
    hold-queue 80 in
    interface FastEthernet0.1000
    encapsulation dot1Q 1000 native
    no ip route-cache
    bridge-group 1
    interface FastEthernet0.2001
    encapsulation dot1Q 2001
    no ip route-cache
    bridge-group 252
    bridge-group 252 spanning-disabled
    interface FastEthernet0.2336
    encapsulation dot1Q 2336
    no ip route-cache
    bridge-group 251
    bridge-group 251 spanning-disabled
    interface FastEthernet0.2901
    encapsulation dot1Q 2901
    no ip route-cache
    bridge-group 253
    bridge-group 253 spanning-disabled
    interface FastEthernet0.2902
    encapsulation dot1Q 2902
    no ip route-cache
    bridge-group 255
    bridge-group 255 spanning-disabled
    interface BVI1
    ip address 10.0.32.11 255.255.255.0
    no ip route-cache
    ip default-gateway 10.0.32.1

  • How to query data from a grid cache group after creating a global AWT cache group

    It is me again.
    As I mentioned in my previous posts, I am in the process of setting up an IMDB grid environment, and now I am at the stage of creating cache groups. I created a global AWT cache group on one node (cachealone2), but I cannot query this global cache group from the other node (cachealone1).
    Thanks Chris and J, I have successfully set up the IMDB grid environment and have two nodes in this grid, as below:
    Command> call ttGridNodeStatus;
    < MYGRID, 1, 1, T, igs_imdb02, MYGRID_cachealone1_1, 10.214.10.176, 5001, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    < MYGRID, 2, 1, T, igsimdb01, MYGRID_cachealone2_2, 10.214.10.119, 5002, <NULL>, <NULL>, <NULL>, <NULL>, <NULL> >
    2 rows found.
    and I created the global AWT cache group on cachealone2:
    Command> cachegroups;
    Cache Group CACHEUSER.SUBSCRIBER_ACCOUNTS:
    Cache Group Type: Asynchronous Writethrough global (Dynamic)
    Autorefresh: No
    Aging: LRU on
    Root Table: ORATT.SUBSCRIBER
    Table Type: Propagate
    1 cache group found.
    Command> SELECT * FROM oratt.subscriber;
    0 rows found.
    However, I cannot query this from the other node, cachealone1:
    Command> SELECT * FROM oratt.subscriber WHERE subscriberid = 1004;
    2206: Table ORATT.SUBSCRIBER not found
    The command failed.
    Command> SELECT * FROM oratt.subscriber;
    2206: Table ORATT.SUBSCRIBER not found
    This is the example from the Oracle docs; I am not sure what I missed. Thanks for your help.

    Sounds like you have not created the global AWT cache group in the second datastore? There is a multi-step process needed to roll out a cache grid, and various things must be done on each node in the correct order. Have you done that?
    Try checking out the QuickStart example here:
    http://download.oracle.com/otn_hosted_doc/timesten/1121/quickstart/index.html
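    In particular, a global cache group normally has to be created (with an identical definition) in each grid member's datastore before it can be used there. As a hedged sketch only (the column list is hypothetical and the exact DDL should be checked against your release), you would run something like this on cachealone1 as well:
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP cacheuser.subscriber_accounts
    FROM oratt.subscriber
    ( subscriberid NUMBER NOT NULL PRIMARY KEY,
      name VARCHAR2(100) );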
    Chris

  • How to use a global variable for reading a query resultset in JDBC lookup?

    Hi Friends,
    Using a JDBC lookup, I am trying to read from a table Emp1 using a user-defined function. In PI 7.0, this function returns values of a single column only, even if you fire a "Select *" query. I am planning to use a global variable (array) that stores the individual column values of the records returned by a "select *" query. I need pointers on how a global variable can be declared and used to achieve the above scenario. Kindly explain with an example. Any help would be appreciated.
    Thanks,
    Amit.

    Hi Amit,
    Sounds like a good idea, but then you would need an external DB and would have to update the table in a thread-safe way.
    Regarding your question on how to work with global variables, please refer to https://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/1352 [original link is broken]
    Rgds
    joel

  • Unable to create Cache Group from Cache Administrator

    Folks,
    I am attempting to create a cache group from the Cache Administrator.
    I have set all the data source properties and am able to log in to the data source, but when I attempt to create a cache group (i.e. I specify the name and type of the cache group), I get a message in red at the bottom saying "Gathering table information, please wait" and... that's it. Nothing happens!
    I am able to move the cursor etc., but the cache group is not defined.
    Does anybody have any suggestions as to what I'm doing wrong? Any help would be appreciated!
    keshava

    You cannot have multiple root tables within one cache group. The requirements for putting tables together into one cache group are very strict; there must be one top level table (the root table) and there can optionally be multiple child tables. The child tables must be related via foreign keys either to the root table or to a child table higher in the hierarchy.
    The solution for your case is to put one of the root tables and the child table into one cache group and the other root table into a separate cache group. If you do that you need to take care of a few things:
    1. You cannot define any foreign keys between tables in different cache groups in TimesTen (the keys can exist in Oracle), so the application must enforce the referential integrity itself for those cases.
    2. If you load data into one cache group (using LOAD CACHE GROUP or 'load on demand') then TimesTen will not automatically load the corresponding data into the other cache group (since it does not know about the relationship). The application will need to load the data into the other cache group explicitly.
    There are no issues regarding transactional consistency when changes are pushed to Oracle. TimesTen correctly maintains and enforces transactional consistency regardless of how tables are arranged in cache groups.
    Chris

  • More than one root table: how to design the cache groups?

    hi,
    Each cache group can have only one root table and many child tables. My relational model is:
    A (id number, name ...., primary key id)
    B (id number, ....., primary key id)
    A_B_rel (aid number, bid number, foreign key aid references A (id),
    foreign key bid references B (id))
    My select statement is "select ... from a, b, a_b_rel where ....",
    and I want to cache these three tables. How should I create the cache groups?
    My design is three AWT cache groups: cache group A for A, cache group B for B, and cache group AB for a_b_rel.
    Is there a better solution?

    As you have discovered, you cannot put all three of these tables into one cache group. For READONLY cache groups the solution is simple: put two of the tables (say A and A_B) in one cache group and the other table (B) in a different cache group, and make sure that both use the same AUTOREFRESH interval.
    For your case, using AWT cache groups, the situation is a bit more complicated. You must cache the tables as two different cache groups as mentioned above, but you cannot define a foreign key relationship in TimesTen between tables in different cache groups. Hence you will need to add logic to your application to check and enforce the 'missing' foreign key relationship (B + A_B in this example) to ensure that you do not inadvertently insert data that would violate the FK relationship defined in Oracle. Otherwise you could insert invalid data in TimesTen and this would then fail to propagate to Oracle.
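    As a hedged sketch only (column types are made up from your description; adjust the DDL for your actual schema), the two cache groups might look like:
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP CG_A_AB
    FROM
    A ( id NUMBER NOT NULL PRIMARY KEY,
        name VARCHAR2(50) ),
    A_B_rel ( aid NUMBER NOT NULL,
              bid NUMBER NOT NULL,
              PRIMARY KEY ( aid, bid ),
              FOREIGN KEY ( aid ) REFERENCES A ( id ) );
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP CG_B
    FROM
    B ( id NUMBER NOT NULL PRIMARY KEY );
    Note that the foreign key from A_B_rel to B is deliberately omitted in TimesTen (it can still exist in Oracle); that is the relationship your application has to enforce.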
    Chris

  • How to load cache group?

    Dear ChrisJenkins,
    My project uses TimesTen. There is a table (using a read-only cache group) in TimesTen,
    e.g.:
    create table A (id number, content varchar(20));
    insert into A values (1, 'a1');
    insert into A values (2, 'a2');
    insert into A values (n, 'an');
    commit;
    The table (A) has loaded 10 rows ('a1' --> 'a10'). If I execute the following SQL:
    "Load cache group A where id >= 2 and id <= 11"
    how will TimesTen execute it?
    My guess:
    TimesTen does not load rows (id = 2 --> 10) because they are already in memory;
    TimesTen only loads the row (id = 11) because memory does not have that row.
    Is that true?
    Thanks, rgds
    TuanTA

    In your example you are using a regular table, not a readonly cache group table. If you are using a readonly cache group then the table would be created like this (note that a cache group table needs a primary key):
    CREATE READONLY CACHE GROUP CG_A
    AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS STATE PAUSED
    FROM
    ORACLEOWNER.A ( ID NUMBER NOT NULL PRIMARY KEY, CONTENT VARCHAR(20) );
    This assumes that the table ORACLEOWNER.A already exists in Oracle with the same schema. The table in TimesTen will start off empty. Also, you cannot insert, delete or update the rows of this table directly in TimesTen (that is why it is called a READONLY cache group); if you try you will get an error. All data for this table has to originate in Oracle. Let's say that in Oracle you now do the following:
    insert into A values (1, 'a1');
    insert into A values (2, 'a2');
    insert into A values (10, 'a10');
    commit;
    Still the table in TimesTen is empty. We can load the table with the data from Oracle using:
    LOAD CACHE GROUP CG_A COMMIT EVERY 256 ROWS;
    Now the table in TimesTen has the same rows as the table in Oracle. Also, the LOAD operation changes the AUTOREFRESH state from PAUSED to ON. You still cannot directly insert/update/delete on this table in TimesTen, but any data changes arising from DML executed on the Oracle table will be captured and propagated to TimesTen by the AUTOREFRESH mechanism. If you now did, in Oracle:
    UPDATE A SET CONTENT = 'NEW' WHERE ID = 3;
    INSERT INTO A VALUES (11, 'a11');
    COMMIT;
    Then, after the next autorefresh cycle (every 10 seconds in this example), the table in TimesTen would contain:
    1, 'a1'
    2, 'a2'
    3, 'NEW'
    4, 'a4'
    5, 'a5'
    6, 'a6'
    7, 'a7'
    8, 'a8'
    9, 'a9'
    10, 'a10'
    11, 'a11'
    So your question does not apply to READONLY cache groups...
    If you used a USERMANAGED cache group then your question could apply (as long as the cache group was not using AUTOREFRESH and the table had not been marked READONLY). In that case a LOAD CACHE GROUP command will only load qualifying rows that do not already exist in the cache table in TimesTen. If rows with the same primary key exist in Oracle they are not loaded, even if the other columns have different values to those in TimesTen. Contrast this with REFRESH CACHE GROUP, which replaces all matching rows in TimesTen with the rows from Oracle.
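    To make the contrast concrete, here is a hedged sketch against the CG_A example above (the exact syntax permitted depends on the cache group type and release):
    LOAD CACHE GROUP CG_A WHERE ID >= 2 AND ID <= 11 COMMIT EVERY 256 ROWS;
    REFRESH CACHE GROUP CG_A COMMIT EVERY 256 ROWS;
    The LOAD only brings in qualifying rows missing from TimesTen, whereas the REFRESH replaces the matching rows with Oracle's current values.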
    Chris

  • How to Insert Sharepoint 2013 Person or Group Field using VBA

    Hello,
    How do I insert a SharePoint 2013 Person or Group field using VBA? I tried setting the field with ID / ID;Name / Name,
    but all attempts failed. Please advise.
    Function SetSP_Field(F_Yer As Single, F_WekNo As Single, F_ProjectNum As String, strField As Variant, strFieldValue As Variant) As Boolean
    ' Updates a single field of the matching list item via ADO; returns True on success
        Dim strSQL      As String
        Dim SP_RecSet   As ADODB.Recordset
        Set SP_RecSet = New ADODB.Recordset
        SetSP_Field = False
        On Error Resume Next
        Call SP_Connect(2)    ' establish the connection (SP_CN and SP_List are set up elsewhere)
        strSQL = "select " & SP_List & ".* From  " & SP_List _
               & " Where [Year] = " & CInt(F_Yer) _
               & "   and [WeekNo] = " & CInt(F_WekNo) _
               & "   and [Compass_Code] = '" & F_ProjectNum & "' ; "
        SP_RecSet.Open strSQL, SP_CN, adOpenStatic, adLockOptimistic
        If SP_RecSet.RecordCount = 0 Then
            Debug.Print "Record for project: " & F_ProjectNum & " not found"
            LogFile.WriteLine ("Record for project " & F_ProjectNum & " not found")
        End If
        If Not SP_RecSet.EOF Then
            SP_RecSet.Fields(strField) = strFieldValue
            SP_RecSet.Update
            If Err.Number <> 0 Then
                MsgBox ("SP Update Error: " & Err.Description)
                GoTo Exit_Fun
            End If
            SetSP_Field = True
        End If
    Exit_Fun:
        SP_RecSet.Close
        Set SP_RecSet = Nothing
    End Function

    Hi Tim,
    Let's verify the following:
    Does this issue occur for other lists in the same site?
    Does this issue occur for all users who don't have full control?
    Does this issue occur on other sites?
    When you created the People & Group column, which Show field did you use? Please try a different Show field for the People & Group column and compare the results.
    In addition, please check if the link is useful:
    http://www.learningsharepoint.com/2013/08/21/empty-value-in-people-picker-after-saving-the-list-form-in-sharepoint/
    Best Regards,
    Wendy
    Wendy Li
    TechNet Community Support

  • How many Read Only Cache Groups?

    How many read-only cache groups can we create in one DSN?
    For example, are 100 possible?
    Thanks
    BR
    Andrzej

    As many as you like. There is no fixed upper limit.
    Chris

  • SMP 2.3.4 purge cache group not working

    Hi to all,
    We have developed a native Android app on SMP 2.3.4. The app uses on-demand cache groups and, apart from some master-data synchronizations, it uses several create operations to write data to an Oracle database.
    The problem is that from the SCC it is not possible to purge the Logically Deleted Row Count. The entries are successfully transferred from Active Row Count to Logically Deleted Row Count, but purging does nothing.
    At SMP 2.1.3 the same scenario works: logically deleted rows can be removed from the CDB by purging. The problem is that users send data to the CDB and then to the backend by submitting create operations, and the data cannot be deleted from the CDB even when it is marked as logically deleted.
    The only way to purge the logically deleted data is to delete the package users from the SCC tab Package Users.
    Any suggestions? Any workaround? Is it safe to suggest that the customer delete the package users and then purge the data?
    Thanks

    Hi,
    I always do a sync from the device, so the entries are transferred to the logically deleted column. The problem is that purging does not have any effect. According to a note (1879977):
    The rows in the SUP Cache Database with the column LOGICAL_DEL set to 1 should only be removed if the Last Modified Date (LMD) for the row in question is older than the oldest synchronization time in the system for a specific Mobile Business Object (MBO). As the oldest synchronization time is not taken into consideration during the purge or during the automatic Synchronization Cache Cleanup task, the rows in question are getting deleted.
    To my understanding, the above can never be true, because the app works as follows: the user enters the application, synchronizes to download all the necessary data to the device, and then performs some create operations. Afterwards, the user synchronizes again in order to send the data back to the backend. So the LMD will always be newer than the oldest sync time (if I understood correctly, the oldest sync time is the first sync time).
    As a result, all the data stays in the CDB as logically deleted, affecting the CDB size. Attaching a screenshot.
    For sure, at SUP 2.1.3 logically deleted entries can be deleted (a version before note 1879977). Is there any safe workaround to delete unused entries from the CDB?
    One last question: is the data created on the device side (e.g. data from create operations) automatically deleted from the device's local DB when it has been successfully transferred to the backend?
    Thanks
    EDIT: I enabled partitioning by requester and device identity on the on-demand cache groups that have CREATE operations, and I also added myDB.synchronize(); at the end of all the synchronizations, and now my data is somehow automatically purged after reaching the backend! For example, when sending data back, the cache group automatically goes from 100 entries to 0 without purging!

  • Which kind of cache group is suitable for intensive insert operations?

    Hi Chris, sorry for calling on you directly, but you have given me many good answers to my many newbie questions these days :)
    You told me that a dynamic cache group is not suitable for intensive insert operations,
    because each INSERT into a child table has to perform an existence check against Oracle, even if the cache group is loaded into RAM manually (please correct me if I am wrong).
    Here I have many log tables that have only a primary key and no foreign references; they are basically used to reflect changes to the related main tables.
    Every insert/update/delete on a main table inserts a log record into the related logging table (no direct foreign references).
    In order to cache these log tables, I have to create an independent cache group for each one, right?
    I do not want to load the log data into RAM, because my application does not use it and the logs would clearly waste my RAM.
    So here comes my question: which kind of cache group should I use to get the best performance without loading them into RAM?
    As I understand it, a dynamic cache group loads data on demand, while a regular cache group needs to load all the data into RAM first and won't load data from Oracle after that?
    Thanks in advance
    SuoNayi

    Let me be more specific. Consider this cache group:
    CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH CACHE GROUP CG_SWT
    FROM
    TPARENT
    ( PPK NUMBER(8,0) NOT NULL PRIMARY KEY,
      PCOL1 VARCHAR2(100) ),
    TCHILD
    ( CPK NUMBER(6,0) NOT NULL PRIMARY KEY,
      CFK NUMBER(8,0) NOT NULL,
      CCOL1 VARCHAR2(20),
      FOREIGN KEY ( CFK ) REFERENCES TPARENT ( PPK ) );
    INSERTs into TPARENT will not do any existence check in Oracle. An INSERT INTO TCHILD has to verify that the corresponding parent row exists. If the parent row exists in TimesTen then no check is done in Oracle. If the parent row does not exist in TimesTen then we have to check whether it exists in Oracle, and if it does we will load it into TimesTen from Oracle (along with any other child rows) before completing the INSERT in TimesTen. So in the case where the parent already exists in TimesTen there is no overhead, but in the other case there is a lot of overhead.
    If your log table is truly not related to the main table (not in TT and not in Oracle either) then they should go into separate cache groups. If each insert into the log table has a unique key and there is no possibility of duplicates then you do not need to load anything into RAM. You can start with an empty table and just insert into it (since each insert is unique). Of course, if you just keep inserting you will eventually fill up the memory in TimesTen. So you need a mechanism to 'purge' no-longer-needed rows from TimesTen (they will still exist in Oracle, of course). There are really two options: investigate TimesTen automatic aging (see documentation) - this may be adequate if the insert rate is not too high - or implement a custom purge mechanism using UNLOAD CACHE GROUP (see documentation).
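    As a hedged illustration only (the cache group name and the timestamp column are hypothetical), such a purge might look like:
    UNLOAD CACHE GROUP CG_LOG WHERE LOG_TIME < SYSDATE - 1;
    Unloading removes the rows from TimesTen only; the already-propagated rows remain in Oracle.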
    Chris

  • How does the Web AS Cache refresh work?

    Hi,
    How does the Web AS Cache refresh work?
    I am looking for a guide or information about this.
    Can anybody help me?
    Thanks.
    Regards.
    Stefan

    Hi Stefan,
    A cache is a kind of buffer where all the recently accessed data is stored,
    so sometimes, even after making changes, when you access the page the old data/page is displayed.
    The Web AS cache refresh works like a normal refresh, deleting all the recently accessed data/pages;
    it clears or frees your memory so that no stale data is served.
    For further details, try a Google search for "Web AS Cache Refresh in SAP-XI".
    /message/514186#514186 [original link is broken] (don't go by the topic name)
    regards..
    vishal

  • 10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE

    Product: ORACLE SERVER
    Date written: 2004-05-25
    10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE
    ===============================================
    PURPOSE
    This note describes the Oracle 10g new feature that allows
    the buffer cache to be flushed manually.
    Explanation
    Introduced as a new feature in Oracle 10g, all data in the SGA buffer cache
    can be cleared by running a single command.
    The "alter system" privilege is required for this operation.
    The command to flush the buffer cache is as follows.
    Caution: this operation can affect database performance, so use it with care.
    SQL > alter system flush buffer_cache;
    Example
    Query x$bh to see the information currently in the buffer cache.
    The x$bh view exposes the buffer cache headers.
    First, create a test table and run some inserts,
    then query the DBARFIL column (relative file number of the block) and FILE# from x$bh.
    1) Create the test table
    SQL> Create table Test_buffer (a number)
    2 tablespace USERS;
    Table created.
    2) Insert into the test table
    SQL> begin
    2 for i in 1..1000
    3 loop
    4 insert into test_buffer values (i);
    5 end loop;
    6 commit;
    7 end;
    8 /
    PL/SQL procedure successfully completed.
    3) Check the object_id
    SQL> select OBJECT_id from dba_objects
    2 where object_name='TEST_BUFFER';
    OBJECT_ID
    42817
    4) Query x$bh for the DBARFIL (file number of block) entries currently in the buffer cache.
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
    TS#  FILE#  DBARFIL  DBABLK  CLASS  STATE  MODE_HELD  OBJ
    9    23     23       1297    8      1      0          42817
    9    23     23       1298    9      1      0          42817
    9    23     23       1299    4      1      0          42817
    9    23     23       1300    1      1      0          42817
    9    23     23       1301    1      1      0          42817
    9    23     23       1302    1      1      0          42817
    9    23     23       1303    1      1      0          42817
    9    23     23       1304    1      1      0          42817
    8 rows selected.
    5) Flush the buffer cache as follows and re-run the query above.
    SQL > alter system flush buffer_cache ;
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
    6) Check that the STATE column in x$bh is now 0.
    0 means a free buffer. By confirming that STATE is 0 after the flush,
    you can verify that the flush was performed manually via the command.
    Reference Documents
    <Note 251326.1>

    I am also having the same issue. Can this be addressed, or does BEA provide 'almost'
    working code for the bargain price of $80k/CPU?
    "Prashanth " <[email protected]> wrote:
    Hi all,
    I am using the wl:cache tag for caching purposes. My requirement is such that I have to
    flush the cache based on user activity.
    I have tried all the combinations, but could not achieve the desired result.
    Can somebody guide me on how we can flush the cache?
    TIA, Prashanth Bhat.

  • How to sync Outlook 2010 Contact Groups to my iPad and iPhone

    I am trying to figure out how to sync my Outlook 2010 contact groups to my iPad and iPhone.  Also, how do you manage the groups once they are on the device?  It seems they can only be selected, not edited.  I can get my Good contact group and my Windows Contacts contact group to sync, but not my Outlook 2010 groups.  Any ideas?

    Mac solution.  No idea if it works on a PC:
    Work-around:
    On your Mac:
    1.  Open a new email.
    2.  Insert the name of an existing group into one of the "To" fields as usual.
    3.  Highlight all the resulting names in the "To" entry
         [highlight the first name and, while holding Shift, highlight the last name; all will highlight].
    4.  Copy (Command-C) the highlighted emails.
    5.  Paste (Command-V) into a Pages document.
    6.  Remove the name and anything else preceding the first < and also the first < of only the first entry, which will look like this:
         [email protected]>,
    7.  Remove the > of the last email, which will look like this:
         , Jane Doe <[email protected]
    8.  Copy the entire altered list from the Pages document.
    9.  Go to Contact Book and click + to add a new entry.
    10.  Name the entry (XXX Group for iPad) or whatever you wish to call it.
    11.  Paste the entire copied altered list into any one of the email entry spaces.
    12.  Click on "Done".
    13.  Create a new email on the iPad and click on + in any one of the "To" lines and search for "XXX Group for iPad".  You cannot just key in XXX Group for iPad; you must use the + contact search.
    14.  Enter the selected iPad contact group into the "To" line.
    15.  All the entries will now appear in the "To" line and you are good to go.
    Adding or deleting email addresses from the iPad group:
    Easiest: alter the original Mac group in the usual way and repeat steps 1-14 above.  It seems impossible to edit within the email line of the iPad contact entry.
    Ridiculously simple once figured out.
    Happy emailing, but please send Bcc so that you don't spread your groups' email addresses all over the internet for spam, viruses, and other unwanted email.  Be considerate and ask your recipients to notify you if they wish to be deleted from the lists you compose.
