Manual Replication

Hi guys,
I have two servers (Oracle Database 10.2.0.3.0 Standard Edition), one production and one contingency. At the moment we take a cold backup of the production database at night and carry it over to the contingency server. I have now started copying the archived logs to the contingency server every half hour; with this scheme, if a contingency occurs, we should not lose more than 30 minutes of information.
I have been doing some testing, but I find that when I open the contingency database it is in a consistent state (because it was restored from a cold backup), and I can find no way to apply the archived logs generated after the cold backup (i.e. to perform an incomplete recovery) to the contingency database.
So the question would be:
Is it possible to apply archived logs to a database that is in a consistent state, or is my scheme not quite right?
Thanks in advance, and Merry Christmas to all.

The cost of doing this manually, plus all of the associated inevitable errors, isn't worth the trouble.
Contact DBVISIT (http://www.dbvisit.com) and take a serious look at their product.
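For reference, a sketch of what the manual approach usually looks like (hedged: the SID and the use of SET AUTORECOVERY are assumptions, not from the original posts). The key point is that the contingency database must stay MOUNTED and be recovered with a backup controlfile; once it is opened read-write it becomes consistent and will no longer accept archived logs, which matches the behaviour described in the question.

```shell
# Hypothetical sketch of applying shipped archived logs to a "manual standby".
# Keep the contingency database MOUNTED; never open it except at failover.
export ORACLE_SID=PROD        # assumed SID
sqlplus -S / as sysdba <<'EOF'
STARTUP MOUNT;
SET AUTORECOVERY ON;
-- Apply every archived log copied so far; recovery stops when it runs out.
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- Only at an actual failover, after the last available log is applied:
-- ALTER DATABASE OPEN RESETLOGS;
EOF
```

After OPEN RESETLOGS the database can no longer accept the old archives; the cycle must start again from a fresh closed-database backup.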

Similar Messages

  • ERP System ID is not appearing for Manual Replication upload in Properties

    The customer is uploading data for manual replication but is unable to select the ERP as the source system ID, since it does not appear in the drop-down menu.

    Hi,
    the prerequisite for selecting a source system is that a system, including a system instance, is defined. If it is an SAP Business Suite system, it must be defined as such. If not, you need to select a non-SAP system in the properties of the uploaded file.
    Best regards, Reinhard

  • Manual replication after initial replication

    Hi,
    I have a Hyper-V server with multiple VMs replicating over VPN to a replica Hyper-V server. I have it set up to replicate every 5 minutes. All is well. One of the VMs on my Hyper-V server is a backup server. I have backup software
    that I install on a client's server or workstation; it backs up locally and then uploads the backup over the internet to my backup server [I can seed the software's initial backup manually, much like the initial replication of the Hyper-V replica].
    So when I sign up a new client and set up the backup, I can have upwards of 60 GB of data written to the backup server VM. When that happens, it can take a week for the replication to finish.
    Is there a way to pause replication, manually export a replica to an external HD, import that replica on the replica server, and then resume replication, like when we do the first replication manually to a remote server?
    Or am I going to have to remove replication [on both servers], delete the replica VM, then enable replication again and do a manual initial replication?
    Thanks for any input!!
    Rob

    Hello,
    I appreciate your input. All of my VMs are backed up with Data Protection Manager. My question wasn't about backup; it was how I can manually replicate after the initial replication. You said two things above: 1] only the initial replication can use an external HD, and 2] export the VM to the replica site manually.
    How do I accomplish "then export the VM to the replica site manually" and then resume normal replication?
    Thanks
    Rob

  • How can I force/relaunch BP replication from CRM to ECC

    Hi everybody,
    We had some replication problems with sold-to-party BPs when creating them in CRM (transaction BP). The CRM002 ID was not populated in most cases. A job was scheduled to relaunch the BDocs automatically every hour; it seemed to be OK for a while.
    However, we still have some sold-to-party BPs that are not replicated at all to ECC (master data and sales data). The usual procedure we set up to trigger replication is checking a flag in the BP transaction, but when I uncheck the flag and check it again, the replication doesn't happen.
    Is there a way to manually force the replication of a sold-to-party BP from CRM to ECC?
    Second question on the replication subject: we also see partial replication in ECC. For example, the tax classification only partially replicates. It is as if the replication process crashes along the way, and the result is no CRM002 ID back in the BP master data in CRM. Do you have any idea where this comes from?
    Is there a transaction like SMW01 (CRM) in ECC to trace replication problems?
    Many Thanks.
    Laurent

    Hi,
    Firstly, please check whether you have any filters for the adapter object CUSTOMER_MAIN in transaction R3AC1 (these can prevent data replication).
    Next, check whether the inbound (SMQR) or outbound (SMQS) queues of CRM are deregistered.
    You can use transaction CRMM_BUPA_SEND to manually replicate a BP to the ECC system.
    No, there is no monitor in ECC like the BDoc monitor (SMW01) in CRM.
    Hope this helps. Reward if helpful!
    Thanks,
    Sudipta.

  • User Password Not Replicated during ACS Replication

    I am provisioning user accounts in ACS through a provisioning system. The provisioned ACS is set to replicate its user and group database to another ACS, with the replication interval set to 15 minutes.
    The problem is that even though the replication cycle runs every 15 minutes, if no user has been added or deleted the pre-checks determine that outbound replication is not required and the cycle completes. Hence, if a user's password changes, it is not replicated to the other ACS, and if an authentication request goes to that ACS it fails. Manual replication works fine.
    How can I make sure replication runs even when only a user password changes, and not just when a user is added or removed?

    Hi,
    What is the ACS version? Where are the user accounts you are referring to stored? I.e., are they local to the ACS server itself, or are they defined in an external user database (e.g. Active Directory, LDAP)?
    Users defined via Active Directory are dynamically mapped to a user account in ACS, and this account information is typically not replicated, since the users created are dynamic and can change properties based on configuration changes in Active Directory itself.
    Regards,
    Jagdeep

  • TT12039: Could not get port number of TimesTen replication agent on remote

    I use:
    "ttRepAdmin -duplicate -from yymhcc_active -host mt2 -setMasterRepStart -uid musicclub -pwd musicclub -remoteDaemonPort 17001 -keepCG -cacheUid musicclub -cachePwd musicclub -localhost "mt4" yymhcc_standby;"
    and get:
    TT12039: Could not get port number of TimesTen replication agent on remote host. Either the replication agent was not started, or it was just started and has not communicated its port number to the TimesTen daemon
    On the mt4 machine:
    ping mt2
    PING mt2 (10.25.71.26) 56(84) bytes of data.
    64 bytes from mt2 (10.25.71.26): icmp_seq=1 ttl=64 time=0.138 ms
    64 bytes from mt2 (10.25.71.26): icmp_seq=2 ttl=64 time=0.108 ms
    On mt2:
    ttAdmin -query yymhcc_active
    RAM Residence Policy : manual
    Manually Loaded In RAM : True
    Replication Agent Policy : manual
    Replication Manually Started : True
    Cache Agent Policy : manual
    Cache Agent Manually Started : True
    and
    repschemes;
    Replication Scheme Active Standby:
    Master Store: YYMHCC on MT2
    Master Store: YYMHCC_STD on MT4
    Excluded Tables:
    None
    Excluded Cache Groups:
    None
    Excluded sequences:
    None
    Store: YYMHCC on MT2
    Port: 21000
    Log Fail Threshold: (none)
    Retry Timeout: 30 seconds
    Compress Traffic: Disabled
    Store: YYMHCC_STD on MT4
    Port: 20000
    Log Fail Threshold: (none)
    Retry Timeout: 30 seconds
    Compress Traffic: Disabled
    1 replication scheme found.

    When I use: ttRepAdmin -duplicate -from yymhcc -host mt2 -setMasterRepStart -uid musicclub -pwd musicclub -remoteDaemonPort 17001 -keepCG -cacheUid musicclub -cachePwd musicclub -localhost "mt4" yymhcc_standby
    I get:
    TT8179: Cannot create duplicate store : store already exists
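    A hedged guess at a fix, based only on what is shown in this thread: TT8179 suggests a datastore already exists at the standby's DataStore path, perhaps left over from the earlier failed duplicate (note also that the subscriber DSN below has its DataStore line commented out, which is worth fixing). One common remedy is to destroy the stale local copy and re-run the duplicate; the path below is taken from the commented-out config line and may not match your system.

    ```shell
    # Hypothetical recovery from TT8179, run on the standby host (mt4):
    # destroy the stale/partial local datastore, then retry the duplicate
    # from the master store (yymhcc on mt2).
    ttDestroy /usr/local/timesten/TimesTen/yymhcc_std/yymhstd
    ttRepAdmin -duplicate -from yymhcc -host mt2 \
        -uid musicclub -pwd musicclub -remoteDaemonPort 17001 \
        -keepCG -cacheUid musicclub -cachePwd musicclub \
        -localhost mt4 -setMasterRepStart yymhcc_standby
    ```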
    master odbc config :
    [yymhcc_active]
    Description=For Active Master
    Driver=/usr/local/timesten/TimesTen/tt70/lib/libtten.so
    DataStore=/usr/local/timesten/TimesTen/yymhcc/yymhcc
    DatabaseCharacterSet=ZHS16GBK
    ConnectionCharacterSet=ZHS16GBK
    Authenticate=1
    OracleID=yymhcc
    OraclePWD=mc
    UID=mc
    PWD=mc
    #ipcs memory size(M)
    PermSize=8192
    Connections=2047
    #permsize*20%
    TempSize=1024
    PassThrough=1
    WaitForConnect=0
    Isolation=0
    Logging=1
    DurableCommits=0
    CkptFrequency=600
    CkptLogVolume=256
    #LogBuffSize=256000
    LogBuffSize=524288
    LogFileSize=256
    LogFlushMethod=1
    LogPurge=1
    LockLevel=0
    LockWait=5
    SQLQueryTimeout=5
    RecoveryThreads=16
    subscriber config :
    [yymhcc_standby]
    Driver=/usr/local/timesten/TimesTen/tt70/lib/libtten.so
    #DataStore=/usr/local/timesten/TimesTen/yymhcc_std/yymhstd
    DatabaseCharacterSet=ZHS16GBK
    ConnectionCharacterSet=ZHS16GBK
    Authenticate=1
    OracleID=yymhcc
    OraclePWD=mc
    UID=mc
    PWD=mc
    PermSize=8192
    Connections=2047
    #permsize*20%
    TempSize=1024
    PassThrough=1
    WaitForConnect=0
    Isolation=0
    Logging=1
    DurableCommits=0
    CkptFrequency=600
    CkptLogVolume=256
    #LogBuffSize=256000
    LogBuffSize=524288
    LogFileSize=256
    LogFlushMethod=1

  • Error 8191 while creating replication on TimesTen 7.0.1

    Hi!
    I am trying to create a simple replication scheme, details of replication scheme are as follows:
    create replication testuser1.repscheme1
    ELEMENT e TABLE testuser1.reptest
    MASTER RepTestSpider on "138.227.229.158"
    SUBSCRIBER RepTestDev on "138.227.229.64";
    I saved the above text in a repscheme.sql file ... the table reptest is created in the data stores on both machines.
    I executed these statements:
    ttIsql -f repscheme.sql RepTestSpider
    ttIsql -f repscheme.sql RepTestDev
    They executed successfully.
    When I execute:
    ttAdmin -repStart RepTestSpider
    Enter password for 'testuser1':
    RAM Residence Policy : inUse
    Manually Loaded In Ram : False
    Replication Agent Policy : manual
    Replication Manually Started : True
    Oracle Agent Policy : manual
    Oracle Agent Manually Started : False
    ttAdmin -repStart RepTestDev
    Enter password for 'testuser1':
    *** [TimesTen][TimesTen 7.0.1.0.0 ODBC Driver][TimesTen]TT8191: This store (REPTESTDEV on DEV4U4EX) is not involved in a replication scheme -- file "eeProc.c" lineno 10893, procedure "RepAdmin()"
    *** ODBC Error = S1000, TimesTen Error = 8191
    Looking forward to a reply.
    /Ahmad

    Output of ttVersion:
    Active (spdt01)
    ttVersion
    TimesTen Release 7.0.1.0.0 (64 bit Linux/x86_64) (tt70:17001) 2007-01-29T21:01:14Z
    Instance admin: spider
    Instance home directory: /usr/users/spider/TimesTen/tt70
    Daemon home directory: /usr/users/spider/TimesTen/tt70/info
    Access control enabled.
    Standby (dev4u4ex)
    ttVersion
    TimesTen Release 7.0.1.0.0 (64 bit Linux/x86_64) (tt70:17001) 2007-01-29T21:01:14Z
    Instance admin: oracle
    Instance home directory: /home/oracle/TimesTen/tt70
    Daemon home directory: /home/oracle/TimesTen/tt70/info
    Access control enabled.
    Replication scheme
    create active standby pair RepTestSpider on spdt01 , RepTestDev3 on dev4u4ex
    return receipt
    Store RepTestSpider on spdt01 port 21000 timeout 30
    Store RepTestDev3 on dev4u4ex port 20000 timeout 30;
    ttRepAdmin -dsn RepTestDev3 -duplicate -from RepTestSpider -host spdt01 -uid ttadmin -pwd somesuitablepassword -setMasterRepStart -keepCG
    Regards
    /Ahmad

  • All site replication connections from one particular server

    Hi,
    I have an AD structure with multiple sites, each site with one domain controller. But the KCC automatically creates
    connections from one particular server (DC) for all site DCs.
    Example: I have 5 sites and 5 DCs, each DC belonging to one site. I checked, and all DCs have an "automatically generated"
    connection from DC5 only. I don't want to create manual replication connections; I want the KCC to generate connections from a different server (from DC2).
    Is that possible?

    Each DC belongs to a specific site?
    To have the KCC make 'smart' connections, the easiest and preferred way is to define site links and assign an appropriate cost to them: http://technet.microsoft.com/en-us/library/cc794882%28v=ws.10%29.aspx The KCC
    will use the available connection with the lowest cost to create a replication connection.
    If you have multiple DCs in one site, you can make the DC that has to handle replication the 'preferred bridgehead': http://technet.microsoft.com/en-us/library/cc776937(v=ws.10).aspx
    Note that it is indeed best practice not to create manual replication connections but to let the KCC handle it. That ensures that if network downtime occurs on an important link, the KCC can 'fail over' to other connections while maintaining consistency.
    MCP/MCSA/MCTS/MCITP
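    If you want to nudge the topology after changing site-link costs rather than waiting for the next KCC cycle, a hedged sketch (the server name is taken from the question; run from an elevated prompt on a DC):

    ```shell
    # Hypothetical: force the KCC to recalculate the replication topology now,
    # then list the resulting inbound connections to verify.
    repadmin /kcc DC2
    repadmin /showrepl DC2
    ```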

  • Performing a manual replica of a new protected workload to an existing protection group

    I have read much about creating a manual replica of a datasource when
    creating a new protection group. Hopefully my question is simple to answer! If I
    modify an existing protection group and select manual replication, will this apply only to the new protected workload in that protection group, or will it also apply to all the existing protected workloads that are successfully synchronising?
    I am using DPM 2012 R2.
    Thanks
    Mark

    That will only apply to newly added datasources.
    Attached is a screenshot of a newly added DS in an existing PG with manual creation of the replica.
    Seidl Michael | http://www.techguy.at |
    twitter.com/techguyat | facebook.com/techguyat

  • Can't find where to set up user replication

    I am getting the following message when I go to USER ADMIN>USER MAPPING> MANUAL REPLICATION
    Set Up Replication
    No replication operation is possible. Check the system setting
    Where does the documentation talk about this?
    Thanks
    JB

    See the required setup at help.sap.com:
    http://help.sap.com/saphelp_nw04/helpdata/en/e5/156e84bd240145a0e2a6ea2667de20/frameset.htm

  • Active version of BW DataSource in Dev appears inactive in Q

    Hi All,
    This is urgent; it needs to move to prod today but I am still stuck in D!
    Does anyone know why a particular BW DataSource that is active in Dev might appear to be inactive in Q after the transport?
    We tried manual replication, transporting the replica and reapplying the transports, and recollecting the DataSource in both R/3 and BW.
    I also verified that I am collecting the active version in my transport.
    Thanks,
    DB
    Points will be assigned if I can get rid of this issue!
    Edited by: Darshana on Jul 14, 2008 7:34 PM

    Hello Darshana,
    Make sure the appropriate "Conversion of Source System Names After Transport" entries are maintained in the target systems QA and Prod.
    RSA1 > Tools >...
    Darshana,
    Follow the above path: go to the Data Warehousing Workbench in QA (transaction RSA1), and at the top you will see Tools >; follow from there.
    Hope this helps.
    Edited by: Chetanya Thanneer on Jul 14, 2008 10:15 PM

  • Logger Service keeps shutting down and restarting

    Hello,
    I have upgraded my lab UCCE call center to v9.0
    I have a simplexed environment
    Ever since, the Logger service keeps shutting down and restarting.
    Recently, the Node Manager issued an error message and shut down the server.
    I tried to check some logs, but I cannot pinpoint the problem; there are errors in different processes.
    Attached are the logs.

    Hi,
    if the NM tries to reload the server, it indicates a Problem (capital intended).
    Indeed, there's something interesting going on:
    08:29:11:822 la-rpl Trace: Starting Recovery Key for Admin table is 6717118506000.0 
    08:29:11:822 la-rpl Trace: The largestkey = 7201232308043.0 >= startkey = 6717118506000.0  
    08:29:11:823 la-rpl Trace: To correct this problem: Stop logger service. Use ICMDBA tool to sync configuration data from its partner logger database. Restart logger service. 
    08:29:11:823 la-rpl Fail: Assertion failed: largestkey < startkey.  File: ICRDB.CPP.  Line 742
    08:29:11:860 la-rpl Trace: CExceptionHandlerEx::GenerateMiniDump -- A Mini Dump File is available at logfiles\replication.exe_20130523082911824.mdmp 
    08:29:12:074 la-rpl Unhandled Exception: Exception code: 80000003 BREAKPOINT
    Fault address: 754A3219 01:00012219 C:\Windows\syswow64\KERNELBASE.dll
    Registers: EAX:00000000 EBX:00000000 ECX:00001890 EDX:E1043F00 ESI:015F8AB0 EDI:00000005
    CS:EIP:0023:754A3219 SS:ESP:002B:003CE2D8 EBP:003CE2E0 DS:002B ES:002B FS:0053 GS:002B Flags:00000246
    Call stack:
    Address   Frame
    754A3219  003CE2E0  DebugBreak+2
    6EB1459C  003CE2EC  EMSAbortProcess+C
    6EB1ACD1  003CF7F8  EMSReportCommon+1A1
    6EB1ADBB  003CF818  EMSFailMessage+2B
    013BBE5A  003CF8A8  ICRDb::ICRDb+44A
    013B2FE2  003CF9B8  main+582
    015D96C2  003CF9FC  NtCurrentTeb+174
    767333AA  003CFA08  BaseThreadInitThunk+12
    77449EF2  003CFA48  RtlInitializeExceptionChain+63
    77449EC5  003CFA60  RtlInitializeExceptionChain+36
    The short version: configuration data is corrupt.
    The longer version: the message above reports the result of a sanity check. Each configuration change creates a new row in one of the tables holding the config info, and each row contains a RecoveryKey, usually a large number incremented on insertion. The check expects the largest (most recent) key to be smaller than the new start key; here the largest key is greater than the start key. Naturally, this is something to consider for a lonely philosopher, but the rigid world of Cisco ICM does not allow such metaphysical phenomena.
    This, of course, raises an exception and the Logger service restarts. If there are too many restarts, the Node Manager kicks in and restarts the machine; this is simply a mechanism that prevents a larger extent of data corruption.
    Now, if there is an other-side Logger, fine: as the error message suggests, you can initiate manual replication (provided the other Logger database contains valid information).
    Unfortunately, as you have written, this is a side-A-only environment. This may mean:
    - accepting the situation: stopping ICM, throwing out the Logger database, recreating it, and reinstalling the Logger service,
    - poking around in various tables to check what may be saved, which may be the beginning of an adventure.
    G.

  • Warning  6226:

    Hi Chris
    Today we found some errors in ttmesg.log, shown below; it appears some application processes cannot connect to TT.
    23:43:44.85 Info:
    : 11922: 28689/0x2abb803b3630: Disconnect /opt/TimesTen/tt1122/info/TT_1122
    23:43:44.85 Info:
    : 11922: disco.c:300: Mark in-flux (now reason 3=disconnect pid 28689 nwaiters 0 ds /opt/TimesTen/tt1122/info/TT_1122) (was reason 0)
    23:43:44.85 Info:
    : 11922: maind: done with request #1149.409659
    23:43:44.85 Info:
    : 11922: maind got #1149.409660 from 28689, disconnect complete: name=/opt/TimesTen/tt1122/info/TT_1122 context=
    2abb803b3630 success=Y panic=N
    23:43:44.85 Info:
    : 11922: 28689 0x2abb803b3630: DisconnectComplete Y /opt/TimesTen/tt1122/info/TT_1122
    23:43:44.85 Info:
    : 11922: daDbDisconnectComplete by 28689: decrementing nUsers from 24, panicked=-1, trashed=-1, shmSeq=99
    23:43:44.85 Info:
    : 11922: disco.c:616: Mark not in-flux (was reason 3=disconnect pid 28689 nwaiters 0 ds /opt/TimesTen/tt1122/info/TT_1122)
    23:43:44.85 Info:
    : 11922: maind: done with request #1149.409660
    23:43:44.86 Info:
    : 11922: maind got #1148.409661 from 28963, disconnect: name=/opt/TimesTen/tt1122/info/TT_1122 context=    
    e4839b0 dbdev= panic=N shmKey=%93%02%01b
    23:43:44.86 Info:
    : 11922: 28963/0xe4839b0: Disconnect /opt/TimesTen/tt1122/info/TT_1122
    23:43:44.86 Info:
    : 11922: disco.c:300: Mark in-flux (now reason 3=disconnect pid 28963 nwaiters 0 ds /opt/TimesTen/tt1122/info/TT_1122) (was reason 0)
    23:43:44.86 Info:
    : 11922: maind: done with request #1148.409661
    23:43:44.86 Info:
    : 11922: maind got #1148.409662 from 28963, disconnect complete: name=/opt/TimesTen/tt1122/info/TT_1122 context=    
    e4839b0 success=Y panic=N
    23:43:44.86 Info:
    : 11922: 28963 0xe4839b0: DisconnectComplete Y /opt/TimesTen/tt1122/info/TT_1122
    23:43:44.86 Info:
    : 11922: daDbDisconnectComplete by 28963: decrementing nUsers from 23, panicked=-1, trashed=-1, shmSeq=99
    23:43:44.86 Info:
    : 11922: disco.c:616: Mark not in-flux (was reason 3=disconnect pid 28963 nwaiters 0 ds /opt/TimesTen/tt1122/info/TT_1122)
    23:43:44.86 Info:
    : 11922: maind: done with request #1148.409662
    23:46:11.58 Info:
    : 11922: maind got #1074.409663 from 26664, disconnect: name=/opt/TimesTen/tt1122/info/TT_1122 context=
    2abb74729040 dbdev= panic=N shmKey=%93%02%01b
    23:46:11.58 Info:
    : 11922: 26664/0x2abb74729040: Disconnect /opt/TimesTen/tt1122/info/TT_1122
    23:46:11.58 Info:
    : 11922: disco.c:300: Mark in-flux (now reason 3=disconnect pid 26664 nwaiters 0 ds /opt/TimesTen/tt1122/info/TT_1122) (was reason 0)
    23:46:11.58 Info:
    : 11922: maind: done with request #1074.409663
    23:46:11.58 Info:
    : 11922: maind got #1074.409664 from 26664, disconnect complete: name=/opt/TimesTen/tt1122/info/TT_1122 context=
    2abb74729040 success=Y panic=N
    23:46:11.58 Info:
    : 11922: 26664 0x2abb74729040: DisconnectComplete Y /opt/TimesTen/tt1122/info/TT_1122
    23:46:11.58 Info:
    : 11922: daDbDisconnectComplete by 26664: decrementing nUsers from 22, panicked=-1, trashed=-1, shmSeq=99
    23:46:11.58 Info:
    : 11922: disco.c:616: Mark not in-flux (was reason 3=disconnect pid 26664 nwaiters 0 ds /opt/TimesTen/tt1122/info/TT_1122)
    23:46:11.58 Info:
    : 11922: maind: done with request #1074.409664
    Then I monitored TT to see whether the configuration is OK somewhere.
    Command> monitor
      TIME_OF_1ST_CONNECT:         Thu Sep 11 11:04:40 2014
      DS_CONNECTS:                 228
      DS_DISCONNECTS:              112
      DS_CHECKPOINTS:              12
      DS_CHECKPOINTS_FUZZY:        12
      DS_COMPACTS:                 1
      PERM_ALLOCATED_SIZE:         61440000
      PERM_IN_USE_SIZE:            24455567
      PERM_IN_USE_HIGH_WATER:      24455567
      TEMP_ALLOCATED_SIZE:         8192000
      TEMP_IN_USE_SIZE:            631519
      TEMP_IN_USE_HIGH_WATER:      8168072
      SYS18:                       0
      TPL_FETCHES:                 0
      TPL_EXECS:                   0
      CACHE_HITS:                  0
      PASSTHROUGH_COUNT:           0
      XACT_BEGINS:                 6182251
      XACT_COMMITS:                6182143
      XACT_D_COMMITS:              0
      XACT_ROLLBACKS:              63
      LOG_FORCES:                  834
      DEADLOCKS:                   0
      LOCK_TIMEOUTS:               10
      LOCK_GRANTS_IMMED:           212188763
      LOCK_GRANTS_WAIT:            23017
      SYS19:                       2690
      CMD_PREPARES:                3199
      CMD_REPREPARES:              0
      CMD_TEMP_INDEXES:            0
      LAST_LOG_FILE:               105094
      REPHOLD_LOG_FILE:            -1
      REPHOLD_LOG_OFF:             -1
      REP_XACT_COUNT:              0
      REP_CONFLICT_COUNT:          0
      REP_PEER_CONNECTIONS:        0
      REP_PEER_RETRIES:            0
      FIRST_LOG_FILE:              104948
      LOG_BYTES_TO_LOG_BUFFER:     46144506088
      LOG_FS_READS:                9131652
      LOG_FS_WRITES:               89235
      LOG_BUFFER_WAITS:            206
      CHECKPOINT_BYTES_WRITTEN:    6577737728
      CURSOR_OPENS:                2219597
      CURSOR_CLOSES:               2219549
      SYS3:                        0
      SYS4:                        0
      SYS5:                        0
      SYS6:                        0
      CHECKPOINT_BLOCKS_WRITTEN:   3450959
      CHECKPOINT_WRITES:           1555796
      REQUIRED_RECOVERY:           1
      SYS11:                       0
      SYS12:                       1
      TYPE_MODE:                   0
      SYS13:                       0
      SYS14:                       0
      SYS15:                       0
      SYS16:                       0
      SYS17:                       0
      SYS9:                       
    We can see LOG_BUFFER_WAITS=206; I assume this would be the reason the application process disconnections happened. So I changed LogFileSize and LogBuffMB in sys.odbc.ini with the following steps:
    1. disconnect the application
    2. ttdaemonadmin stop tt_1122
    3. add LogFileSize=1024, LogBufMB=1024 in sys.odbc.ini
    4. ttdaemonadmin start tt_1122
    When issuing ttisql tt_1122, I get Warning 6226:
    [timesten@policies ~]$ ttisql -version
    TimesTen Release 11.2.2.6.0
    [timesten@policies ~]$ ttisql tt_1122
    Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    connect "DSN=tt_1122";
    Warning  6226: Ignoring value requested for first connection attribute 'LogFileSize' -- value currently in use: 64, requested value: 1024
    Warning  6226: Ignoring value requested for first connection attribute 'LogBuffMB' -- value currently in use: 65536, requested value: 1048576
    Connection successful: DSN=TT_1122;UID=timesten;DataStore=/opt/TimesTen/tt1122/info/TT_1122;DatabaseCharacterSet=ZHS16GBK;ConnectionCharacterSet=ZHS16GBK;LogFileSize=1024;DRIVER=/opt/TimesTen/tt1122/lib/libtten.so;PermSize=60000;TempSize=8000;Connections=300;TypeMode=0;OracleNetServiceName=vedb;LogBufMB=1024;
    (Default setting AutoCommit=1)
    I am not sure whether LOG_BUFFER_WAITS is the cause of the disconnections, or how to set LogFileSize and LogBuffMB correctly.
    Can you please give some advice?
    Thanks
    Li

    Thanks Chris.
    The RAM Residence Policy here is always, so before stopping the TimesTen main daemon I have to first unload the database from memory.
    [timesten@policies ~]$ ttadmin  -query tt_1122
    RAM Residence Policy            : always
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    As an aside, the messages in ttmesg.log below are normal, as you said. I found that some people once posted the same kind of errors to you, and you said it was because "The client process exited without first cleaning up database resources and properly disconnecting"; does that also apply to this case? We connect a Java application to TT and normally close the connection by killing the spid.
    09:47:42.82 Info:
    : 24676: maind got #27.3440 from 31774, disconnect: name=/opt/TimesTen/tt1122/info/TT_1122 context=
    2abb7408b820 dbdev= panic=N shmKey=%93%02%01v
    09:47:42.82 Info:
    : 24676: 31774/0x2abb7408b820: Disconnect /opt/TimesTen/tt1122/info/TT_1122
    09:47:42.82 Info:
    : 24676: disco.c:300: Mark in-flux (now reason 3=disconnect pid 31774 nwaiters 0 ds /opt/TimesTen/tt1122/info/TT_1122) (was reason 0)
    09:47:42.82 Info:
    : 24676: maind: done with request #27.3440
    09:47:42.82 Info:
    : 24676: maind got #27.3441 from 31774, disconnect complete: name=/opt/TimesTen/tt1122/info/TT_1122 context=
    2abb7408b820 success=Y panic=N
    09:47:42.82 Info:
    : 24676: 31774 0x2abb7408b820: DisconnectComplete Y /opt/TimesTen/tt1122/info/TT_1122
    09:47:42.82 Info:
    : 24676: daDbDisconnectComplete by 31774: decrementing nUsers from 35, panicked=-1, trashed=-1, shmSeq=119
    09:47:42.82 Info:
    : 24676: disco.c:616: Mark not in-flux (was reason 3=disconnect pid 31774 nwaiters 0 ds /opt/TimesTen/tt1122/info/TT_1122)
    09:47:42.82 Info:
    : 24676: maind: done with request #27.3441
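    Putting the reply's point into a sketch (hedged: the DSN is taken from this thread, and the exact sequencing should be checked against your environment). First-connection attributes such as LogFileSize and LogBufMB only take effect when the database is loaded fresh into RAM, so with a RAM residence policy of "always" the database must be unloaded before the edit:

    ```shell
    # Hypothetical sequence for changing first-connection attributes when the
    # RAM residence policy is "always". All application connections must be
    # closed first, or the unload will not complete.
    ttAdmin -ramPolicy manual tt_1122   # stop the "always" policy
    ttAdmin -ramUnload tt_1122          # unload the database from memory
    ttDaemonAdmin -stop
    # ... edit sys.odbc.ini: LogFileSize=1024, LogBufMB=1024 ...
    ttDaemonAdmin -start
    ttIsql tt_1122                      # first connection picks up the new values
    ttAdmin -ramPolicy always tt_1122   # restore the original policy
    ```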

  • Warning  6226: Ignoring value requested for first connection attribute 'TempSize'

    Hi
    I was trying to load a cache group; here is what I see. I appreciate your help.
    con1: Command> load cache group g_sdata_awt commit every 1000 rows;
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<3407>, error_message: [TimesTen]TT3407: Cannot allocate space to store ownership information for global cache groups because temporary data partition free space is below the minimum threshold of 3000000 bytes - from grid member DDGRID_DD_TTDB_1
    The command failed.
    con1: Command>
    [timesten@timesten101 info]$ ttisql -connstr "dsn=DD_TTDB;tempsize=4096";
    Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    connect "dsn=DD_TTDB;tempsize=4096";
    Warning  6226: Ignoring value requested for first connection attribute 'TempSize' -- value currently in use: 32, requested value: 4096
    Connection successful: DSN=DD_TTDB;UID=timesten;DataStore=/home/timesten/TimesTen/ttdev/DD_TTDB;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=/home/timesten/TimesTen/ttdev/lib/libtten.so;LogDir=/home/timesten/TimesTen/ttdev/logs/DD_TTDB;PermSize=8196;TempSize=4096;TypeMode=0;OracleNetServiceName=VIS1;
    (Default setting AutoCommit=1)
    Command>
    I tried :
    HOWTO : Understand Modifying TimesTen PermSize And TempSize First Connect Database Attributes (Doc ID 1081008.1)
    It did not help! Thanks a lot.
    Anil

    Hi Chris, thanks a lot again! I believe I followed the steps.
    Sorry, I am in a rush to get this done and have not been able to read all the docs.
    I am still getting the error; can you please point out where I am going wrong?
    Sorry for being a pest here; I tried hard but couldn't get far.
    [timesten@timesten101 ~]$ ttadmin -rampolicy manual DD_ttdb
    RAM Residence Policy            : manual
    Manually Loaded In RAM          : False
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    [timesten@timesten101 ~]$ ttdaemonadmin -stop;
    TimesTen Daemon stopped.
    [timesten@timesten101 ~]$ ttdaemonadmin -start;
    TimesTen Daemon startup OK.
    [timesten@timesten101 ~]$ ttstatus;
    TimesTen status report as of Mon Feb 10 08:47:28 2014
    Daemon pid 19519 port 53396 instance ttdev
    TimesTen server pid 19528 started on port 53397
    Data store /home/timesten/TimesTen/ttdev/DD_TTDB
    There are no connections to the data store
    RAM residence policy: Manual
    Data store is manually unloaded from RAM
    Replication policy  : Manual
    Cache Agent policy  : Manual
    PL/SQL enabled.
    Accessible by group ttadmin
    End of report
    [timesten@timesten101 ~]$
    [timesten@timesten101 ~]$ ttadmin -rampolicy inuse DD_ttdb
    RAM Residence Policy            : inUse
    Replication Agent Policy        : manual
    Replication Manually Started    : False
    Cache Agent Policy              : manual
    Cache Agent Manually Started    : False
    [timesten@timesten101 ~]$ ttstatus
    TimesTen status report as of Mon Feb 10 08:47:59 2014
    Daemon pid 19519 port 53396 instance ttdev
    TimesTen server pid 19528 started on port 53397
    Data store /home/timesten/TimesTen/ttdev/DD_TTDB
    There are no connections to the data store
    Replication policy  : Manual
    Cache Agent policy  : Manual
    PL/SQL enabled.
    Accessible by group ttadmin
    End of report
    [timesten@timesten101 ~]$ ttisql -connstr "dsn=DD_TTDB;uid=cacheadm;pwd=cacheadm;oraclepwd=cacheadm";
    Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.
    Type ? or "help" for help, type "exit" to quit ttIsql.
    connect "dsn=DD_TTDB;uid=cacheadm;pwd=cacheadm;oraclepwd=cacheadm";
    15019: Only the instance admin may alter the TempSize attribute
    The command failed.
    Done.
    [timesten@timesten101 ~]$
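    One hedged reading of the 15019 error above: a changed first-connection attribute such as TempSize can only be applied on a first connection made by the instance administrator (the OS user that installed TimesTen). A sketch, assuming the instance admin is the timesten OS account shown in the prompts:

    ```shell
    # Hypothetical: make the FIRST connection after the restart as the
    # instance admin so the new TempSize is applied, then reconnect as
    # the cache user for normal work.
    ttIsql -connStr "dsn=DD_TTDB"
    ttIsql -connStr "dsn=DD_TTDB;uid=cacheadm;pwd=cacheadm;oraclepwd=cacheadm"
    ```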

  • MDG-F: Error when opening the Change Request

    Hi MDG Experts
    In MDG-Finance, when I open the Change Request (CR type: 0G_S002) to "finalize processing", I get this error message: "Object ID 060822EA65231EE3A7DB2A05A0AB8E61 is not valid". As a result, the single-processing option does not pull the master record that was added in the Create Change Request step for submitting for approval.
    I am not sure what this error is or how to resolve it.
    Seeking your help.
    Thanks in advance.
    Regards
    Neelesh

    Hi Kiran
    Thanks for your prompt response. The entity search option that you mention works successfully.
    However, in the case of manual replication I am still facing a challenge.
    After executing the manual replication option, the data isn't getting replicated to the back-end ECC, even though in DRFLOG the log is all green for "Process Outbound implementation GL Master Idoc (1012)".
    Am I missing something here?
    Regards
    Neelesh
