Replication database without 'Purge'

We want to create a reporting database where no 'Purge' operation can be performed and all records will be retained. The production DB and the reporting DB are at different geographic locations. In this scenario, what tool should be used for replication? Can we achieve this using Data Guard or Streams? Can Streams be run over the WAN? Does Streams have a lot of overhead that could hamper performance?

916090 wrote:
What needs to be achieved is that if a table is purged in the production database, it shouldn't get purged in the reporting database. Is this possible with Data Guard?
No, it is not possible with Data Guard: a standby is an image copy of the primary, so the purge would be applied there too.
You have to choose another replication option; you already have the option of Streams.
You can also look at materialized views (MViews).
HTH.

Similar Messages

  • Replication database without 'delete'

    Hi,
    We want to create a reporting database where no 'delete' operation can be performed and all records will be retained. The production DB and the reporting DB are at different geographic locations. In this scenario, what tool should be used for replication? Can we achieve this using Data Guard? Can Streams be run over the WAN? Does Streams have a lot of overhead that could hamper performance?

    This forum is for Berkeley DB replication. We do not have the expertise to answer questions about Oracle Data Guard or Streams. You may want to post your questions to one or both of the following forums:
    Data Guard
    Streams
    Paula Bingham
    Oracle

  • Reclaim memory occupied by a table in the recycle bin (dropped without purging)

    Hi all,
    In my Oracle 11g R2 database, a 3 GB table 'TEST' was dropped without the PURGE option,
    so it then showed up in the recycle bin.
    Later I issued PURGE RECYCLEBIN, so it disappeared from the recycle bin view.
    But dba_segments still shows the table's 'BIN$...==$0' segment in the data file.
    How do I remove this table from the data file, so I can get the 3 GB back?

    I couldn't reproduce this on 11.2.0.1:
    E:\Documents and Settings\aristadba>sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Apr 17 12:28:01 2012
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> conn aman/aman
    Connected.
    SQL> create table test_rbin as select * from dba_objects
      2  ;
    Table created.
    SQL> select segment_name from dba_segments where lower(segment_name) like '%test_rbin%';
    SEGMENT_NAME
    TEST_RBIN
    SQL> select segment_name, owner from dba_segments where lower(segment_name) like '%test_rbin%';
    SEGMENT_NAME
    OWNER
    TEST_RBIN
    AMAN
    SQL> drop table test_rbin;
    Table dropped.
    SQL> select segment_name, owner from dba_segments where lower(segment_name) like '%test_rbin%';
    no rows selected
    SQL> show recyclebin
    ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
    TEST_RBIN        BIN$HxpUUT4DRVec1WanHeziaw==$0 TABLE        2012-04-17:12:29:10
    SQL> select segment_name, owner from dba_segments where segment_name like '%BIN$HxpUUT4DRVec1WanHeziaw==$0%';
    SEGMENT_NAME
    OWNER
    BIN$HxpUUT4DRVec1WanHeziaw==$0
    AMAN
    SQL> purge recyclebin
      2  ;
    Recyclebin purged.
    SQL> select segment_name, owner from dba_segments where segment_name like '%BIN$HxpUUT4DRVec1WanHeziaw==$0%';
    no rows selected
    SQL>
    Show us a cut/paste from your SQL*Plus terminal showing that this is actually happening.
    HTH
    Aman....
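    If a BIN$ segment really does remain in dba_segments after PURGE RECYCLEBIN, a minimal sketch for checking and reclaiming the space is below (the recycle-bin name is illustrative, taken from the session above; PURGE DBA_RECYCLEBIN needs SYSDBA or the equivalent privilege):
    -- list any remaining recycle-bin segments and their size
    SELECT owner, segment_name, ROUND(bytes/1024/1024) AS mb
      FROM dba_segments
     WHERE segment_name LIKE 'BIN$%';
    -- purge a single dropped table by its recycle-bin name ...
    PURGE TABLE "BIN$HxpUUT4DRVec1WanHeziaw==$0";
    -- ... or, connected as SYSDBA, purge every user's recycle bin
    PURGE DBA_RECYCLEBIN;
    -- confirm the space has been returned to the tablespace
    SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
      FROM dba_free_space
     GROUP BY tablespace_name;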

  • Snapshot replication slow during purge of master table

    I have basic snapshot/materialized view replication of a big table (around 6 million rows).
    The problem I run into is that when I run a purge of the master table at the master site (delete DML), the snapshot refresh time becomes slower. After the purge, the snapshot refresh time goes back to the normal interval.
    I had thought that the snapshot does a simple select, so any exclusive lock on the table should not hinder performance.
    Has anyone seen this problem before, and if so, what was the workaround?
    The master site and the snapshot site are both 8.1.7.4, and both run Tru64 UNIX.
    I don't know if this has any relevance, but the master database uses the rule-based optimizer while the snapshot site is cost-based.
    thanks in advance

    Hello Alan,
    Your problem is to know, inside a table trigger, whether the current DML was caused
    by replication or by a normal local DML.
    One way (which I am using) to solve this in Oracle 8.1.7 is the following:
    In the trigger code you can use the functions DBMS_SNAPSHOT.I_AM_A_REFRESH(),
    DBMS_REPUTIL.REPLICATION_IS_ON() and DBMS_REPUTIL.FROM_REMOTE()
    (for details see the Oracle documentation library).
    For example, a BEFORE INSERT ... FOR EACH ROW trigger at the master side
    on a table which is an updatable snapshot:
    -- trigger, table, and column names here are illustrative
    CREATE OR REPLACE TRIGGER mark_local_dml
    BEFORE INSERT ON my_updatable_snapshot
    FOR EACH ROW
    DECLARE
        site_x          VARCHAR2(128) := DBMS_REPUTIL.GLOBAL_NAME;
        timestamp_x     DATE;
        value_time_diff NUMBER;
    BEGIN
        IF NOT DBMS_SNAPSHOT.I_AM_A_REFRESH AND DBMS_REPUTIL.REPLICATION_IS_ON THEN
            IF NOT DBMS_REPUTIL.FROM_REMOTE THEN
                IF INSERTING THEN
                    :NEW.info_text := 'Hello table; this entry was caused by local DML';
                END IF;
            END IF;
        END IF;
    END;
    /
    By the way: I have nearly the same configuration here at work, in production for about a year now.
    Kind regards
    Steffen Rvckel

  • Is It Possible to Add a Fileserver to a DFS Replication Group Without Connectivity to FSMO Roles Holder DC But Connectivity to Site DC???

    I apologize in advance for the rambling novella, but I tried to include as many details ahead of time as I could.
    I guess like most issues, this one's been evolving for a while, it started out with us trying to add a new member 
    to a replication group that's on a subnet without connectivity to the FSMO roles holder. I'll try to describe the 
    layout as best as I can up front.
    The AD only has one domain & both the forest & domain are at the 2008 R2 functional level. We've got two sites defined in 
    Sites & Services, Site A is an off-site datacenter with one associated subnet & Site B with 6 associated subnets, A-F. 
    The two sites are connected by a WAN link from a cable provider. Subnets E & F at Site B have no connectivity to Site A 
    across that WAN, only what's available through the front side of the datacenter through the public Internet. The network 
    engineering group involved refuses to route that WAN traffic to those two subnets & we've got no recourse against that 
    decision; so I'm trying to find a way to accomplish this without that if possible.
    The FSMO roles holder is located at Site A. I know that I can define a Site C, add Subnets E & F to that site, & then 
    configure an SMTP site link between Sites A & C, but that only handles AD replication, correct? That still wouldn't allow me, for example, 
    to enumerate DFS namespaces from subnets E & F, or to add a fileserver on either of those subnets as a member to an existing
    DFS replication group, right? Also, root scalability is enabled on all the namespace shares.
    Is there a way to accomplish both of these things without transferring the FSMO roles from the original DC at Site A to, say, 
    the bridgehead DC at Site B? 
    When the infrastructure was originally setup by a former analyst, the topology was much more simple & everything was left
    under the Default First Site & no sites/subnets were setup until fairly recently to resolve authentication issues on 
    Subnets E & F... I bring this up just to say, the FSMO roles holder has held them throughout the build out & addition of 
    all sorts of systems & I'm honestly not sure what, if anything, the transfer of those roles will break. 
    I definitely don't claim to be an expert in any of this, I'll be the first to say that I'm a work-in-progress on this AD design stuff, 
    I'm all for R'ing the FM, but frankly I'm dragging bottom at this point in finding the right FM. I've been digging around
    on Google, forums, & TechNet for the past week or so as this has evolved, but no resolution yet. 
    On VMs & machines on subnets E & F when I go to DFS Management -> Namespace -> Add Namespaces to Display..., none show up 
    automatically & when I click Show Namespaces, after a few seconds I get "The namespaces on DOMAIN cannot be enumerated. The 
    specified domain either does not exist or could not be contacted". If I run a dfsutil /pktinfo, nothing shows except \sysvol 
    but I can access the domain-based DFS shares through Windows Explorer with the UNC path \\DOMAIN-FQDN\Share-Name then when 
    I run a dfsutil /pktinfo it shows all the shares that I've accessed so far.
    So either I'm doing something wrong, or, even for some random large multinational company, every subnet & fileserver one wants 
    to add to a DFS namespace has to be able to contact the FSMO roles holder? Or are those ADs broken down with a child domain 
    for each site, with a FSMO roles holder for that child domain located in each site?

    Hi,
    A DC in Site B should help. I have not yet seen any article saying that a DFS client has to connect to the PDC every time it tries to access a domain-based DFS namespace.
    Please see the following article. I pasted a part of it below:
    http://technet.microsoft.com/en-us/library/cc782417(v=ws.10).aspx
    Domain controllers play numerous roles in DFS:
    Domain controllers store DFS metadata in Active Directory about domain-based namespaces. DFS metadata consists of information about the entire namespace, including the root, root targets, links, link targets, and settings. By default, root servers that host domain-based namespaces periodically poll the domain controller acting as the primary domain controller (PDC) emulator master to obtain an updated version of the DFS metadata and store this metadata in memory.
    So other DCs need to connect to the PDC for updated metadata.
    Whenever an administrator makes a change to a domain-based namespace, the change is made on the domain controller acting as the PDC emulator master and is then replicated (via Active Directory replication) to other domain controllers in the domain.
    Domain Name Referral Cache
    A domain name referral contains the NetBIOS and DNS names of the local domain, all trusted domains in the forest, and domains in trusted forests. A DFS client requests a domain name referral from a domain controller to determine the domains in which the clients can access domain-based namespaces.
    Domain Controller Referral Cache
    A domain controller referral contains the NetBIOS and DNS names of the domain controllers for the list of domains it has cached. A DFS client requests a domain controller referral from a domain controller (in the client's domain) to determine which domain controllers can provide a referral for a domain-based namespace.
    Domain-based Root Referral Cache
    The domain-based root referrals in this memory cache do not store targets in any particular order. The targets are sorted according to the target selection method only when requested from the client. Also, these referrals are based on DFS metadata stored on the local domain controller, not the PDC emulator master.
    Thus a short disconnect between the sites should be acceptable while the cache is still working on Site B.

  • Reporting database creation without purging any data

    We want to create a reporting database where no 'Purge' operation can be performed and all records will be retained. The production DB and the reporting DB are at different geographic locations. In this scenario, what tool should be used for replication? Can we achieve this using Data Guard? Can Streams be run over the WAN? Does Streams have a lot of overhead that could hamper performance?

    Can we achieve this using Data Guard? No.
    Can Streams be run over the WAN? Sure.
    Does Streams have a lot of overhead that could hamper performance? In a downstream-capture scenario, all you have to do is transport the archived logs to the other site; both capture and apply are done on the target site, so there is no overhead on the source site (however, processing can be quite demanding on the target site).
    You can write your own DDL/DML handlers according to your needs.
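    As a rough sketch of the DML-handler idea (the schema, table, and apply-process names below are illustrative, not from the thread): register an apply handler for DELETE operations that simply never executes the row LCR, so purges on the production side are discarded instead of being applied on the reporting side.
    -- handler procedure: receives the DELETE row LCR and does not execute it
    CREATE OR REPLACE PROCEDURE strmadmin.skip_delete_handler (in_any IN ANYDATA) IS
      lcr SYS.LCR$_ROW_RECORD;
      rc  PLS_INTEGER;
    BEGIN
      rc := in_any.GETOBJECT(lcr);   -- extract the LCR; no lcr.EXECUTE call,
      NULL;                          -- so the DELETE is silently dropped
    END;
    /
    -- attach the handler to DELETE operations on the replicated table
    BEGIN
      DBMS_STREAMS_ADM.SET_DML_HANDLER(
        object_name    => 'SCOTT.ORDERS',
        object_type    => 'TABLE',
        operation_name => 'DELETE',
        user_procedure => 'STRMADMIN.SKIP_DELETE_HANDLER',
        apply_name     => 'REPORTING_APPLY');
    END;
    /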

  • Material replication done without my material

    Hi Gurus,
    I created materials No. 21, 22, 23 in the backend system and carried out the required procedure for replicating all materials. But the material replication completed without my materials, i.e. 21, 22, 23.
    Could you guys help me on this?
    Thanks & Regards
    chandrasekhar

    Hello,
    Please check the config from this blog.
    /people/marcin.gajewski/blog/2007/02/05/how-to-replicate-material-master-from-r3-to-srm
    New materials created in ECC for an already replicated product category in SRM are transferred by BDoc; you can view the queues in SMQ1 & SMQ2.
    Check that there are no filters setup in R3AC1
    In SMW01, you can check the list of BDoc, for material replication BDoc type is PRODUCT_MAT, click on show BDoc message, you will see list of materials replicated via BDoc & errors if any.
    Material Replication R/3 to SRM 3.0 - material not in COMMPR01
    Hope this helps.
    Thanks
    Ashutosh

  • Can't sync without purging ??

    I have a friend who has an iPad 2, and when she tried to sync it to her PC it gave her an error message saying that she needs to purge the iPad before it will sync. That was new to me and I couldn't find anything on the net about it. Any ideas?

    "no one ever said (Apple or Verizon) that you will need iTunes 10 to sync. "
    From the box the iphone came in and Apples website:
    "Mac system requirements
    Mac computer with USB 2.0 port
    Mac OS X v10.5.8 or later
    iTunes 10.1 or later (free download from www.itunes.com/download)
    iTunes Store account
    Internet access"
    "Do you have any solutions? "
    You must update your OS.
    "Where can I complain LOUDLY to Apple and get my voice heard?"
    The requirements are clearly printed on the box. There is nothing to be done. You can leave feedback for Apple, if you feel the compulsion, at: http://www.apple.com/feedback
    "Is this an Apple ploy to force me to buy a new computer?? "
    No. This is technology advancing as it constantly does. Nothing unusual or odd at all.

  • Timesten replication with multiple interfaces sharing the same hostname

    Hi,
    we have in our environment two Sun T2000 nodes, running SunOS 5.10 and hosting a TT server currently in Release 7.0.5.9.0, replicated between each other.
    I would like some more information on the behavior of replication with respect to network reliability when using two interfaces associated with the same hostname, the one used to define the replication element.
    To make an example we have our nodes sharing this common /etc/hosts elements:
    151.98.227.5 TBMAS10df2 TBMAS10df2-10 TBMAS10df2-ttrep
    151.98.226.5 TBMAS10df2 TBMAS10df2-01 TBMAS10df2-ttrep
    151.98.227.4 TBMAS9df1 TBMAS9df1-10 TBMAS9df1-ttrep
    151.98.226.4 TBMAS9df1 TBMAS9df1-01 TBMAS9df1-ttrep
    with the following element defined for replication:
    ALTER REPLICATION REPLSCHEME
    ADD ELEMENT HDF_GNP_CDPN_1 TABLE HDF_GNP_CDPN
    CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN ConflictResTimeStamp
    REPORT TO '/sn/sps/HDF620/datamodel/tt41dataConflict.rpt'
    MASTER tt41data ON "TBMAS9df1-ttrep"
    SUBSCRIBER tt41data ON "TBMAS10df2-ttrep"
    RETURN RECEIPT BY REQUEST
    ADD ELEMENT HDF_GNP_CDPN_2 TABLE HDF_GNP_CDPN
    CHECK CONFLICTS BY ROW TIMESTAMP
    COLUMN ConflictResTimeStamp
    REPORT TO '/sn/sps/HDF620/datamodel/tt41dataConflict.rpt'
    MASTER tt41data ON "TBMAS10df2-ttrep"
    SUBSCRIBER tt41data ON "TBMAS9df1-ttrep"
    RETURN RECEIPT BY REQUEST;
    On this subject, moving from 6.0.x to 7.0.x there have been some changes I would like to understand better.
    6.0.x reported in the documentation for Unix systems:
    If a host contains multiple network interfaces (with different IP addresses),
    TimesTen replication tries to connect to the IP addresses in the same order as
    returned by the gethostbyname call. It will try to connect using the first address;
    if a connection cannot be established, it tries the remaining addresses in order
    until a connection is established.
    Now, on Solaris I don't know how to make gethostbyname return more than one interface (the documentation notes at this point:
    If you have multiple network interface cards (NICs), be sure that "multi
    on" is specified in the /etc/host.conf file. Otherwise, gethostbyname will not
    return multiple addresses).
    But I understand this is valid for Linux-based systems, not for Solaris.
    Now, if I understand the above properly, how was 6.0.x able to realize that the first interface in the list (using the same -ttrep hostname) was down and then use the other one, if gethostbyname was reporting only a single entry?
    Once upgraded to 7.0.x we realized the ADD ROUTE option was added to teach TT how to use different interfaces associated with the same hostname. In our environment we did not include this clause, but the replication still kept working fine regardless of which interface we brought down.
    Both of my questions ultimately lead to the same doubt: what algorithm does TT use to reach the replicated node with respect to the entries in /etc/hosts?
    Looking at the nodes I can see that by default both routes are being used:
    TBMAS10df2:/-# netstat -an|grep "151.98.227."
    151.98.225.104.45312 151.98.227.4.14000 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.14005 151.98.227.4.47307 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.14005 151.98.227.4.48230 1049792 0 1049800 0 ESTABLISHED
    151.98.227.5.46050 151.98.227.4.14005 1049792 0 1049800 0 ESTABLISHED
    TBMAS10df2:/-# netstat -an|grep "151.98.226."
    151.98.226.5.14000 151.98.226.4.47699 1049792 0 1049800 0 ESTABLISHED
    151.98.226.5.14005 151.98.226.4.47308 1049792 0 1049800 0 ESTABLISHED
    151.98.226.5.44949 151.98.226.4.14005 1049792 0 1049800 0 ESTABLISHED
    I tried to trace with ttTraceMon, but once I brought down one of the interfaces I did not see any reaction on either node. If you have some info it would be really appreciated!
    Cheers,
    Mike

    Hi Chris,
    Thanks for the reply. I have a few more queries on this.
    1. Using the ROUTE clause we can list multiple IPs with priority levels, so that if the highest-priority IP in the ROUTE clause is not active, replication falls back to the IP with the next priority level. But can't we use the ROUTE clause to use multiple route IPs for replication simultaneously?
    2. Can we execute multiple schemes for the same DSN and replication scheme but with different replication route IPs?
    For example:
    At present on my system I have a replication scheme running for a specific DSN with a standalone master-subscriber mechanism, with a specific route IP through VLAN-xxx used for replication.
    Now I want to create and start another replication scheme for the same DSN and replication mechanism, with a different VLAN-yyy route IP to be used for replication in parallel to the existing replication scheme, without making any changes to the pre-existing replication scheme.
    For the above scenarios, will there be any specific changes with respect to the different replication scheme mechanisms, i.e. Active Standby and standalone Master-Subscriber, etc.?
    If so, what are the steps, i.e. how do we need to change the existing scheme?
    Thanks in advance.
    Naveen
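    For reference, the ADD ROUTE clause discussed above takes roughly this shape in 7.0.x (a sketch only, reusing the hostnames and addresses from the /etc/hosts excerpt above); the PRIORITY values define the order in which the addresses are tried:
    ALTER REPLICATION REPLSCHEME
      ADD ROUTE MASTER tt41data ON "TBMAS9df1-ttrep"
                SUBSCRIBER tt41data ON "TBMAS10df2-ttrep"
        MASTERIP "151.98.227.4" PRIORITY 1
        SUBSCRIBERIP "151.98.227.5" PRIORITY 1
        MASTERIP "151.98.226.4" PRIORITY 2
        SUBSCRIBERIP "151.98.226.5" PRIORITY 2;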

  • Snapshot agent showing red mark in replication monitor!!! (Merge replication)

    A few days ago the snapshot agent started showing a red mark in Replication Monitor, without any changes having been made...
    Synchronizing from publisher to subscriber is working fine...
    Also, how do I clean up the snapshot agent job history?
    Please give me a solution...
    Thanks in advance.......

    error:
    The replication agent has not logged a progress message in 5 minutes. This might indicate an unresponsive agent or high system activity. Verify that records are being replicated to the destination and that connections to the Subscriber, Publisher, and Distributor
    are still active.
    snapshot agent showing red mark...
    Hi Baraiya Kirit,
    According to the error message, the issue could be caused by the agent being busy, or by the agent being unable to log in to one of the computers in the topology.
    To troubleshoot this issue, please perform the following steps. For more details, please review this article:
    MSSQL_ENG020554.
    • If the agent has stopped, please restart it and check if the error still occurs.
    • If this error is raised frequently because the agent is busy:
    You might need to redesign your application so that the agent spends less time processing.
    You can increase the interval at which agent status is checked using the Job Properties dialog box.
    • If an agent cannot log in to one of the computers in the topology:
    We recommend that the -LoginTimeOut value be set lower than the interval at which the replication agent checkup job runs.
    There is also a similar blog for your reference.
    http://maginaumova.com/the-replication-agent-has-not-logged-a-progress-message-in-10-minutes/
    Thanks,
    Lydia Zhang

  • Replication hub update problem with fractional replication

    I have a deployment with two DSEE 6.3 instances set up in traditional MMR mode (i.e. default replication settings, no fractional replication or similar non-default replication configuration), on two different (RHEL 4.x) hosts.
    Moreover, I have a third directory instance on a third host (RHEL 5.x) running ODSEE 11gR1. One of the DSEE 6 masters is set up for fractional replication:
    all attributes except userpassword and another user-defined attribute are replicated. Due to fractional replication limitations, the ODSEE instance is configured
    as a hub, since I want to be able to manually populate and update both "unreplicated" attributes on the ODSEE instance without having those updates propagated to the DSEE 6 instances.
    The problem is that even though I ldapmodify the ODSEE instance with the default replication account (cn=replication manager,cn=replication,cn=config),
    I get an LDAP 10 (LDAP referral) error back.
    As a workaround, I found that adding a "full rights access for replication manager" ACI and then modifying the nsslapd-state attribute of the ODSEE replica before
    a manual update works fine, but it looks complicated.
    I used to be able to update a DSEE 6 (or earlier) non-master instance with the replication account without problems, so I'd like to know what
    the best practice with ODSEE is for doing what I want to do, since using the replication account seems "complicated" and is not recommended anyway, according to the admin guide.

    Ok, thanks for clarifying.
    So I would check two things:
    1) Can pam_unix and Google authenticate against the Directory if the password is stored in clear?
    2) If the password is stored in clear on a master, and the password storage scheme is SHA or crypt on a remote replica, does the password replicate to the remote replica in the correct hash?
    If (1), you could consider setting the storage scheme to clear. It's less secure, but it might be a workable solution to a bad situation.
    If (2), you could consider expanding your topology by setting the password storage scheme to clear on your masters and allocating distinct replicas for Google and UNIX, each with the appropriate password storage scheme.
    If this all seems suboptimal to you, I agree wholeheartedly. This is not a use case that the Directory deals with well, in part because applications are not supposed to authenticate by retrieving a password string and doing an out-of-band comparison. When applications do this, they bypass the standard LDAP BIND process, i.e., the protocol mechanism that standards-based LDAP password policies use for all their policy enforcement and notification. In general, if an application doesn't BIND to authenticate, it forgoes one of the major benefits of the LDAP protocol and information model.
    More to the point, there can only be one password storage scheme for a given entry, because there can only be one password policy for a given entry. Applications that impose a requirement on the specific storage scheme are bad, as are applications that impose a specific requirement on the DIT, because, like the DIT, there can only be one storage scheme. One application that imposes such requirements can be tolerated of course, but once you have two such applications, each with conflicting requirements, you are in trouble.

  • Data Replication Basics missing

    Hi Experts,
    I'm on MDG 6.1.
    When I go through the blogs and forums, I understand that data replication can be configured as follows:
    Step 1)
    Perform the ALE settings.
    Step 2)
    In DRFIMG:
    - define the outbound (o/b) implementation
    - define the replication model (attach the o/b implementation here)
    Step 3)
    In MDGIMG:
    - when you define the business activity, mention the o/b implementation created above under the "BO Type" column.
    Is this all, or did I miss anything else?
    I have read in a forum that you also need to specify it in BRF+ -> non-user-agent step -> Pattern 04 -> Data Replication.
    Does this mean that if I don't specify this step in BRF+, the replication will not work?
    By doing all the above, will a create and a change be replicated as and when they occur?
    Are these the same steps to execute replication for all reuse objects, or do they differ?
    Please help.
    Regards
    Eva

    While configuring data replication, can I know in what cases we should use the BRF+ -> Pattern 04 -> Data Replication (non-user-agent) step, and in what cases we should not use it?
    From the previous post I understood that data replication works "without" a non-user-agent step in BRF+ (Pattern 04 -> Data Replication),
    i.e. Approvals -> Activation -> Complete triggers data replication automatically?
    Please, can any expert answer my two questions above?
    Regards
    Eva

  • Wrong replication of partner "last name" from CRM to ECC.

    Hi everyone,
    In the specific environment of our company, we create business partners in CRM using t-code 'BP'; they are then instantly replicated to ECC (we view them using t-code 'XD03').
    When we create a partner as a person, the fields 'first name' and 'last name' (which are two distinct fields) are concatenated into one single field when replicated into ECC. I have looked at the BDocs using t-code SMW01 and haven't seen anything abnormal.
    I have been trying to debug the replication process, without any success.
    Please any clues on what I should do to get the problem solved would be very much appreciated.
    Many thanks
    C.K.

    Hi Rashi,
    Sorry for the late reply.
    Go to the ECC system, find the FM "BAPI_SALESDOCU_PROXY_UPLOAD", and set a breakpoint using your RFC user id (the RFC user id used from CRM to ECC; it should have debugging rights).
    Once you create the order in CRM, it will stop at the breakpoint in ECC.
    Check the TI_EXTENSION_IN table to see whether your values are coming in.
    If not, your CRM Middleware BADI coding was not done correctly.
    Hope this helps.
    Regards,
    Bala

  • Creating a database without running the catalog.sql and catproc.sql scripts

    Hi
    I created two databases without running catalog.sql and catproc.sql. Does anyone know the drawbacks of not running them?
    BR.

    I created two databases without running catalog.sql and catproc.sql. Does anyone know the drawbacks of not running them?
    Functionality & capabilities will be severely limited.
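    If the scripts were skipped at creation time, they can normally still be run afterwards as SYSDBA; a minimal sketch (the ?/rdbms/admin prefix resolves to $ORACLE_HOME/rdbms/admin):
    SQL> CONNECT / AS SYSDBA
    SQL> -- builds the data dictionary views (DBA_*/ALL_*/USER_*, V$ synonyms)
    SQL> @?/rdbms/admin/catalog.sql
    SQL> -- builds the supplied PL/SQL packages (DBMS_*, UTL_*) and related objects
    SQL> @?/rdbms/admin/catproc.sql
    SQL> -- recompile anything left invalid
    SQL> @?/rdbms/admin/utlrp.sql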

  • SMP 2.3.4 purge cache group not working

    Hi to all,
    We have developed a native Android app on SMP 2.3.4. The app uses on-demand cache groups and, apart from some master data synchronizations, it uses several create operations to write data to an Oracle database.
    The problem is that, from the SCC, it is not possible to purge the Logically Deleted Row Count. The entries are successfully transferred from Active Row Count to Logically Deleted Row Count, but purging does nothing.
    On SMP 2.1.3 the same scenario works: logically deleted rows can be removed from the CDB by purging. The problem is that users send data to the CDB and then to the backend by submitting create operations, and the data cannot be deleted from the CDB even when it is marked as logically deleted.
    The only way to purge the logically deleted data is to delete the package users from the SCC Package Users tab.
    Any suggestions? Any workaround? Is it safe to suggest to the customer deleting the package users and then purging the data?
    Thanks

    Hi,
    I always do a sync from the device, so the entries are transferred to the logically deleted column. The problem is that purging does not have any effect. According to note 1879977:
    The rows in the SUP Cache Database with the column LOGICAL_DEL set to 1 should only be removed if the Last Modified Date (LMD) for the row in question is older than the oldest synchronization time in the system for a specific Mobile Business Object (MBO). As the oldest synchronization time is not taken into consideration during the purge or during the automatic Synchronization Cache Cleanup task, the rows in question are getting deleted.
    To my understanding, the above can never be true, because the app works as follows. The user enters the application, synchronizes to download all the necessary data to the device, and then performs some create operations. Afterwards, the user synchronizes again in order to send the data back to the backend. So the LMD will always be newer than the oldest sync time (if I understand correctly, the oldest sync time is the first sync time).
    As a result, all the data stay at the CDB as logically deleted affecting the CDB size. Attaching a screenshot
    For sure at sup 2.1.3 logically deleted entries can be deleted (version before note 1879977). Is there any safe workaround in order to delete unused entries for CDB?
    One last question: is the data created on the device side (e.g. data from create operations) automatically deleted from the device's local DB when it has been successfully transferred to the backend?
    Thanks
    EDIT: I enabled partitioning by requester and device identity on the on-demand cache groups that have CREATE operations, and I also added myDB.synchronize(); at the end of all the synchronizations, and now my data is somehow automatically purged after reaching the backend! For example, when sending data back, the cache group automatically goes from 100 entries to 0 without purging!
