GoldenGate Replication - Between Schemas On Same Host

Guys - My requirement is fairly simple. I have two schemas, GG [source] and GGR [target], on the same host, and one table called GG.SYNC_TABLE. I am having difficulty pushing data from GG to GGR.
Below are the Extract and Replicat parameter files:
EXTRACT EXT_AP1
SETENV (ORACLE_SID=ERPA4)
RMTHOST mdeagaix825, mgrport 7809
USERID GG@ERPA4, PASSWORD goldengate1
DISCARDFILE ./dirrpt/ext_ap1_discard.rpt, append, megabytes 50
RMTTRAIL ./dirdata/sa
TABLE GG.AP_AE_HEADERS_ALL;
TABLE GG.AP_AE_LINES_ALL;
TABLE GG.AP_BANK_ACCOUNTS_ALL;
TABLE GG.AP_BANK_BRANCHES;
TABLE GG.AP_CARDS_ALL;
TABLE GG.AP_CHECKS_ALL;
TABLE GG.AP_CREDIT_CARD_TRXNS_ALL;
TABLE GG.AP_EXPENSE_REPORTS_ALL;
TABLE GG.AP_EXPENSE_REPORT_HEADERS_ALL;
TABLE GG.AP_EXPENSE_REPORT_LINES_ALL;
TABLE GG.AP_EXPENSE_REPORT_PARAMS_ALL;
TABLE GG.AP_EXP_REPORT_DISTS_ALL;
TABLE GG.AP_HOLDS_ALL;
TABLE GG.AP_HOLD_CODES;
TABLE GG.AP_INVOICES_ALL;
TABLE GG.AP_INVOICE_DISTRIBUTIONS_ALL;
TABLE GG.AP_INVOICE_LINES_ALL;
TABLE GG.AP_INVOICE_PAYMENTS_ALL;
TABLE GG.AP_NOTES;
TABLE GG.AP_PAYMENT_HISTORY_ALL;
TABLE GG.AP_PAYMENT_HIST_DISTS;
TABLE GG.AP_PAYMENT_SCHEDULES_ALL;
TABLE GG.AP_POL_VIOLATIONS_ALL;
TABLE GG.AP_SELF_ASSESSED_TAX_DIST_ALL;
TABLE GG.AP_SUPPLIERS;
TABLE GG.AP_SUPPLIER_SITES_ALL;
TABLE GG.AP_SYSTEM_PARAMETERS_ALL;
TABLE GG.AP_TERMS_LINES;
TABLE GG.AP_TOLERANCE_TEMPLATES;
TABLE GG.SYNC_TABLE;
REPLICAT REP_AP1
SETENV (ORACLE_SID=ERPA4)
USERID GG@ERPA4, PASSWORD goldengate1
ASSUMETARGETDEFS
REPORTCOUNT EVERY 1 MINUTES, RATE
DISCARDFILE ./dirrpt/rep_ap1.dsc, PURGE
MAP GG.AP_AE_HEADERS_ALL, TARGET GGR.AP_AE_HEADERS_ALL;
MAP GG.AP_AE_LINES_ALL, TARGET GGR.AP_AE_LINES_ALL;
MAP GG.AP_BANK_ACCOUNTS_ALL, TARGET GGR.AP_BANK_ACCOUNTS_ALL;
MAP GG.AP_BANK_BRANCHES, TARGET GGR.AP_BANK_BRANCHES;
MAP GG.AP_CARDS_ALL, TARGET GGR.AP_CARDS_ALL;
MAP GG.AP_CHECKS_ALL, TARGET GGR.AP_CHECKS_ALL;
MAP GG.AP_CREDIT_CARD_TRXNS_ALL, TARGET GGR.AP_CREDIT_CARD_TRXNS_ALL;
MAP GG.AP_EXPENSE_REPORTS_ALL, TARGET GGR.AP_EXPENSE_REPORTS_ALL;
MAP GG.AP_EXPENSE_REPORT_HEADERS_ALL, TARGET GGR.AP_EXPENSE_REPORT_HEADERS_ALL;
MAP GG.AP_EXPENSE_REPORT_LINES_ALL, TARGET GGR.AP_EXPENSE_REPORT_LINES_ALL;
MAP GG.AP_EXPENSE_REPORT_PARAMS_ALL, TARGET GGR.AP_EXPENSE_REPORT_PARAMS_ALL;
MAP GG.AP_EXP_REPORT_DISTS_ALL, TARGET GGR.AP_EXP_REPORT_DISTS_ALL;
MAP GG.AP_HOLDS_ALL, TARGET GGR.AP_HOLDS_ALL;
MAP GG.AP_HOLD_CODES, TARGET GGR.AP_HOLD_CODES;
MAP GG.AP_INVOICES_ALL, TARGET GGR.AP_INVOICES_ALL;
MAP GG.AP_INVOICE_DISTRIBUTIONS_ALL, TARGET GGR.AP_INVOICE_DISTRIBUTIONS_ALL;
MAP GG.AP_INVOICE_LINES_ALL, TARGET GGR.AP_INVOICE_LINES_ALL;
MAP GG.AP_INVOICE_PAYMENTS_ALL, TARGET GGR.AP_INVOICE_PAYMENTS_ALL;
MAP GG.AP_NOTES, TARGET GGR.AP_NOTES;
MAP GG.AP_PAYMENT_HISTORY_ALL, TARGET GGR.AP_PAYMENT_HISTORY_ALL;
MAP GG.AP_PAYMENT_HIST_DISTS, TARGET GGR.AP_PAYMENT_HIST_DISTS;
MAP GG.AP_PAYMENT_SCHEDULES_ALL, TARGET GGR.AP_PAYMENT_SCHEDULES_ALL;
MAP GG.AP_POL_VIOLATIONS_ALL, TARGET GGR.AP_POL_VIOLATIONS_ALL;
MAP GG.AP_SELF_ASSESSED_TAX_DIST_ALL, TARGET GGR.AP_SELF_ASSESSED_TAX_DIST_ALL;
MAP GG.AP_SUPPLIERS, TARGET GGR.AP_SUPPLIERS;
MAP GG.AP_SUPPLIER_SITES_ALL, TARGET GGR.AP_SUPPLIER_SITES_ALL;
MAP GG.AP_SYSTEM_PARAMETERS_ALL, TARGET GGR.AP_SYSTEM_PARAMETERS_ALL;
MAP GG.AP_TERMS_LINES, TARGET GGR.AP_TERMS_LINES;
MAP GG.AP_TOLERANCE_TEMPLATES, TARGET GGR.AP_TOLERANCE_TEMPLATES;
MAP GG.SYNC_TABLE, TARGET GGR.SYNC_TABLE;
Extract, Replicat and Manager processes are running fine, but a commit on the source is not propagating the data across to the GGR schema. Supplemental logging is enabled. Archive logging is not [I hope it's not required]. What do you think I am missing here?
I am fairly new to GoldenGate, hence if you want me to run any commands, please provide them. Thanks much in advance.
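A few GGSCI commands usually narrow this kind of problem down (a sketch; the process names come from the parameter files above, everything else is generic):

```
GGSCI> INFO ALL
-- overall status and lag of Manager, Extract and Replicat

GGSCI> STATS EXTRACT EXT_AP1, TOTAL
GGSCI> STATS REPLICAT REP_AP1, TOTAL
-- if the Extract shows zero operations, capture is the problem;
-- if the Extract shows operations but the Replicat shows none,
-- check that the Replicat was added with the same trail (./dirdata/sa)

GGSCI> INFO EXTRACT EXT_AP1, SHOWCH
-- checkpoints: confirms the Extract is actually reading the redo logs

GGSCI> VIEW REPORT EXT_AP1
GGSCI> VIEW REPORT REP_AP1
-- look for warnings such as tables not found or mapping errors
```

Also worth verifying: the GoldenGate documentation generally expects the source database to be in ARCHIVELOG mode for classic capture, so "archiving is not enabled" may itself be the missing piece.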

Duplicate post.

Similar Messages

  • Can we implement goldengate replication between Oracle to Oracle using RMAN

    Dear All,
    I have already implemented Oracle Golden Gate Between Oracle Database 11gR2 on Linux.
The method I have been using to copy the database to the target is like this:
    select dbms_flashback.get_system_change_number() from dual;
    expdp user/password directory=backup_dir flashback_scn=1355907575 dumpfile=gg.dmp logfile=gg.log schemas=radius
    scp radius_dsl_gg.dmp [email protected]:/backup
    impdp user/password directory=backup_dir dumpfile=gg.dmp logfile=gg.log schemas=radius
I want to replace this Data Pump step with RMAN, because most of the time I don't have enough disk space to take a dump in the production server. How can I replace it with RMAN, and how can I track new records inserted in the production database, like we use flashback_scn with Data Pump? Or is there any other better solution?
    Has anyone done this, kindly share your knowledge.
    Regards, Imran

    You don't need a dump file for DataPump, you can load directly through a database link thus eliminating the space required for the intermediate dump file.
With RMAN you will be looking for a point-in-time DB restore/recover as of a specific SCN, and then you can start the Replicat from that SCN. You can do a TSPITR + TTS if the replicated data is a small subset of the entire database. If you can put the entire tablespace in read-only mode, then you can get away with only TTS as well. You configure the Extract before you start any restore/recover, and that's how you track outstanding changes.
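    A sketch of the SCN-coordinated flow (process and database names are illustrative; the SCN is the one from the expdp example above):

```
-- 1. Start capture on the source BEFORE instantiation
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> START EXTRACT ext1

-- 2. Note the instantiation SCN on the source
SQL> SELECT current_scn FROM v$database;

-- 3. Restore/recover the target with RMAN up to that SCN
RMAN> DUPLICATE TARGET DATABASE TO targetdb UNTIL SCN 1355907575;

-- 4. Start the Replicat only for changes committed after that SCN
GGSCI> START REPLICAT rep1, AFTERCSN 1355907575

-- Alternative to the dump file: load directly over a database link
impdp user/password NETWORK_LINK=source_db FLASHBACK_SCN=1355907575 SCHEMAS=radius
```

    The key point is that capture is already running before the restore, so nothing between the SCN and "now" is lost; AFTERCSN makes the Replicat skip everything the restore already contains.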

  • Goldengate replication performance

    Hi,
    This is about GoldenGate replication performance.
    I have configured GoldenGate replication between OLTP and Reporting, and the business peak occurs for only one hour.
    At that time I can see a lag on the Replicat side of around 10-15 minutes.
    The rest of the time there is no lag.
    I reviewed the AWR report of the target and I could see that all the Replicat processes are executing with an elapsed time of 0.02 or 0.01 seconds.
    However, I could see a major wait event, db file sequential read, at 65%-71% of DB time, and it has the maximum waits. Apart from this there are no major wait events contributing to % DB time (21% of DB CPU, which I believe is normal).
    I can also see a few queries being run at that peak time, since it is a reporting server; they are SELECT queries that execute for more than 15-20 minutes, especially on the high-transaction table.
    Can you please advise where I should look to resolve the lag during the peak hours?
    I believe the SELECT operation/wait event is causing the lag during those peak hours. Am I correct?
    Can you please advise from your experience.
    Thanks
    Surendran

    Hi Bobby,
    Thanks for your response.
    Please find my response as below,
    Environment details as below.
    1. Source and target DB: Oracle Database Enterprise Edition 11.2.0.4
    2. GoldenGate version: 12.1.2.0.0 (the same GoldenGate version is installed on source and target)
    3. Classic capture (CDC) is configured between the source and target GoldenGate.
    Queries and responses:
    Is there any long-running transaction on the extract side?
    No long-running transactions are seen. I can see a huge volume of transactions being generated (over 0.3M records in 30 minutes).
    Target environment information
    High-transaction DML activity is seen on only 3 tables.
    As the target is a reporting environment, I can see many SQLs querying those 3 high-transaction tables, and I can see the db file sequential read wait event spiking up to 65%-71%.
    I can also see in the AWR report that the GG sessions executing the insert/update transactions take less than 0.01/0.02 sec.
    I had set the report interval to every 10 min; I will update it to 1 min and share the report.
    My query is: is the SELECT activity on those high-transaction tables on the reporting server during that high-transaction window causing the bottleneck and the lag during peak hours?
    Or do I need to look at other areas?
    If you have any further comments/advice based on the above information, please share.
    Thanks
    Surendran.
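    To see where the lag actually accumulates (capture, trail transfer, or apply), and to cut apply cost when many small DMLs hit a few tables, something like the following may help (process names are illustrative):

```
-- Where is the lag: on the Extract, or on the Replicat apply side?
GGSCI> LAG EXTRACT ext1
GGSCI> LAG REPLICAT rep1
GGSCI> SEND REPLICAT rep1, REPORT

-- Replicat parameter file: batch small DML operations into arrays
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
```

    BATCHSQL groups similar single-row operations into array operations, which typically reduces the per-row index lookups that show up as db file sequential read on the target.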

  • TDE Wallets & Multiple Databases on same Host

    The Oracle TDE Best Practices (doc ID 130696) states this:
    Multiple databases on the same host
    If there are multiple Oracle Databases installed on the same server, they
    must access their own individual TDE wallet. Sharing the same wallet between independent instances is not supported
    and can potentially lead to the loss of encrypted data.
    If the databases share the same ORACLE_HOME, they also share the same
    sqlnet.ora file in $TNS_ADMIN. In order to access their individual wallet, the
    DIRECTORY entry for the ENCRYPTION_WALLET_LOCATION
    needs to point each database to its own wallet location:
    DIRECTORY= /etc/ORACLE/WALLETS/$ORACLE_UNQNAME
    The names of the subdirectories under /etc/ORACLE/WALLETS/ reflect
    the ORACLE_UNQNAME names of the individual databases.
    If the databases do not share the same ORACLE_HOME, they will also have their individual sqlnet.ora
    files that have to point to the individual subdirectories.
    What is the correct sqlnet.ora syntax to do this?  I currently have what is below but it doesn't seem to be correct:
    ENCRYPTION_WALLET_LOCATION =
      (SOURCE = (METHOD = FILE)
      (METHOD_DATA =
      (DIRECTORY = /local/oracle/admin/wallet/DB#1)
      (DIRECTORY = /local/oracle/admin/wallet/DB#2)

    Hi,
    You can check this: Setting ENCRYPTION_WALLET_LOCATION For Wallets Of Multiple Instances Sharing The Same Oracle Home (Doc ID 1504783.1)
    I haven't done this for multiple databases, but as per the Doc you can use syntax like:
    ENCRYPTION_WALLET_LOCATION =
      (SOURCE = (METHOD = FILE)
        (METHOD_DATA =
          (DIRECTORY = /local/oracle/admin/wallet/$ORACLE_UNQNAME)))
    Whenever you set the environment with
    export ORACLE_UNQNAME=DB#1
    it will pick up the wallet from the corresponding directory, e.g. /local/oracle/admin/wallet/DB#1
    HTH

  • How to Create Primary DB and Physical/Logical Standby DB on the same host?

    Now I have encountered an issue. I want to create one primary DB, one physical standby DB, and one logical standby DB on the same host.
    Creating this env on the same host aims to test whether we can use EM Patching DP to apply patches on the Primary/Physical/Logical DB successfully.
    I tried to set up this env but it fails. I want to know more about the issues around creating a Primary/Physical/Logical DB on the same host and how to configure them.
    Below steps is my try:
    1. Create Primary DB on the /scratch/primary_db
    2. Create Physical Db software only on the /scratch/physical_db
    3. Create Logical Db software only on the /scratch/logical_db
    4. Using EM Wizard to create physical standby database and logical standby database, and these two targets can show up on the "All Targets" Page.
    5. But when using EM Patching DP, it fails; the reason is that the listeners of the physical and logical DBs cannot be configured correctly.
    Issues:
    So I want to know how to configure the physical and logical DBs' listeners, using EM or manually.
    If the listener name of the primary DB is LISTENER, the port is 1521, and the listener.ora is under the /scratch/primary_db/network/admin directory, then how do I configure the physical and logical DBs' listener names and ports?

    Hi,
    As this is a test case, you need to create two more listeners, one for each Oracle Home (/scratch/physical_db & /scratch/logical_db); make sure they have different names and ports.
    Then add the new listeners manually using GC.
    Try it and let me know
    Regards
    Amin
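    A sketch of what the second listener could look like (hostname, port, SID and paths are placeholders; repeat the same pattern with another name/port for the logical standby Home):

```
# /scratch/physical_db/network/admin/listener.ora  (illustrative)
LISTENER_PHYS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1522)))

SID_LIST_LISTENER_PHYS =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = physdb)
      (ORACLE_HOME = /scratch/physical_db)))
```

    Start it with `lsnrctl start LISTENER_PHYS` from that Oracle Home; the unique name is what keeps it from colliding with the primary's LISTENER on 1521.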


  • Sinlge select query in diff schemas for same table(Indentical Structure)

    Scenario:
    Table XYZ is created in schema A.
    After a year, the old data from the previous year is moved to a different schema; however, in the other schema the same table name is used.
    For example:
    Schema A contains table XYZ with data for 2012.
    Schema B contains table XYZ with data for 2011.
    Table XYZ in both schemas has an identical structure.
    So can we fire a single SELECT query to read the data from both tables in an effective way?
    E.g. select * from XYZ where the date range is between 15-Oct-2011 and 15-Mar-2012.
    However, the data resides in 2 different schemas altogether.
    Creating a view is an option.
    But my problem is that there is an ORM layer (either Hibernate or Eclipse TopLink) between the application and the database.
    So the queries are formed by the ORM layer and are not hand-written.
    So I cannot use a view.
    So is there any option that would allow me to use a single query across different schemas?

    Hi,
    970773 wrote:
    So can we fire a single select query to read the data from both the tables in an effective way?
    That depends on what you mean by "effective".
    Eg select * from XYZ where date range between 15-Oct-2011 to 15-Mar-2012. However the data resides in 2 different schemas altogether.
    You can do a UNION, so the data from the two years appears together. The number of actual tables may make the query slower, but it won't change the results.
    Given that you have 2 tables, the fact that they are in different schemas doesn't matter. Just make sure the user running the query has SELECT privileges on both of them.
    Creating a view is an option.
    Is it? You seem to say it is not, below.
    But my problem, there is an ORM layer (either Hibernate or Eclipse TopLink) between the application and the database, so the queries are formed by the ORM layer and are not hand-written. So I cannot use a view.
    So creating a view is not an option. Or is it?
    So is there any option that would allow me to use a single query across different schemas?
    Anything that you can do with a view, you can do with sub-queries. A view is merely a convenience; it just saves a sub-query, so you don't have to re-code it every time you use it. Assuming you have privileges to query the base tables, you can always avoid using a view by repeating the query that defines the view in your own query. It will not be any slower.
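    As a concrete illustration of the UNION approach (the date column name `trx_date` is made up; substitute the real one):

```sql
-- One statement spanning both schemas; run as a user with SELECT on both
SELECT * FROM a.xyz
 WHERE trx_date BETWEEN DATE '2011-10-15' AND DATE '2012-03-15'
UNION ALL
SELECT * FROM b.xyz
 WHERE trx_date BETWEEN DATE '2011-10-15' AND DATE '2012-03-15';
```

    UNION ALL avoids the duplicate-elimination sort that plain UNION performs; since the two tables hold disjoint years, the results are the same and UNION ALL is cheaper.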

  • Replication between Oracle Server and MS SQL Server

    Hello,
    Does anybody know of well-known or reliable software that can do data replication between Oracle Server and Microsoft SQL Server?
    I suppose I could write my own version using Heterogeneous Services in Oracle, but I would like to know if such automated replication between Oracle and SQL Server is available commercially.
    Thank you.

    Viacheslav Ostapenko wrote:
    Sorry, Aman,
    I couldn't find any info about replication to MS SQL. Is it possible at all? Could you provide a link where we can read about this? It could be very interesting.
    Sorry Viacheslav, even I couldn't find anything for the same. I am not sure whether it can be done or not; I haven't heard of anyone in my contacts doing so. The only place where I have seen Streams being used around me is within an Oracle DB only. Maybe someone else can help if he/she has done it.
    Aman....

  • Using the Log Viewer for two systems on same host

    We have two Portal systems installed on the same host server.  The log viewer only identifies and displays the log files from one of the systems (A).  This is the case even though I started the log viewer while logged on as the <sid>adm for system (B).
    In the configuration of the Log Viewer, I do not see an option to have it add another "system" to the local host.
    How can I configure the log viewer to display logs from both systems installed on the same host?
    Thanks,
    Bob

    Bob,
    With the port you can distinguish between the servers; both portals should have unique ports on the same server.
    The relevant one for you is the P4 port.
    These settings work for remote servers:
    Name: <SID>
    Host name: <hostname of server>
    Port: <P4 port, normally 5<SysNr>04>; you can check this at http://<host>:<port>/sap/monitoring/SystemInfo under the dispatcher infos
    Connection: J2EE
    User: <admin user>
    Password: <admin user password>
    kr, achim

  • 9i and 10g on same host

    Hi folks,
    I am just trying to run 9i and 10g on the same host (Sun Solaris 5.9).
    I created 2 different Oracle users at the OS level for installation and management of those 2 instances.
    There is no problem with it, but I am just curious: would it be better to run them both under the same account,
    or is it better to have 2 separate accounts, one for each?
    I just want to hear an expert's opinion about this. Thanks in advance.

    > Would it be better to run them both with the same account, or is it better to have 2 separate accounts for each of them?
    It depends on your needs; both ways are OK. It is easier to manage if both Oracle Homes are under the same account, because then it's easier to upgrade the database, and maybe in the future you will want just one Oracle Home, and so on.
    But if, for example, you are running test and dev envs on the same machine, then it's fairly normal to separate them; then there is no dependency between the envs, and you can, for example, upgrade dev without touching test (that can be achieved simply with 2 different ORACLE_HOMEs, but with two different users as well).

  • How to reconfigure and restart GoldenGate Replication from MySQL to Oracle after a DDL operation on the source

    I successfully configured and performed GoldenGate replication from MySQL 5.1 to Oracle XE 11g using GG 11g as per
    http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/goldengate/11g/GGS_Sect_Config_UX_MSQ_to_UX_ORA.pdf
    After replication successfully started working, I added a new column to one of the test tables, TCUSTMER:
    ALTER TABLE TCUSTMER ADD COLUMN ADDRESS VARCHAR(64) DEFAULT NULL;
    And inserted another row in the same table.
    INSERT INTO TCUSTMER VALUES ('CLARK', 'CLARK BOATS', 'REDWOOD',  'CA', 'Test Addresss1');
    As soon as I did that the replication broke.
    2015-04-16 17:42:44  ERROR   OGG-00146  VAM function VAMRead returned unexpected result: error 600 - VAM Client Report <CAUSE OF FAILURE : Table Metadata Altered cause
    WHEN FAILED : While processing table map event in log processor
    Then I reverted the DDL change back to original
    ALTER TABLE TCUSTMER DROP COLUMN ADDRESS;
    and tried to run the replication again.
    But it is not starting as before.
    What should I do to move forward with the same setup?

    Hi ,
    Whenever you make a DDL change to the source table, you have to make the same change on the target side too. Then recreate the definitions file, copy it to the target side, and start the GoldenGate processes:
    1. Add the column on the target side.
    2. Recreate the definitions file and copy it to the target side.
    3. Recreate the Replicat process on the target side and try to start it.
    Hope this works! Any other suggestions, friends?
    Regards,
    Veera
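    The steps above might look like this (paths, filenames and the Replicat name are illustrative):

```
-- 1. Apply the same DDL on the Oracle target
SQL> ALTER TABLE TCUSTMER ADD (ADDRESS VARCHAR2(64) DEFAULT NULL);

-- 2. Regenerate the source-definitions file and ship it to the target
$ defgen paramfile dirprm/defgen.prm
$ scp dirdef/tcustmer.def target-host:/ggs/dirdef/

-- 3. Restart the Replicat so it picks up the new definitions
GGSCI> STOP REPLICAT rep1
GGSCI> START REPLICAT rep1
```

    The defgen parameter file is the same one used for the initial setup; only the table structure has changed, so regenerating and copying the .def file is what brings source and target metadata back in sync.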

  • OCI Error ORA-01403: no data found in oracle goldengate replication after tts instaniation

    I recently migrated our tg core system from Sun Solaris (11.1.0.7) to a Linux (11.2.0.3) environment using the GoldenGate method (transportable tablespace method used for instantiation).
    The initial replication worked with HANDLECOLLISIONS, and after monitoring until the lag was gone, I took HANDLECOLLISIONS off and noticed a discard report with OCI Error ORA-01403: no data found in two Replicats.
    I followed every step in the TTS migration steps provided by the Oracle best practice.
    Can anybody provide any clue as to how I can fix this issue?
    Thank you in advance.

    The Extract and Replicat are at schema level.
    Do I have to do anything else for replication at schema level?
    Basic trandata logging is enabled on the source.
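    A couple of GGSCI checks that may help here (the schema name and login are placeholders):

```
GGSCI> DBLOGIN USERID gguser, PASSWORD ********
GGSCI> INFO SCHEMATRANDATA myschema
-- schema-level supplemental logging; for schema-level replication this is
-- usually preferred over per-table ADD TRANDATA

GGSCI> INFO TRANDATA myschema.*
-- shows which individual tables have supplemental logging enabled
```

    ORA-01403 on a Replicat typically means an UPDATE or DELETE found no matching row on the target, so comparing row counts between source and target for the discarded tables is another quick check.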

  • Database replication between 2ACSs (V4.0 & V4.1)

    Hi All,
    1. ACS in production in the N/W: Release 4.0(1) Build 44
    2. Recently installed another ACS server for backup purposes; it currently has no data: Release 4.1(1) Build 23
    Now...
    a. Will it be possible to (auto)replicate between these two versions?
    b. Will someone provide steps/links to configure replication, and any required configuration changes to make sure (1) is primary and (2) is backup [in case the primary fails]?
    Thanks in advance.
    MS

    Cisco does not recommend doing replication between different versions of ACS.
    I have also seen dissimilar versions cause problems like database corruption, though it might work in some cases.
    To be on the safe side, keep both ACS on the same version and only then replicate.
    Following link can help you configure replication:
    http://www.cisco.com/univercd/cc/td/doc/product/access/acs_soft/csacs4nt/acs41/user/scadv.htm#wp756476
    For configuring a backup ACS in case the primary fails, you need to configure a backup server on the AAA client (router, switch), because only the AAA client will forward the request to the secondary server when the primary fails.
    ~Rohit

  • TCP delay on same host

    Hi there. I have two TCP applications running on the same host, and one app needs to periodically send messages to the other at very short intervals. I am noticing an 80 to 100 millisecond delay between when the sender does a send() and when the receiver's select() indicates the message has been received. It is very important for our application that there be as little latency as possible, so 80-100 ms is way too much. I cannot understand why this is so because, as I said, both processes are on the same host.
    Do I need to tune some TCP parameters or do something special in setting up the sockets to avoid this delay? Any help or hints would be greatly appreciated!
    [In case it helps: 1) if a message is transmitted after a gap of 1 second or greater, the receiver gets the message immediately without the latency mentioned above  2) the two processes are binding to the IP address of their host and not "localhost" or INADDR_ANY].
    Thanks a lot in advance.
    Sam.

    Hi gp, and thank you very much for responding to this unusual problem.
    - Switch ports to the PCs are configured as portfast.
    - Switch ports between the two Catalyst switches are not configured (default).
    - I didn't use the 'switchport access' command since they are default layer 2 interfaces. Would the 'switchport access vlan 1' command make any difference?
    - I looked at the port status and confirmed the connection is 100 Mbps full duplex.
    The unusual issue is: ping, UDP, and multicast show up in a very short time after I re-plug the uplink. That proves all ports are in forwarding state. Only TCP shows up with a delay, which doesn't occur on a $200 unmanaged switch??
    Thanks in advance for any suggestions

  • Multiple vDCs same host with no physical DC?

    I have a physical server running 2k8 R2 with Hyper-V. Unfortunately, for what it's worth, it is not able to run 2k12. It is the only physical server running. I would like to create a virtual domain controller; however, I have no other DCs,
    physical or virtual. It would be the only domain controller. Is there any value in having 2 vDCs on that same host, or is it pointless since they are both running on the same host, therefore providing little actual redundancy?

    If you have only one domain controller, the problem with it being in an earlier hypervisor goes away. The issue with 2008 R2 as a host is that it doesn't process the VM-Generation ID attribute. If a virtual domain controller is reverted to a snapshot, it could cause a USN rollback. But if there's only one domain controller, Active Directory cannot become inconsistent because there are no replication partners. In that case, the major fears with this go away. You still shouldn't be performing snapshots with even one virtual DC, but the effects are no worse than restoring from backup. You'll be fine with 2k8r2 and a single DC until such time as you can refresh your hardware.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."
