Incremental comparison through GoldenGate Veridata

Hi Experts
I am using GoldenGate Veridata for schema comparison. It performs a complete comparison every time. Is there any option for incremental comparison?
My second issue is that it takes a huge amount of time to compare frequently changing tables. Is there any workaround?

Thanks a lot for your time and help!
First, sorry for the late reply.
Yes, I can use a SQL predicate for the comparison. But I want to know whether GoldenGate Veridata by itself takes care of any incremental policy, for DDL and DML.
For example, suppose that after the first comparison I add one column to a table, or suppose the table doesn't have a lastupdatedby column at all.
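Note that Veridata keeps no state between runs, so an "incremental" comparison has to be simulated with a row restriction (the SQL predicate mentioned above) applied to both sides of the compare pair. A minimal sketch, assuming a hypothetical LAST_MODIFIED audit column maintained by a trigger or the application (the column name and the one-day window are illustrative, not anything Veridata provides by default):

    -- Delta predicate entered for both the source and the target table of a
    -- compare pair, so that only rows touched since the last run are compared.
    WHERE LAST_MODIFIED >= SYSDATE - 1

If no such audit column exists, as in the case described above, there is nothing for the predicate to filter on, and a full comparison (or adding such a column) remains the only option.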

Similar Messages

  • Oracle GoldenGate Veridata Exception (access denied)

    Hello all,
    I have recently installed WebLogic 12.1.3 and the GoldenGate Veridata server on a Linux box successfully, and then successfully created a domain to administer Veridata. After I open the Veridata web console with the newly created WebLogic user that has all the Veridata administration privileges, the new connection wizard shows the following exception:
    OGGV-00184: Error message: com.goldengate.wallet.WalletException: OGGV-80002: Put credential for CONN..IRIS2 failed: access denied ("oracle.security.jps.service.credstore.CredentialAccessPermission" "context=SYSTEM,mapName=VERIDATA,keyName=CONN.IRIS2" "write").

    You have the wrong Java version, which I suppose is Java 8; it is not supported for 12c yet, so downgrade to Java 7. I'm not sure what flavour your OS is; I used the guide "How to Install Java 7 (JDK 7u75) on CentOS/RHEL 7/6/5", downgraded Java, restarted my WebLogic, and everything worked as expected.
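    For reference, a quick way to check and switch the active JDK on CentOS/RHEL (the JDK 7 path below is an example; adjust it to your installation):

        # Show which JVM WebLogic will pick up
        java -version
        # If several JDKs are registered, select the Java 7 entry interactively
        sudo alternatives --config java
        # Or point JAVA_HOME at the JDK 7 install before restarting WebLogic
        export JAVA_HOME=/usr/java/jdk1.7.0_75
        export PATH=$JAVA_HOME/bin:$PATH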

  • Is the GoldenGate Veridata report stored in tables?

    Hi Expert,
    I need the Veridata report in a table. Is the Veridata report stored under the Veridata user's tables or somewhere else? Or is there any parameter that enables storing this report in tables? The Veridata user creates 37 tables by default, but these are not being updated with the information.

    I searched a lot and finally concluded that GoldenGate Veridata doesn't store this information in tables; it reads the information from files.

  • PDF comparison through Javascript

    Hi,
    Is it possible to compare two PDFs through Acrobat JavaScript? I also need to save the comparison report in a separate PDF.
    I know there is an option in Acrobat to do this, but I need to run it for a batch of files.
    Thanks,
    Gopal

    You would need to loop over all the words in all the pages of both documents to compare them. This can be done using the getPageNthWord method of the Document object. It's even possible to compare the words' locations, but that is more complex.
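    For what it's worth, a minimal sketch of that approach in Acrobat JavaScript (docA and docB are Doc objects you already hold, e.g. from a batch sequence; the report-writing step is left out):

        // Walk both documents word by word and return the first mismatch.
        function firstDifference(docA, docB) {
            var pages = Math.min(docA.numPages, docB.numPages);
            for (var p = 0; p < pages; p++) {
                var words = Math.min(docA.getPageNumWords(p), docB.getPageNumWords(p));
                for (var w = 0; w < words; w++) {
                    // the third argument strips punctuation before comparing
                    if (docA.getPageNthWord(p, w, true) !== docB.getPageNthWord(p, w, true)) {
                        return { page: p, word: w };
                    }
                }
            }
            return null; // identical over the common page/word range
        }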

  • Scheduling automatic database object comparison through Oracle Change Manager

    Hi,
    I need to compare some schemas in two databases regularly to make sure that the databases stay in sync. Could you please tell me if there is any option or way to schedule an automatic comparison of the DB objects of the two databases?
    Your help would be highly appreciated.
    Many thanks

    TOAD

  • GoldenGate and Veridata Capacity Planning/sizing

    Hello,
    Are there any capacity planning guides available for Veridata and for GoldenGate?
    What are the pertinent metrics to gather to aid in capacity planning?
    Thanks,
    Mac McDermid
    Edited by: user13291419 on Oct 17, 2012 12:55 PM

    I faced the application error below after I entered a new connection with the correct GoldenGate connection and data source information and clicked Finish.
    Application Error
    javax.faces.FacesException: Error calling action method of component with id form:next
    at org.apache.myfaces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:74)
    at javax.faces.component.UICommand.broadcast(UICommand.java:106)
    at javax.faces.component.UIViewRoot._broadcastForPhase(UIViewRoot.java:90)
    at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:164)
    at org.apache.myfaces.lifecycle.LifecycleImpl.invokeApplication(LifecycleImpl.java:316)
    at org.apache.myfaces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:86)
    at javax.faces.webapp.FacesServlet.service(FacesServlet.java:106)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at com.goldengate.veridata.ui.filter.WelcomeTokenFilter.doFilter(WelcomeTokenFilter.java:61)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at org.apache.myfaces.component.html.util.ExtensionsFilter.doFilter(ExtensionsFilter.java:92)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at com.goldengate.veridata.ui.filter.SessionUserFilter.doFilter(SessionUserFilter.java:115)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at com.goldengate.veridata.ui.filter.AjaxFilter.doFilter(AjaxFilter.java:66)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at org.apache.myfaces.component.html.util.ExtensionsFilter.doFilter(ExtensionsFilter.java:122)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at com.goldengate.veridata.ui.filter.Utf8Filter.doFilter(Utf8Filter.java:28)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:563)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:879)
    at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
    at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
    at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
    at java.lang.Thread.run(Unknown Source)
    Caused by: javax.faces.el.EvaluationException: Exception while invoking expression #{addConnectionWizardUI.getNextStep}
    at org.apache.myfaces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:153)
    at org.apache.myfaces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:63)
    ... 39 more
    Caused by: java.lang.NullPointerException
    at com.goldengate.veridata.entity.ConnectionComparisonFormat.<init>(ConnectionComparisonFormat.java:13)
    at com.goldengate.veridata.entity.ConnectionDatatypeInfo.<init>(ConnectionDatatypeInfo.java:63)
    at com.goldengate.veridata.entity.Connection.<init>(Connection.java:84)
    at com.goldengate.veridata.dao.ConnectionDAOWebServices.findByName(ConnectionDAOWebServices.java:222)
    at com.goldengate.veridata.dao.ConnectionDAOWebServices.handleVersionControlInfo(ConnectionDAOWebServices.java:185)
    at com.goldengate.veridata.dao.ConnectionDAOWebServices.insert(ConnectionDAOWebServices.java:144)
    at com.goldengate.veridata.bu.ConnectionManagerImpl.insert(ConnectionManagerImpl.java:73)
    at com.goldengate.veridata.ui.AddConnectionWizardUI.createConnection(AddConnectionWizardUI.java:211)
    at com.goldengate.veridata.ui.AddConnectionWizardUI.getNextStep(AddConnectionWizardUI.java:120)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.myfaces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:129)
    ... 40 more
    Your response is highly appreciated.
    Regards

  • 10g Physical Standby to be used for Backup through RMAN

    Dear All,
    I have a 10g primary database and a standby server operating in Maximum Performance mode. I want to use the standby database for performing weekly full and daily incremental backups through RMAN. Kindly let me know whether this is possible and, if yes, how I should configure it. I will also be using compression and encryption while performing the backups.
    Best Regards,
    Asif

    Yes, that is possible.
    Because the standby database is in mount mode, it can be accessed through the SYS account.
    I use the following RMAN script for the same purpose. Please note that we have configured the flashback area, so we are not letting RMAN delete the archive files:
    run {
    sql 'alter system archive log current';
    sql 'alter system archive log current';
    allocate channel ch1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
    backup
    incremental level 0
    tag full_bk_db_webshop
    format 'df_%d_t%t_s%s_p%p'
    (database include current controlfile);
    backup archivelog all;
    }
    This script is for a FULL backup (level 0); you could also use the "backup full" command. You can easily change the level of the backup for incremental purposes.
    Starting RMAN, if configured correctly, is the same as with an open database:
    export ORACLE_SID=<STDBYSID>
    rman target / rcvcat <rman/rman>@<catalogsid>
    The user/password and the catalog SID are merely examples.
    Edited by: fjfranken on Mar 24, 2009 4:07 AM

  • Database: Auto increment in SQL Server ID

    Hi,
    How can I get an auto-incrementing ID in a SQL Server primary key column? Actually, I am writing data into a SQL Server table using LabVIEW, and I want the primary key column Test_ID to auto increment. I have three columns in the TEST_REPORT table: Test_ID, Test_No and Test_Result. I want to set Test_ID as a primary key with auto increment and write values only for the Test_No and Test_Result columns.
    Whenever I write data into the table, Test_ID is not auto incremented, even though I set the auto increment manually through SQL Server Management Studio.
    Please help me to fix this issue.

    Hi Palanivel,
    Thanks for the suggestion.
    Initially I was using the same logic (query for the Test ID first), but I have to query data from more than 20 tables and write into them, and it takes a lot of space and query time.
    I am just looking for an alternative.
    Actually, I am setting the auto increment for Test_ID through SQL Server Management Studio, but whenever I leave out the Test_ID column and write data to the rest of the columns,
    I get the error message "Insert Error: Column name or number of supplied values does not match table definition".
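    That error is the classic symptom of an INSERT without a column list: SQL Server then expects a value for every column, including the identity column. A minimal T-SQL sketch (the names mirror the post; the sizes are illustrative):

        CREATE TABLE TEST_REPORT (
            Test_ID     INT IDENTITY(1,1) PRIMARY KEY,  -- auto-incremented by SQL Server
            Test_No     INT,
            Test_Result VARCHAR(50)
        );

        -- Name the target columns so the identity column is skipped;
        -- INSERT INTO TEST_REPORT VALUES (1, 'PASS') raises the error above.
        INSERT INTO TEST_REPORT (Test_No, Test_Result) VALUES (1, 'PASS');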

  • Incremental and Full backups using WBADMIN and Task Scheduler in Server 2008 R2

    I'd like to create an automated rotating schedule of backups using wbadmin and Task Scheduler, which would back up Bare Metal Recovery, System State, and drives C: and D: to a network share in a pattern like this:
    Monday - Incremental, overwrite last Monday's
    Tuesday - Incremental, overwrite last Tuesday's
    Wednesday - Incremental, overwrite last Wednesday's
    Thursday - Incremental, overwrite last Thursday's
    Friday - Incremental overwrite last Friday's
    Saturday - Full, overwrite last Saturday's
    I need to use the wbadmin commands within Task Scheduler and do not know the required syntax to make sure everything goes smoothly; I do not want to do this through the CMD.

    I know each backup for the previous corresponding day will be replaced; how do you figure I won't be able to do incremental backups?
    Because incremental backup is based on the Volume Shadow Copy (VSS) feature, and due to a Windows Server 2008 R2 limitation (resolved in Windows 8), only one version of backed-up data can be stored in a shared folder. The result is that every time you back up some data to a shared folder, you are actually creating a full backup of it.
    Is it not supported through Task Scheduler?
    Task Scheduler is only a feature that runs the tasks you have defined for it; it actually runs the wbadmin command, which runs on an operating system with the mentioned limitation.
    I know you can do incremental backups through Windows Server Backup, but my limitation using that is that I can't set up multiple backups.
    Yes, you are right. The Windows Server Backup feature in Windows Server 2008/2008 R2 does not have this functionality (although ntbackup in Windows XP and Windows Server 2003 did). So the only workaround for this limitation is to use the Task Scheduler feature with the wbadmin command. For more information see the following article:
    http://blogs.technet.com/b/filecab/archive/2009/04/13/customizing-windows-server-backup-schedule.aspx
    So are you saying that even though I want each backup to go to a different place on the shared folder, it will replace the previous backup anyway?
    No, and because of this I said in my previous post that with some modifications and additions you can do this scenario. For example, you back up to a shared folder named Shared1 on Mondays, and you configure another backup to a second shared folder, named Shared2, on Wednesdays. When you repeat the backup operation on Shared1, only the backed-up data that resides on it will be affected; the data on Shared2 remains intact.
    Please feel free to let us know if you have any question or concern.
    Hi R.Alikhani,
    Then do you know if wbadmin supports incremental backup in Windows 8? As you said, the VSS issue is fixed in Win8. However, wbadmin has fewer options than in Windows Server; I tried a bit, but it seems it only supports full backup? PS: I use a network share; will incremental backup work if I define an iSCSI target instead? My remote backup PC is also running Win8.
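    For reference, a sketch of the kind of task/command pair involved (server, share, day and time are examples; remember that on 2008 R2 each run against a share is effectively a full backup, as explained above):

        rem One scheduled task per weekday, each writing to its own folder on the share.
        schtasks /create /tn "WB-Mon" /sc weekly /d MON /st 22:00 /ru SYSTEM /tr "wbadmin start backup -backupTarget:\\server\backups\mon -include:C:,D: -allCritical -systemState -quiet"

    The -allCritical switch pulls in everything needed for bare metal recovery, and -systemState adds the system state, which matches the rotation described in the question.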

  • GoldenGate Installation?

    Hi there,
    I need to install a GoldenGate solution for replication at my job.
    I installed GoldenGate Veridata, which controls jobs through groups of objects. Nice.
    But I really need the replication software, and I haven't found the steps to download and install the correct option.
    Please send me a link or the steps to download and install GoldenGate replication for databases.
    Thanks a lot!
    The installation that I need is on Windows 2003 R2 64-bit.
    Emerson
    Edited by: 847366 on 25/03/2011 05:54

    Okay.
    The installation guide for Oracle can be found here:
    http://download.oracle.com/docs/cd/E18101_01/doc.1111/e17799.pdf
    There are some extra steps here and there for Windows. For example:
    Itanium requirements
    To install Oracle GoldenGate on a Microsoft Itanium system, the vcredist_IA64.exe runtime library package must be installed. You can download this package from the Microsoft website. It includes the Visual Studio DLLs necessary for Oracle GoldenGate to operate on the Itanium platform. If these libraries are not installed, Oracle GoldenGate generates the following error:
    "The application failed to initialize properly (0xc0150002). Click on OK to terminate the application."
    Third-party programs
    ● Before installing Oracle GoldenGate on a Windows system, install and configure the Microsoft Visual C++ 2005 SP1 Redistributable Package. Make certain it is the SP1 version of this package, and make certain to get the correct bit version for your server. This package installs runtime components of the Visual C++ libraries. For more information, and to download this package, go to http://www.microsoft.com.

  • EM12c Create non-cumulative incremental (level 1) backups against target DB

    Is this not available? It appears that we can only create cumulative RMAN incremental backups through Cloud Control 12c (this is against an 11gR2 DB). Are non-cumulative incrementals no longer supported?

    When scheduling a backup through the wizard, the last step displays the RMAN script the wizard has generated for you. Have you tried clicking 'Edit RMAN Script' and removing the word "cumulative" from the generated script?
    I run backups using scripts stored in the recovery catalog, so I don't use the click-through backup wizard, but doesn't editing the script give you a differential incremental backup?
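    For reference, the two variants differ by a single keyword in the generated script (a level 0 base backup must already exist):

        # Differential (the default once "cumulative" is removed): copies blocks
        # changed since the most recent level 1 or level 0 backup.
        BACKUP INCREMENTAL LEVEL 1 DATABASE;
        # Cumulative: copies blocks changed since the most recent level 0 only.
        BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;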

  • Getting error while taking MaxDB trans log backup

    Hi,
    I am getting an error while taking a trans log backup of the MaxDB database for archived logs through Data Protector, as below:
    [Critical] From: OB2BAR_SAPDBBAR@ttcmaxdb "MAX" Time: 08/19/10 02:10:41
    Unable to back up archive logs: no autolog medium found in media list
    But I am able to take complete data and incremental backups through Data Protector.
    I have already enabled autolog for the MaxDB database, and it is writing the log files directly to the HP-UX file system. Now I want to back up these archived logs through Data Protector, i.e. through a trans log backup, so that after the trans log backup completes, the archived logs are deleted from the file system and I don't have to delete them manually.
    Thanks,
    Subba

    Hi Lars,
    Thanks for the reply...
    Now I am able to take an archive log backup, but the problem is that I can back up only one archive file, not the multiple archive log files generated by autolog on the file system, i.e. /sapdb/MAX/saparch.
    I have enabled autolog and it is putting the log files in the Unix directory /sapdb/MAX/saparch.
    I am using Data Protector 6.11 with a trans log backup to back up the archived files in /sapdb/MAX/saparch. When I start the trans backup session through Data Protector, it uses the archive stage command "archive_stage BACKDP-Archive LOGBackup NOVERIFY REMOVE". If /sapdb/MAX/saparch has only one archive file, it is backed up and removed successfully. But if /sapdb/MAX/saparch has multiple archive files, it gives the error below:
      Preparing backup.
                Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
                Setting environment variable 'BI_REQUEST' to value 'OLD'.
                Setting environment variable 'BI_BACKUP' to value 'ARCHIVE'.
                Constructed Backint for MaxDB call '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c'.
                Created temporary file '/var/opt/omni/tmp/MAX.bsi_out' as output for Backint for MaxDB.
                Created temporary file '/var/opt/omni/tmp/MAX.bsi_err' as error output for Backint for MaxDB.
                Writing '/sapdb/data/wrk/MAX/dbm.ebf' to the input file.
                Writing '/sapdb/data/wrk/MAX/dbm.knl' to the input file.
            Prepare passed successfully.
            Starting Backint for MaxDB.
                Starting Backint for MaxDB process '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c >>/var/opt/omni/tmp/MAX.bsi_out 2>>/var/opt/omni/tmp/MAX.bsi_err'.
                Process was started successfully.
            Backint for MaxDB has been started successfully.
            Waiting for the end of Backint for MaxDB.
                2010-09-06 03:15:21 The backup tool is running.
                2010-09-06 03:15:24 The backup tool process has finished work with return code 0.
            Ended the waiting.
            Checking output of Backint for MaxDB.
            Have found all BID's as expected.
        Have saved the Backup History files successfully.
        Cleaning up.
            Removing data transfer pipes.
                Removing data transfer pipe /var/opt/omni/tmp/MAX.BACKDP-Archive.1 ... Done.
            Removed data transfer pipes successfully.
            Copying output of Backint for MaxDB to this file.
    Begin of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
                #SAVED 1009067:1 /sapdb/data/wrk/MAX/dbm.ebf
                #SAVED 1009067:1 /sapdb/data/wrk/MAX/dbm.knl
    End of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
            Removed Backint for MaxDB's temporary output file '/var/opt/omni/tmp/MAX.bsi_out'.
            Copying error output of Backint for MaxDB to this file.
    Begin of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
    End of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
            Removed Backint for MaxDB's temporary error output file '/var/opt/omni/tmp/MAX.bsi_err'.
            Removed the Backint for MaxDB input file '/var/opt/omni/tmp/MAX.bsi_in'.
        Have finished clean up successfully.
    The backup of stage file '/export/sapdb/arch/MAX_LOG.040' was successful.
    2010-09-06 03:15:24
    Backing up stage file '/export/sapdb/arch/MAX_LOG.041'.
        Creating pipes for data transfer.
            Creating pipe '/var/opt/omni/tmp/MAX.BACKDP-Archive.1' ... Done.
        All data transfer pipes have been created.
        Preparing backup tool.
            Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
            Setting environment variable 'BI_REQUEST' to value 'OLD'.
            Setting environment variable 'BI_BACKUP' to value 'ARCHIVE'.
            Constructed Backint for MaxDB call '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c'.
            Created temporary file '/var/opt/omni/tmp/MAX.bsi_out' as output for Backint for MaxDB.
            Created temporary file '/var/opt/omni/tmp/MAX.bsi_err' as error output for Backint for MaxDB.
            Writing '/var/opt/omni/tmp/MAX.BACKDP-Archive.1 #PIPE' to the input file.
        Prepare passed successfully.
        Constructed pipe2file call 'pipe2file -d file2pipe -f /export/sapdb/arch/MAX_LOG.041 -p /var/opt/omni/tmp/MAX.BACKDP-Archive.1 -nowait'.
        Starting pipe2file for stage file '/export/sapdb/arch/MAX_LOG.041'.
            Starting pipe2file process 'pipe2file -d file2pipe -f /export/sapdb/arch/MAX_LOG.041 -p /var/opt/omni/tmp/MAX.BACKDP-Archive.1 -nowait >>/var/tmp/temp1283767880-0 2>>/var/tmp/temp1283767880-1'.
            Process was started successfully.
        Pipe2file has been started successfully.
        Starting Backint for MaxDB.
            Starting Backint for MaxDB process '/opt/omni/lbin/sapdb_backint -u MAX -f backup -t file -p SAPDB.13576.1283767878.par -i /var/opt/omni/tmp/MAX.bsi_in -c >>/var/opt/omni/tmp/MAX.bsi_out 2>>/var/opt/omni/tmp/MAX.bsi_err'.
            Process was started successfully.
        Backint for MaxDB has been started successfully.
        Waiting for end of the backup operation.
            2010-09-06 03:15:25 The backup tool process has finished work with return code 2.
            2010-09-06 03:15:25 The backup tool is not running.
            2010-09-06 03:15:25 Pipe2file is running.
            2010-09-06 03:15:25 Pipe2file is running.
            2010-09-06 03:15:30 Pipe2file is running.
            2010-09-06 03:15:40 Pipe2file is running.
            2010-09-06 03:15:55 Pipe2file is running.
            2010-09-06 03:16:15 Pipe2file is running.
            Killing not reacting pipe2file process.
            Pipe2file killed successfully.
            2010-09-06 03:16:26 The pipe2file process has finished work with return code -1.
        The backup operation has ended.
        Filling reply buffer.
            Have encountered error -24920:
                The backup tool failed with 2 as sum of exit codes and pipe2file was killed.
            Constructed the following reply:
                ERR
                -24920,ERR_BACKUPOP: backup operation was unsuccessful
                The backup tool failed with 2 as sum of exit codes and pipe2file was killed.
        Reply buffer filled.
        Cleaning up.
            Removing data transfer pipes.
                Removing data transfer pipe /var/opt/omni/tmp/MAX.BACKDP-Archive.1 ... Done.
            Removed data transfer pipes successfully.
            Copying output of Backint for MaxDB to this file.
    Begin of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
    End of output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_out)----
            Removed Backint for MaxDB's temporary output file '/var/opt/omni/tmp/MAX.bsi_out'.
            Copying error output of Backint for MaxDB to this file.
    Begin of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
    End of error output of Backint for MaxDB (/var/opt/omni/tmp/MAX.bsi_err)----
            Removed Backint for MaxDB's temporary error output file '/var/opt/omni/tmp/MAX.bsi_err'.
            Removed the Backint for MaxDB input file '/var/opt/omni/tmp/MAX.bsi_in'.
            Copying pipe2file output to this file.
    Begin of pipe2file output (/var/tmp/temp1283767880-0)----
    End of pipe2file output (/var/tmp/temp1283767880-0)----
            Removed pipe2file output '/var/tmp/temp1283767880-0'.
            Copying pipe2file error output to this file.
    Begin of pipe2file error output (/var/tmp/temp1283767880-1)----
    End of pipe2file error output (/var/tmp/temp1283767880-1)----
            Removed pipe2file error output '/var/tmp/temp1283767880-1'.
        Have finished clean up successfully.
    The backup of stage file '/export/sapdb/arch/MAX_LOG.041' was unsuccessful.
    2010-09-06 03:16:26
    Cleaning up.
        Have encountered error -24919:
            Can not remove file '/var/tmp/temp1283767880-0'.
            (System error 2; No such file or directory)
        Could not remove temporary output file of pipe2file ('/var/tmp/temp1283767880-0' ).
        Have encountered error -24919:
            Can not remove file '/var/tmp/temp1283767880-1'.
            (System error 2; No such file or directory)
        Could not remove temporary output file of pipe2file ('/var/tmp/temp1283767880-1' ).
    Have finished clean up successfully.
    Thanks,
    Subba

  • Move PSE6 catalog from XP to new computer Windows 7

    Help! I have a new computer with Windows 7 installed. The store I purchased it from did a data transfer for me, so the "My Pictures" folder was moved onto the new computer. I just downloaded the PSE6 program again, as programs were not moved.
    Now what do I do? I have an external hard drive connected to the old computer and had made a full and 3 incremental backups through PSE6. (I have 127 GB of photos, as I shoot in raw.)
    I have read some info on this site, so I am confused: should I delete the "My Pictures" folder (all the photos) from the new computer first?
    Should I do a FULL backup to that external hard drive, then plug it into the new computer and restore from that?
    I would appreciate your help (in simple, precise terms; I'm not very computer literate!) as I don't want to lose my catalog, as it has a lot of collections.
    Thank you!

    First make sure you've got a precise record of what your key is and who it's registered to. (I usually just print out a screenshot of the existing "Register" tab in my QuickTime preferences.)
    Now on the old PC, go into your QuickTime control panel. In the "Register" tab, clear the key from the QuickTime Registration section and click OK.
    Now on the new PC, reenter those details in the "Register" tab and click OK.

  • Secunia & Adobe Acrobat Pro Extended 9.3.1

    Hi,
    According to Secunia, my updated version of Adobe Acrobat Extended Pro 9.3.1 is insecure.
    I have clicked the Download Solution button on Secunia to obtain the latest Adobe Acrobat version. After installing the latest version, I still see the following:
    Secunia points to the following file: C:\Program Files\Adobe\Acrobat 9.0\Acrobat\Acrobat.dll
    And reports that my version is: 9.3.0.148
    Yet, when I fire up Adobe Acrobat Extended and go to Help | About, I see that the version is 9.3.1.
    When Adobe updates Acrobat, does it leave insecure files behind?  Is there anything I can do to ensure that my version of Adobe Acrobat is completely secure?  How do I get Secunia to recognize my updated and, I hope, secure version?
    Sincerely,
    Kevin

    Hi Bill,
    I suspect that the latest version, 9.3.1, is secure, at least for now.
    My concern is that, for whatever reason, Secunia continues to think I have a prior version (9.3.0.148).
    I had a similar problem with Flash. To solve it, I used an Adobe uninstaller to completely remove Flash and then reinstalled the latest version from the Adobe website.
    I can't do that with Adobe Acrobat Pro Extended; I need to get incremental upgrades through the upgrade process.
    I have a hunch that the upgrades don't remove all the detritus from prior versions, and that is tripping up the Secunia scan.
    I am hoping someone else has encountered the same issue and found a solution.
    Sincerely,
    Kevin

  • Equals and lazy loading

    Hi all,
    We have an application in which we use lazy loading on the client, so we can't be sure that both objects have the same object references loaded already. While implementing the equals method, I ran into the question of whether it makes sense to compare the lazily loaded objects as well, because this might lead to a very deep comparison through the whole object tree, which could perform very badly if I load these references on demand. On the other hand, not comparing the references makes for an incomplete comparison of these objects and makes the equals method very difficult to use.
    Does anyone have experience using the equals method with lazy loading?
    Thanks in advance
    Marco

    Okay, here comes a little more detail about our architecture to give you a deeper understanding. Currently we are working with an EJB3/JBoss environment, but our intention is to encapsulate this through a DAO layer. Anyway, we have typical object dependencies; for example, a customer has its orders. In my understanding, a customer with an address, a name, etc. and NO orders is not the same as a customer with the same address and the same name and, let's say, 2 orders. So I could stick with simply comparing the name and the address, but this wouldn't give an exact comparison.
    The other way would be to compare the orders as well, but what if the orders have a number of invoices behind them? The conclusion would be that I'd have to compare those as well. Additionally, I have the problem that I can have two otherwise equal customers with the same number of orders and invoices, but the first one has a null value in its orders, because they have not been loaded from the database yet, and the other one has them loaded. So my technique of comparing the lazily loaded references would not work here. Completely loading all references for a comparison cannot be the solution either.
    So it is hard to say whether they are I/O-intensive. The worst case would be that we have to load the complete object tree from the database for comparison and send it between client and server (JBoss). That is part of our question: should this be the consequence of lazy loading, or is it best practice to stick with the local attributes and let the client do the work, so that it has to walk the whole object tree if it wants to know exact equality?
    Does this give you a more detailed view of our question? I guess I am not the first to deal with this problem, so I'd like to hear your experiences about what is "wise" to do.
    Thanks for your help
    Marco
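    One common compromise, sketched below, is to base equals/hashCode on a stable identifier or business key and deliberately leave the lazily loaded associations out; whether that is "equal enough" is exactly the trade-off you describe (class and field names are illustrative):

        import java.util.Objects;

        public class Customer {
            private Long id;       // surrogate or business key
            private String name;
            private String address;
            // orders deliberately excluded from equals/hashCode:
            // it may be an uninitialized lazy proxy on the client.
            // private List<Order> orders;

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof Customer)) return false;
                // compare identity only, never the lazy object graph
                return Objects.equals(id, ((Customer) o).id);
            }

            @Override
            public int hashCode() {
                return Objects.hash(id);
            }
        }

    With this definition, two customers are equal whenever they represent the same row, regardless of which parts of their object trees happen to be loaded; "same state" checks then become a separate, explicit operation rather than a side effect of equals.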
