OATM Migration utility fails on LONG columns

Execute Migration Commands
Migration processes for tables with LONG and LONG RAW columns do not start in the background, and no log is generated.
In FNDTSMIG.log I found "sh: syntax error at line 1: `(' unexpected"
Any ideas?

Hi,
See if this document helps:
Running OATM Migration Utility Fails With "sh: syntax error at line 1: `(' unexpected " [ID 552302.1]
Regards
Yoonas

Similar Messages

  • OATM (Oracle Application Tablespace Migration) Utility is taking forever.

    Hi All,
    I am not sure how to finish the OATM migration utility in a timely manner. I can't afford days of downtime for the OATM migration. Is there any advice on doing this migration? I am running the utility right now and it is progressing at about 2% a day. I am still waiting at the 4% done stage. Not sure how long it will take to finish?
    Thanks,
    -Hansal
    Edited by: user10751491 on Jun 26, 2012 9:10 AM

    Please post the details of the application release, database version and OS along with the hardware resources you have on the database server.
    What is the procedure you follow to migrate to OATM?
    Are you trying this in a test instance?
    Have you reviewed all OATM docs? -- https://forums.oracle.com/forums/search.jspa?threadID=&q=OATM&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    My OATM Migration Appears to Have Hung. How Can I Monitor Its Progress? [ID 1073783.1]
    Oatm Conversion Is Hanging While Generating Migration Command [ID 445020.1]
    Oatm Migration Is Running Very Long On Fnd_lobs_ctx Index For More Than 35hrs, How to speed Up This OATM Migration Step? [ID 1362261.1]
    Thanks,
    Hussein
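A quick way to check whether the migration is actually moving is to count commands per status in the table the utility drives itself from. A minimal sketch, assuming the FND_TS_MIG_CMDS table and MIGRATION_STATUS column quoted later in this thread collection:

```sql
-- Rough OATM progress: commands per state. GENERATED = not yet run,
-- SUCCESS = completed, ERROR = needs attention. Re-run periodically;
-- if the GENERATED count shrinks, the migration is still progressing.
SELECT migration_status, COUNT(*)
FROM   fnd_ts_mig_cmds
GROUP  BY migration_status;
```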

  • 9.3.1 Migration Utility gives "Project does not exists" error in log

    In preparation for using Hyperion Reporting and Analysis 9.3.1, I have installed the following on a Linux test server:
    --Oracle App Server 10.1.3
    --Oracle Database 10g XE
    --Hyperion shared services 9.3.1
    --Hyperion UI Services 9.3.1
    --Hyperion SQR Production Reporting 9.3.1
    --Hyperion Reporting and Analysis, Financial Reporting 9.3.1
    Everything is up and running. I created a new user account in Shared Services to administer the system and attempted to give it full access by provisioning it with all the available roles. I received an error message stating:
    "OracleJSP: An error occurred. Consult your application/system administrator for support. Programmers should consider setting the init-param debug_mode to "true" to see the complete exception message."
    I'm investigating that now, but the roles seemed to be assigned to the user anyway.
    When attempting to migrate from HPSu 8.5.2 to Reporting and Analysis 9.3.1 using the 9.3.1 Linux x86 Migration Utility (V11333-01 from edelivery.oracle.com), I get the following error message in a test migration:
    The migration process has been failed on 'User Defined Roles' migration step
    I set the log level to debug and looked it over and here are the error details:
    [09 16 10:49:29,080] [ERROR] The migration process has been failed on 'User Defined Roles' migration step
    [09 16 10:49:29,081] [ERROR] com.hyperion.interop.lib.OperationFailedException: Project does not exists by this id '%201252935352214.11'.
    com.hyperion.pmt.migrator.common.CommonException
    at com.hyperion.pmt.migrator.userprovision.tool.MigrationManagerImpl.doAssignDefaultGroup(MigrationManagerImpl.java:864)
    at com.hyperion.pmt.migrator.userprovision.tool.MigrationManagerImpl.startMigration(MigrationManagerImpl.java:751)
    at com.hyperion.pmt.migrator.userprovision.ui.panels.MigrationPanel$ProcessRunner.run(MigrationPanel.java:304)
    Caused by: com.hyperion.interop.lib.OperationFailedException: Project does not exists by this id '%201252935352214.11'.
    at com.hyperion.interop.lib.helper.AdminProjectHelper.getProjectNameByID(Unknown Source)
    at com.hyperion.interop.lib.CMSClient.getProjectNameByID(Unknown Source)
    at com.hyperion.pmt.migrator.userprovision.tool.MigrationManagerImpl.getDefaultGroupBundle(MigrationManagerImpl.java:824)
    at com.hyperion.pmt.migrator.userprovision.tool.MigrationManagerImpl.doAssignDefaultGroup(MigrationManagerImpl.java:861)
    ... 2 more
    We have only one user-defined role in the source system, and I don't think it is causing this issue: it is inactivated, and "User Roles" was not one of the selected objects to migrate.
    I would like to try taking the role completely out of the source system if possible, preferably through SQL applied to the V8 tables or through the SDK.
    The error appears to lie in the migration task following the user role provisioning, doAssignDefaultGroup, and is caused by getProjectNameByID.
    The steps I have taken (for sanity sake) so far are:
    --I've removed all associations with this user defined role in the source system (users, groups, and roles) using the administrative interface on the web.
    --I've unchecked 'User Defined Roles' on the list objects to migrate from the source to target systems in the Migration Utility, but I still get the error.
    It looks like it is trying to provision a user defined role in the target system, but it is not finding a default project by the id in the error message.
    Are there any Shared Services steps in the target system that need to be taken besides creating a manager type user to use as a log in for the Migration Tool?

    I turned on the jsp debug init-param for the system for more information about the error. I get the following:
    Request URI:/interop/jsp/css/CSSTemplate.jsp
    Exception:
    OracleJSP:oracle.jsp.parse.JspParseException: /jsp/css/provisionSummary.jsp: Line # 119, <bean:define id="appId" name="appName" property="key"/>
    Error: Tag attempted to define a bean which already exists: appId
    When I attempt to provision the user I created for administrative purposes, I also see the following:
    --from the file SharedServices_Metadata.log I see the error:
    2009-09-18 15:55:32,399 [AJPRequestHandler-HTTPThreadGroup-6] ERROR com.hyperion.eie.common.cms.CMSFacade - org.apache.slide.structure.ObjectNotFoundException: No object found at /users/admin
    --from SharedServices_Admin.log
    2009-09-17 14:49:20,583 [Thread-13] ERROR com.hyperion.cas.server.CASAppRegistrationHandler.loadApplicationsFromCMS(CASAppRegistrationHandler.java:430) - AuthorizationException occured when loading applications : Refer log in debug mode for details
    How does one set these logs to debug mode?

  • Migration Assistant fails after showing 157 KB or 0 KB to transfer

    I spent 4 days trying to get OS X Migration Assistant to successfully migrate between two desktop machines, both running Mavericks 10.9.3.
    The two machines were connected by an ethernet cable, and all migrations would fail after 16-24 hours, with the progress bar stalling out at "Less than one minute remaining".
    Before every failed transfer, Migration Assistant would inaccurately show "157 KB selected" for transfer if the "Computer & Network Settings" option was checked, or "0 KB selected" for transfer if only the "Applications" or "Documents & Data" options were checked.
    I finally realized that the migrations were failing because I was opening and clicking through Migration Assistant on both the source and target machines at around the same time.
    The eventual solution was to open Migration Assistant on the source machine first, and click through as far as I could until the source machine was actively looking for other machines on the network.
    As long as I waited until then to open Migration Assistant on the target machine, it would correctly calculate the size of the files to be transferred and the migration would succeed.
    So, the order in which you open Migration Assistant seems to make a big difference: source first, then target.
    My problem is solved, but only after several days of hassle, so I wanted to post this in case other people had a similar issue.
    (I tried many suggested fixes that did not work, including reinstalling OS X on the target machine, running disk repair and fixing permissions on both source and target, changing file sharing on the source machine to include the whole drive, trying to migrate Applications and Documents & Data separately, turning off wifi before the transfer, etc.)

    Thanks for posting this.  I was having the same problem.  You saved me hours, maybe days, of frustration!

  • Imp-00020 long column too large for column buffer size (22)

    Hi friends,
    I have exported (through the conventional path) a complete schema from Oracle 7 (SCO Unix platform).
    Then I transferred the export file from the Unix server to a laptop (Windows platform).
    And tried to import this file into Oracle 10.2 on Windows XP.
    (Database configuration of Oracle 10g:
    User tablespace 2 GB
    Temp tablespace 30 MB
    Rollback segments of 15 MB each
    Undo tablespace of 200 MB
    SGA 160 MB
    PGA 16 MB)
    All the tables imported successfully except 3 tables which have around 1 million rows each.
    The error messages during import for these 3 tables are as follows:
    imp-00020 long column too large for column buffer size (22)
    imp-00020 long column too large for column buffer size (7)
    The main point here is that none of the 3 tables has a LONG or TIMESTAMP column (only VARCHAR/NUMBER columns are there).
    To solve the problem I tried the following options:
    1. Increased the buffer size up to 20480000/30720000.
    2. Commit=Y Indexes=N (in this case it does not import the complete tables).
    3. First exported the table structures only, and then the data.
    4. Created the tables manually and tried to import them.
    But all attempts failed;
    I am still getting the same errors.
    Can someone help me with this issue?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]
    Edited by: user462250 on Oct 14, 2009 1:57 AM

    Thanks, but this note is for older releases, 7.3 to 8.0...
    In my case both the export and the import were done on an 11.2 database.
    I didn't use Data Pump because we use the same processes for different releases of Oracle, some of which do not support Data Pump. By the way, shouldn't EXP / IMP work anyway?
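For reference, the options discussed above combine on one conventional-path command line. A hedged sketch only; the connect string, file names, and table list are placeholders, not values from this thread:

```shell
# Conventional-path import with an enlarged array buffer; COMMIT=Y
# commits per array insert so rollback/undo usage stays small.
# All names below are placeholders.
imp system/manager@orcl FILE=schema.dmp LOG=imp.log \
    BUFFER=30720000 COMMIT=Y IGNORE=Y \
    TABLES="(tab1,tab2,tab3)"
```

Note the quotes around the table list: unquoted parentheses are exactly what produces the `sh: syntax error at line 1: `(' unexpected` class of error under the Bourne shell.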

  • No progress seen in OATM migration

    Hi Gurus,
    I am performing OATM migration on my 11.5.10.2 applications with 10.2.0.4 database.
    I see the status is 99.96% completed -
    Generating Migration progress report for all schemas. Please wait...
    Migration Progress Report
    Report Date : April 13, 2011                               Page: 1

    Total No.       Commands      Commands       % completion
    of commands     in error      in success     of migration
    118,161         28            118,112        99.96%
    SQL> select count (*) from fnd_ts_mig_cmds where MIGRATION_STATUS='GENERATED';
    COUNT(*)
    5067
    I see there are still 5067 commands in GENERATED status which are yet to be executed. I have been waiting for almost a day and this count of 5067 never decreases.
    I even tried restarting the queue TBLMIG_MESSAGEQUE and bouncing the DB, then restarted the OATM utility... no luck though.
    Can someone help me with this, please?
    Thanks,
    Khan

    Hi Helios,
    The issue was fixed.
    All the leftover commands in GENERATED status were to enable constraints, policies, triggers, queues, etc.
    I executed the post-migration script to enable constraints, policies, triggers, queues, etc., which brought the count down.
    Thanks for the response Helios.
    Regards,
    Khan.
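For anyone hitting the same plateau, the leftover commands can be inspected before running the post-migration step. A minimal sketch using only the column named in the thread; other column names in this table vary by release, hence SELECT *:

```sql
-- Show the commands still waiting to run; in this thread they turned
-- out to be enable-constraint/policy/trigger/queue commands that the
-- post-migration script executes.
SELECT *
FROM   fnd_ts_mig_cmds
WHERE  migration_status = 'GENERATED';
```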

  • EBS OATM Migration

    Has anyone migrated a 300 GB or larger database to OATM? Are there any known performance issues after the database has been migrated to OATM?

    When running the tablespace creation scripts within the migration utility, crtts.sql is generated containing the SQL statements below. It looks odd, since it creates only one tablespace and alters the others. Shouldn't it create 12 tablespaces?
    connect SYSTEM/&1@&2
    exec DBMS_APPLICATION_INFO.SET_MODULE('TS_CREATE_TABLESPACE', NULL);
    CREATE TABLESPACE APPS_TS_TOOLS
    DATAFILE '/u01/APPS_TS_TOOLS01.dbf'
    SIZE 100 M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024 K
    SEGMENT SPACE MANAGEMENT AUTO;
    ALTER TABLESPACE APPS_TS_TX_DATA ADD DATAFILE '/u01/APPS_TS_TX_DATA04.dbf' SIZE 100 M;
    ALTER TABLESPACE APPS_TS_TX_IDX ADD DATAFILE '/u01/APPS_TS_TX_IDX05.dbf' SIZE 100 M;
    ALTER TABLESPACE APPS_TS_SEED ADD DATAFILE '/u01/APPS_TS_SEED03.dbf' SIZE 100 M;
    ALTER TABLESPACE APPS_TS_INTERFACE ADD DATAFILE '/u01/APPS_TS_INTERFACE02.dbf' SIZE 100 M;
    ALTER TABLESPACE APPS_TS_SUMMARY ADD DATAFILE '/u01/APPS_TS_SUMMARY02.dbf' SIZE 100 M;
    ALTER TABLESPACE APPS_TS_NOLOGGING ADD DATAFILE '/u01/APPS_TS_NOLOGGING02.dbf' SIZE 100 M;
    ALTER TABLESPACE APPS_TS_ARCHIVE ADD DATAFILE '/u01/APPS_TS_ARCHIVE02.dbf' SIZE 100 M;
    ALTER TABLESPACE APPS_TS_QUEUES ADD DATAFILE '/u01/APPS_TS_QUEUES02.dbf' SIZE 100 M;
    ALTER TABLESPACE APPS_TS_MEDIA ADD DATAFILE '/u01/APPS_TS_MEDIA02.dbf' SIZE 100 M;
    exit;
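One plausible reading of the generated script is that the other eleven APPS_TS tablespaces already existed, so the generator only needed to add datafiles to them and create the single missing one. That reading can be checked before running crtts.sql:

```sql
-- List which OATM tablespaces already exist; the creation script should
-- only need CREATE TABLESPACE for names missing from this result.
SELECT tablespace_name
FROM   dba_tablespaces
WHERE  tablespace_name LIKE 'APPS_TS%'
ORDER  BY tablespace_name;
```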

  • ORA-01461: can bind a LONG value only for insert into a LONG column - Issue

    We are getting an error from the Oracle DB --- Cause: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
    The application was working fine with Oracle 10.2.0.4 and the ojdbc14 driver 10.2.0.1; the error occurs only after an upgrade to Oracle 10.2.0.5. After some googling, we found that a driver upgrade should eliminate this error, so the 10.2.0.5 version of the driver was used, but the ORA error still occurs. The readme of ojdbc14.jar mentions: BUG 8847022 - ORA-01461: CAN BIND A LONG VALUE ONLY FOR INSERT INTO A LONG COLUMN
    The problem is that we are not able to reproduce this with a sample program; however, it happens consistently in the client environment.
    We get the error ORA-01461 after the Oracle version is upgraded to 10.2.0.5. The error occurs when we try to insert file (BLOB) data longer than 4 KB into a table.
    Exception trace
    uncategorized SQLException for SQL []; SQL state [72000]; error code [1461];
    --- The error occurred in nl/sss/gict/mcb/data/dao/config/StateQueries.xml.
    --- The error occurred while applying a parameter map.
    --- Check the setReceivedFile-InlineParameterMap.
    --- Check the statement (update failed).
    --- Cause: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
    ; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
    --- The error occurred in nl/sss/gict/mcb/data/dao/config/StateQueries.xml.
    --- The error occurred while applying a parameter map.
    --- Check the setReceivedFile-InlineParameterMap.
    --- Check the statement (update failed).
    --- Cause: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
    Oracle version installed in acceptance:
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit
    PL/SQL Release 10.2.0.5.0 - Production
    CORE 10.2.0.5.0 Production
    TNS for Linux: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production

    Is the server running Java 1.4? If it is Java 5 or higher, there is no need to keep using OJDBC14; you can upgrade to OJDBC5.
    If you cannot do that, then I ask you: why make this post? What are you expecting someone to do? Your problem is with the OJDBC driver, you are not going to get tech support for it in this Java programming forum.
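For what it's worth, ORA-01461 is tied to the 4000-byte inline bind limit: when a bound value exceeds 4000 bytes, some driver versions bind it as LONG, which then cannot be inserted into a non-LONG column. The limit is measured in bytes, not characters, which is why multibyte data trips it sooner. A small self-contained sketch of that byte arithmetic (the helper name is ours, not an Oracle API); for actual BLOB data over the limit, the usual workaround is to bind via PreparedStatement.setBinaryStream rather than an inline byte array:

```java
import java.nio.charset.StandardCharsets;

public class BindLimit {
    // Binds over 4000 *bytes* exceed the VARCHAR2 inline limit; the
    // driver may then bind the value as LONG, raising ORA-01461.
    static boolean exceedsInlineLimit(byte[] value) {
        return value.length > 4000;
    }

    public static void main(String[] args) {
        byte[] ascii = "x".repeat(4000).getBytes(StandardCharsets.UTF_8);
        byte[] multi = "é".repeat(2500).getBytes(StandardCharsets.UTF_8);
        System.out.println(exceedsInlineLimit(ascii)); // false: 4000 bytes
        System.out.println(exceedsInlineLimit(multi)); // true: 5000 bytes
    }
}
```

The second value fits in 2500 characters but occupies 5000 bytes in a multibyte character set, which is the pattern behind many "it worked before the upgrade" reports.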

  • ORA-01461: can bind a LONG value only for insert into a LONG column-Update

    Hi,
    I'm using Oracle 9.2 with WebLogic 8 server. There are two columns, OBJ_EVIDENCE_COMP - varchar2(4000 bytes) and DESCRIPTION - varchar2(4000 bytes), in a table. I'm getting the data from that table and then updating the same table with the same data after updating the data object in Java.
    I am getting the following error
    "Java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column"
    Can someone let me know why this error occurs? Please let me know if you need any other information. Below is my SQL query:
    * @jc:sql statement::
    * UPDATE CORRECTIVE_ACTION SET
    * CA_ID = {dt.caId},
    * CA_NBR = {dt.caNbr},
    * CAPA_PLAN_ID = {dt.capaPlanId},
    * OBJ_EVIDENCE_COMP = {dt.objEvidenceComp},
    * APPLICABLE_ELSE_WHERE = {dt.applicableElseWhere},
    * JUSTIFICATION = {dt.justification},
    * MOE = {dt.moe},
    * COMPLETION_DATE = {dt.completionDate},
    * EXTENSION_DUE_DATE = {dt.extensionDueDate},
    * STATUS_CD = {dt.statusCd},
    * SYSTEM_STATUS_CD = {dt.systemStatusCd},
    * ROOT_CAUSE_CD = {dt.rootCauseCd},
    * DESCRIPTION = {dt.description},
    * CA_TYPE = {dt.caType},
    * CREATED_BY = {dt.createdBy},
    * CREATED_DATE = {dt.createdDate},
    * MODIFIED_BY = {dt.modifiedBy},
    * MODIFIED_DATE = {dt.modifiedDate},
    * COMPLETION_DUE_DATE = {dt.completionDueDate}
    * WHERE CA_ID = {dt.caId}
    In the above update statement, if I remove either of the two columns mentioned, the row is updated properly.
    Regards,
    Bharat
    Edited by: 908508 on Jan 17, 2012 2:18 AM
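One thing worth checking in this situation: the 4000 limit is enforced in bytes, so a value read back and re-bound can exceed it after client character-set conversion even though it fits the column. The byte lengths already stored can be inspected with LENGTHB, using the table and column names given above:

```sql
-- Maximum stored byte lengths of the two suspect columns; values at or
-- near 4000 bytes can overflow the inline bind limit when re-bound
-- through a client whose character set expands them.
SELECT MAX(LENGTHB(obj_evidence_comp)) AS max_oec_bytes,
       MAX(LENGTHB(description))       AS max_desc_bytes
FROM   corrective_action;
```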

    I am occasionally getting this error in an Oracle 11g database
    I use Rogue Wave to insert (the catch block implies a try block that was lost in posting; restored here):
    try
    {
        connection.beginTransaction ("bulkInsertEvents");
        RWDBTBuffer <RWCString> symbols (symbol, rowCount);
        RWDBTBuffer <RWDateTime> timeStamps (timeStamp, rowCount);
        RWDBTBuffer <int> eventCounts (eventCount, rowCount);
        RWDBTBuffer <RWCString> events (event, rowCount);
        RWDBBulkInserter ins = table.bulkInserter (connection);
        ins << symbols << timeStamps << eventCounts << events;
        ins.execute ();
        connection.commitTransaction ("bulkInsertEvents");
    }
    catch (RWxmsg & exception)
    {
        cout << Logging::getProgramName () << " " << exception.why () << endl;
        throw "Failed to do bulk insert events to DBTools.";
    }
    Some of the inserts give me
    "[SERVERERROR] ORA-01461: can bind a LONG value only for insert into a LONG column"
    the table structure is
    SYMBOL         VARCHAR2(33 BYTE)
    DATEANDTIME    TIMESTAMP(6)
    NUMOFEVENTS    NUMBER
    EVENTS         VARCHAR2(4000 BYTE)

  • Can bind a LONG value only for insert into a LONG column

    I got an exception when using the Sesame adapter to load a Turtle file, which contains long texts as objects, into the Oracle semantic database. The exception information is:
    org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
    ORA-06512: in "SF.ORACLE_ORARDF_ADDHELPER", line 1
    ORA-06512: in line 1
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:439)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:395)
         at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:802)
         at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:436)
         at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
         at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:521)
         at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:202)
         at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:1005)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1307)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3449)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3530)
         at oracle.jdbc.driver.OracleCallableStatement.executeUpdate(OracleCallableStatement.java:4735)
         at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1350)
         at oracle.spatial.rdf.client.sesame.OracleSailConnection.addStatement(OracleSailConnection.java:1976)
         at org.openrdf.repository.sail.SailRepositoryConnection.addWithoutCommit(SailRepositoryConnection.java:249)
         at org.openrdf.repository.base.RepositoryConnectionBase.add(RepositoryConnectionBase.java:510)
         at org.openrdf.repository.util.RDFInserter.handleStatement(RDFInserter.java:193)
         at org.openrdf.rio.turtle.TurtleParser.reportStatement(TurtleParser.java:963)
         at org.openrdf.rio.turtle.TurtleParser.parseObject(TurtleParser.java:416)
         at org.openrdf.rio.turtle.TurtleParser.parseObjectList(TurtleParser.java:339)
         at org.openrdf.rio.turtle.TurtleParser.parsePredicateObjectList(TurtleParser.java:315)
         at org.openrdf.rio.turtle.TurtleParser.parseTriples(TurtleParser.java:301)
         at org.openrdf.rio.turtle.TurtleParser.parseStatement(TurtleParser.java:208)
         at org.openrdf.rio.turtle.TurtleParser.parse(TurtleParser.java:186)
         at org.openrdf.rio.turtle.TurtleParser.parse(TurtleParser.java:131)
         at org.openrdf.repository.base.RepositoryConnectionBase.addInputStreamOrReader(RepositoryConnectionBase.java:404)
         at org.openrdf.repository.base.RepositoryConnectionBase.add(RepositoryConnectionBase.java:295)
         at org.openrdf.repository.base.RepositoryConnectionBase.add(RepositoryConnectionBase.java:226)
         at sforcl.dao.support.OracleSailDaoTemplate.addTTLFile(OracleSailDaoTemplate.java:114)
         at sforcl.test.OracleSailDaoTemplateTest.testAddTTLFile(OracleSailDaoTemplateTest.java:33)
         at sforcl.test.OracleSailDaoTemplateTest.main(OracleSailDaoTemplateTest.java:122)
    How can I solve this problem?

    Hi,
    Can you please try loading the same file following Example 5 in Section 8.10.5 of
    http://docs.oracle.com/cd/E11882_01/appdev.112/e25609/sem_sesame.htm
    Thanks,
    Zhe

  • Error while running tablespace migration utility.

    Hi,
    Am migrating from 11.5.9 to 11.5.10.2
    While running the OATM utility $FND_TOP/bin/fndtsmig.pl, it prompts for the connect string, and as soon as I give my database name (i.e., the connect string) it exits with the error:
    Unable to get the host of the database from <SID>
    I am trying to run it as the applmgr user. Where am I going wrong?
    Thanks in advance,
    Patel

    Duplicate thread ..
    Error while running tablespace migration utility.

  • Compare long columns

    Hi,
    How can I compare two LONG columns?
    Is there a standard method?
    Nicolas.

    Hi,
    I found a solution myself:
    create a table to migrate the LONG to a CLOB,
    after which I can use the package DBMS_LOB.COMPARE.
    Nicolas.
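The migration step can be done in one statement, since TO_LOB is allowed inside CREATE TABLE ... AS SELECT. A hedged sketch with placeholder table and column names:

```sql
-- Copy each LONG column into a CLOB (TO_LOB only works in CTAS or
-- INSERT ... SELECT), then compare with DBMS_LOB.COMPARE, which
-- returns 0 when the two values are equal.
CREATE TABLE t1_clob AS SELECT id, TO_LOB(long_col) AS long_col FROM t1;
CREATE TABLE t2_clob AS SELECT id, TO_LOB(long_col) AS long_col FROM t2;

SELECT a.id
FROM   t1_clob a
JOIN   t2_clob b ON a.id = b.id
WHERE  DBMS_LOB.COMPARE(a.long_col, b.long_col) <> 0;
```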

  • RV042 Migration Utility

    Hi,
    I need to download the RV042 migration utility to transfer RV042 V1/V2 settings to RV042 V3 settings, but I can't seem to find it on the download pages.
    Any ideas on where I could find it?
    Regards
    Pippo

    Hello,
    The Migration utility was removed from the Cisco site a long time ago. It is no longer supported software by Cisco.
    You may use it, but we cannot guarantee that you will successfully convert the configuration file. The RV042 V1 and V3 differ in both hardware and software. Although V3 inherits from V1, they do not have much in common, and to avoid problems we advise configuring the device manually rather than converting the configuration.
    Regards,
    Kremena

  • My Macintosh HD partition is damaged and the disk utility failed to repair it

    I can no longer boot after I tried to repair the disk with Disk Utility when my Mac became very slow.
    This is the second time it has happened to me in a week, with the same symptoms. When I try to boot, I see a gray screen and the computer just stops.
    When I use the recovery partition, Disk Utility fails to repair the disk, stopping after failing to fix the catalog file.
    I started using Time Machine to back up my data after erasing the disk and reinstalling Mountain Lion, but I wonder if my hard drive is damaged, because the same problem has happened twice. (Thanks, TM, for keeping my data safe this time.)
    Any advice or ways to fix this once and for all?
    PS: The worst part is that I can still boot into Windows via Boot Camp, though that's not very useful.

    rpignard wrote:
     Any advice or ways to fix that once and for all ?
    Since your Recovery HD and Windows partitions work fine, the problem is located in your Macintosh HD partition, and I strongly suspect bad/failing sectors are to blame for your troubles.
    This can occur if the computer was moved while the hard drive was in operation; sectors can also fail all by themselves, or the drive may be having mechanical issues, in which case your problem will continue no matter what you do.
    This time, when you use Recovery > Disk Utility to erase the Macintosh HD partition, use the middle secure-erase option, not the far left or right options (the right one will also work, it just takes a really long time).
    The middle secure-erase option performs what is called a "zero" write: Disk Utility writes 0's over all the bits in that partition. When it's done, it reads them back for confirmation, and any sectors that are failing get mapped out by the drive. Hopefully this will resolve the issue without needing a drive replacement.
    Once that is completed, reinstall OS X from Recovery, reinstall any programs from their original sources, and restore only your files from backup; your problems should hopefully disappear.
    What happens is that if your data gets corrupted on the boot drive by failing sectors, the corruption gets transferred to Time Machine (or to bootable clones), and when you restore, you're right back to square one. So it's important to rule out that possibility with fresh installs of OS X and third-party programs.
    If your problems continue, then it's possibly a problem with third-party kernel extension files loaded at boot (and the catalog problem was yet another one), so to solve this you need to make sure all your third-party software is updated and works with your current OS X version.
    A trick, if you get a gray screen at boot time, is to hold the Shift key while booting the computer; this disables a lot of things and can let you fix the third-party software issue.
    What I suggest is to have multiple backup systems, including hold-Option-key bootable clones; this way, if your TM drive gets corrupted, you can go back in time to a saved state on the clone, restore that, and then restore your files from the TM drive.
    Most commonly used backup methods

  • Migration Assistant Fails during X.4 to X.5 Upgrade

    I'm trying to upgrade from X.4 to X.5.
    I did an Archive and Install, and Migration Assistant fails.
    It first fails with some Address Book backup files.
    These are located in: david/Library/Application Support/AddressBook/Address Book Backup
    With filenames such as: Address Book - 01/02/2008
    The error is "File name too long (63)"
    I removed the whole folder and tried again, and it failed with 'Invalid Argument'
    I ended up restoring X.4 from a disk image.
    I then tried installing X.5 on a bootable external FireWire drive with GUID partition map, migrating from the internal hard drive.
    This also fails at the same point.
    Any suggestions?
    Many TIA
    David

    It would appear that the Address Book backup file was corrupt. The size indicated was around 180MB, rather than around 10MB! The visible file name looked OK.
    Ironically, X.5 was able to copy the file to a new location. It complained about file name length, but copied it and reduced the file size to 9MB. Rather a shame that Migration Assistant doesn't use the same copy facilities.
    What has this taught me?
    1. BACKUP! Create a disc image of the full hard disc. OK, it's said elsewhere, but it really can't be said too often!
    2. Set up an Administrator/admin account. One that's essentially not used, but can access all Applications, networks and settings. If I had done this, I could have migrated this account as part of the installation process, and worried about migrating the main User account separately, using Migration Assistant.
    However, I now have a new problem!
    For some reason, I can't access the wireless network I've been using. I can see the neighbour's network(!), but not my own. I'm using WEP security.
    Any ideas?
    Many thanks,
    David
