" .... Is corrupt on Media"

I get this error in my Backup Exec logs when performing a filesystem backup using Version 9.20 Revision 1401. I am specifically not using the Open File Manager. The volume appears fine otherwise, no hardware issues, etc. Are the series of files really corrupt? How does the tape software know? Should the error message be ignored?
The OS version is 6.5 SP8 with the latest NSS Rev B patches.

Originally Posted by netware6guru
I get this error in my Backup Exec logs when performing a filesystem backup using Version 9.20 Revision 1401. I am specifically not using the Open File Manager. The volume appears fine otherwise, no hardware issues, etc. Are the series of files really corrupt? How does the tape software know? Should the error message be ignored?
The OS version is 6.5 SP8 with the latest NSS Rev B patches.
We typically see these on files which were modified during the backup - for example GW agent logs. So the severity depends on the files involved.
What were they?
Also there is no substitute for testing the recovery process. Verify the backup, restore it, and make sure the error is really cosmetic.
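One way to make that recovery test concrete is to checksum the restored tree against the live tree. This is only a sketch: the /tmp paths below are stand-ins for the real volume and a scratch restore target.

```shell
# Sketch: after a test restore, compare checksums of the restored tree to the live tree.
# /tmp/live and /tmp/restored are hypothetical stand-ins for the real paths.
LIVE=/tmp/live; RESTORED=/tmp/restored
mkdir -p "$LIVE" "$RESTORED"
printf 'payroll data' > "$LIVE/payroll.dat"
cp "$LIVE/payroll.dat" "$RESTORED/payroll.dat"     # stands in for the tape restore
(cd "$LIVE" && find . -type f -exec md5sum {} \; | sort) > /tmp/live.md5
(cd "$RESTORED" && md5sum --quiet -c /tmp/live.md5) \
    && echo "restore verified" \
    || echo "mismatch: the flagged files really are corrupt"
```

If the restore verifies byte-for-byte, the "corrupt on media" message was cosmetic for those files.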
-- Bob

Similar Messages

  • Mailed PDF files arrive on Windows platform as corrupt Windows Media files

    Recently I mailed some .pdf files to 3 of the Windows machines on our network. Each of them reported that when attempting to open the .pdf file on the Windows machine, their system tried to use Windows Media instead of Adobe Reader. Of course they received an error message. When they saved the .pdf files on their machines, the files opened correctly in Adobe Reader.
    These are files that I downloaded from the State of Oregon. After we had problems there, I mailed them some of the .pdf files I had created. Same results. The Windows machine treats them as corrupt Windows Media files.
    I am the only Mac in our company. Of course the problem must be mine!?!
    I am using MS Entourage as my mail client.
    Suggestions? Comments? Answers?
    Thanks!!

    Kappy,
    No problem! Your suggestions were what made me consider the extension option in Entourage's preferences. Unfortunately, it didn't work.
    Your suggestion also prompted me to try the same thing using Mail. Mail worked just fine, so the problem is in how Entourage handles the .pdf file being sent to a Windows computer where it will be opened either in Outlook or Internet Explorer.
    It seems that somewhere along the line the .pdf extension is being changed to a .png extension, which triggers the launch of Windows Media Browser.
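Incidentally, a quick way to confirm what a received file really is, regardless of its extension, is the `file` command, which reads the magic bytes. The file name below is made up for the demo:

```shell
# A file's real type comes from its magic bytes, not its extension.
# /tmp/report.png is a hypothetical name standing in for the misnamed attachment.
printf '%%PDF-1.4\n' > /tmp/report.png   # PDF content behind a .png name
file /tmp/report.png                     # reports a PDF document despite the extension
```

That would show whether the Windows recipients are getting a real PDF with a mangled name, or genuinely altered data.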
    I just rebuilt the Entourage database and I'll see what happens now.
    Whit

  • Media Encoder CS4 won't launch

    Hello,
    I am trying to use Media Encoder CS4 but it will not launch. It seems to get hung up on "ImporterQT.bundle".
    I am on a Mac G5 Dual 2.5 GHz Power PC running OSX 10.5.7
    Please help! I'm on a really tight deadline.
    Thanks!

    ImporterQT.bundle may be missing or corrupt.
    Uninstall Media Encoder using the package in /Applications/Utilities/Adobe Installers,
    then reinstall.

  • Media Encoder CS4 poor CPU use

    A friend of mine made a 3-minute video in Premiere CS4 and added some color corrections using Magic Bullet Looks. When she tries to export MPEG-2 with Media Encoder, CPU usage in Task Manager is about 15% and RAM about 2 GB; after some time it goes to 50% but soon drops to 1-2%.
    The rendering time for the 3-minute video is about 2:30 hours, and it even fails sometimes.
    Why is it using only 15% of the CPU?
    The CPU is an i7 920 @ 2.66 GHz (no overclocking), 6 GB RAM, and WinXP 32-bit.


  • Data corrupt block

    OS: Sun 5.10, Oracle version 10.2.0.2, 2-node RAC
    alert.log contents:
    Hex dump of (file 206, block 393208) in trace file /oracle/app/oracle/admin/DBPGIC/udump/dbpgic1_ora_1424.trc
    Corrupt block relative dba: 0x3385fff8 (file 206, block 393208)
    Bad header found during backing up datafile
    Data in bad block:
    type: 32 format: 0 rdba: 0x00000001
    last change scn: 0x0000.98b00394 seq: 0x0 flg: 0x00
    spare1: 0x1 spare2: 0x27 spare3: 0x2
    consistency value in tail: 0x00000001
    check value in block header: 0x0
    block checksum disabled
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    When I search dba_extents for the block ID where the corruption occurred, the block ID is not found.
    I wonder whether the block ID cannot be looked up because of the corruption.
    When I run an export, the data exports normally.

    That is fortunate. It looks like the block corruption did not occur in a block that actually stores data. It also appears that you found it through an RMAN backup, is that right?
    Since the SCN is 0x0000.98b00394 rather than 0x0000.00000000, this looks like a soft corruption rather than a physical corruption.
    In that case a bug is quite likely, and a search turns up:
    Bug 4411228 - Block corruption with mixture of file system and RAW files
    It may not be this one, but the root-cause analysis and handling of such block corruption should be requested formally through Oracle Corporation. Please open an SR via Metalink.
    Export cannot detect block corruption above the high water mark, and there are a few other cases, listed below, that it also misses.
    DB Verify (dbv) cannot detect physical corruption; it can only find soft block corruption. In my experience, there was a physical corruption where the datafile could not even be copied to /dev/null, yet dbv did not find the problem.
    So the best method is RMAN. RMAN backs up the data up to the high water mark and checks the entire datafile while doing so. Since it checks not only physical corruption but logical corruption as well, I think RMAN is the best way to run such a check.
    The Export Utility
    # Use a full export to check database consistency
    # Export performs a full scan for all tables
    # Export only reads:
    - User data below the high-water mark
    - Parts of the data dictionary, while looking up information concerning the objects being exported
    # Export does not detect the following:
    - Disk corruptions above the high-water mark
    - Index corruptions
    - Free or temporary extent corruptions
    - Column data corruption (like invalid date values)
    The proper way to recover from block corruption is to restore a backup and recover, but the backup you would restore may itself already contain the corruption. It is therefore best to restore it on another server first, confirm that the datafile is sound, and only then restore it to the production environment.
    If the backups are corrupted as well, or there is no time, move the data to another tablespace using a table move (MOVE TABLESPACE) or an index rebuild, then drop the problem tablespace and re-create it. (Since there is currently no data loss, the MOVE TABLESPACE / REBUILD INDEX approach looks best.)
    Handling Corruptions
    Check the alert file and system log file
    Use diagnostic tools to determine the type of corruption
    Dump blocks to find out what is wrong
    Determine whether the error persists by running checks multiple times
    Recover data from the corrupted object if necessary
    Preferred resolution method: media recovery
    Handling Corruptions
    Always try to find out if the error is permanent. Run the analyze command multiple times or, if possible, perform a shutdown and a startup and try again to perform the operation that failed earlier.
    Find out whether there are more corruptions. If you encounter one, there may be other corrupted blocks, as well. Use tools like DBVERIFY for this.
    Before you try to salvage the data, perform a block dump as evidence to identify the actual cause of the corruption.
    Make a hex dump of the bad block, using UNIX dd and od -x.
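As a sketch of that dd/od evidence step: the block size and paths below are stand-ins (use your DB_BLOCK_SIZE and the real datafile path from the alert log), and a scratch file is built first so the commands are safe to try anywhere.

```shell
# Hex-dump one suspect block with dd and od, as suggested above.
# A scratch file stands in for the real datafile; BLOCK_SIZE/BLOCK_NO are hypothetical.
BLOCK_SIZE=8192      # use your DB_BLOCK_SIZE
BLOCK_NO=3           # the corrupt block number from the alert log
DF=/tmp/fake_datafile.dbf
dd if=/dev/zero of="$DF" bs=$BLOCK_SIZE count=4 2>/dev/null              # 4-block dummy datafile
printf 'BADBLOCK' | dd of="$DF" bs=$BLOCK_SIZE seek=$BLOCK_NO conv=notrunc 2>/dev/null
# The actual evidence-gathering command: extract exactly one block and hex-dump it.
dd if="$DF" bs=$BLOCK_SIZE skip=$BLOCK_NO count=1 2>/dev/null | od -x | head -n 2
```

Against a real datafile you would point `if=` at the file (or raw device) named in the alert log and keep the dump with the trace files as evidence.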
    Consider performing a redo log dump to check all the changes that were made to the block so that you can discover when the corruption occurred.
    Note: Remember that when you have a block corruption, performing media recovery is the recommended process after the hardware is verified.
    Resolve any hardware issues:
    - Memory boards
    - Disk controllers
    - Disks
    Recover or restore data from the corrupt object if necessary
    Handling Corruptions (continued)
    There is no point in continuing to work if there are hardware failures. When you encounter hardware problems, the vendor should be contacted and the machine should be checked and fixed before continuing. A full hardware diagnostics should be run.
    Many types of hardware failures are possible:
    Bad I/O hardware or firmware
    Operating system I/O or caching problem
    Memory or paging problems
    Disk repair utilities
    Here is some related material.
    All About Data Blocks Corruption in Oracle
    Vijaya R. Dumpa
    Data Block Overview:
    Oracle allocates logical database space for all data in a database. The units of database space allocation are data blocks (also called logical blocks, Oracle blocks, or pages), extents, and segments. The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks allocated for storing a specific type of information. The level of logical database storage above an extent is called a segment. The high water mark is the boundary between used and unused space in a segment.
    The header contains general block information, such as the block address and the type of segment (for example, data, index, or rollback).
    Table Directory, this portion of the data block contains information about the table having rows in this block.
    Row Directory, this portion of the data block contains information about the actual rows in the block (including addresses for each row piece in the row data area).
    Free space is allocated for insertion of new rows and for updates to rows that require additional space.
    Row data, this portion of the data block contains rows in this block.
    Analyze the Table structure to identify block corruption:
    By analyzing the table structure and its associated objects, you can perform a detailed check of data blocks to identify block corruption:
    SQL> ANALYZE TABLE <table_name> VALIDATE STRUCTURE CASCADE;
    (The same works with ANALYZE INDEX <index_name> or ANALYZE CLUSTER <cluster_name>.)
    Detecting data block corruption using the DBVERIFY Utility:
    DBVERIFY is an external command-line utility that performs a physical data structure integrity check on an offline database. It can be used against backup files and online files. Integrity checks are significantly faster if you run against an offline database.
    Restrictions:
    DBVERIFY checks are limited to cache-managed blocks. It's only for use with datafiles; it will not work against control files or redo logs.
    The following example shows sample output of verification for the datafile system_ts_01.dbf, with start block 9 and end block 25. The blocksize parameter is required only if the file to be verified has a non-2KB block size. The logfile parameter specifies the file to which logging information should be written, and feedback has been given the value 2 to display one dot on the screen for every 2 blocks processed.
    $ dbv file=system_ts_01.dbf start=9 end=25 blocksize=16384 logfile=dbvsys_ts.log feedback=2
    DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    Output:
    $ pg dbvsys_ts.log
    DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = system_ts_01.dbf
    DBVERIFY - Verification complete
    Total Pages Examined : 17
    Total Pages Processed (Data) : 10
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index) : 2
    Total Pages Failing (Index) : 0
    Total Pages Processed (Other) : 5
    Total Pages Empty : 0
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    Detecting and reporting data block corruption using the DBMS_REPAIR package:
    Note: This can only be used if the block "wrapper" is marked corrupt, e.g. if the block reports ORA-1578.
    1. Create DBMS_REPAIR administration tables:
    To create the repair tables, run the package below:
    SQL> EXEC DBMS_REPAIR.ADMIN_TABLES('REPAIR_ADMIN', 1, 1, 'REPAIR_TS');
    Note that the table names are prefixed with 'REPAIR_' or 'ORPHAN_'. If the second argument is 1, it creates REPAIR_-prefixed tables; if it is 2, it creates ORPHAN_-prefixed tables.
    If the third argument is:
    1, the package performs 'create' operations;
    2, the package performs 'delete' operations;
    3, the package performs 'drop' operations.
    2. Scanning a specific table or Index using the DBMS_REPAIR.CHECK_OBJECT procedure:
    In the following example we check the table EMP, which belongs to the schema TEST, for possible corruptions. Let's assume that we have created our administration table called REPAIR_ADMIN in schema SYS.
    To check the table block corruption use the following procedure:
    SQL> VARIABLE A NUMBER;
    SQL> EXEC DBMS_REPAIR.CHECK_OBJECT('TEST', 'EMP', NULL,
    1, 'REPAIR_ADMIN', NULL, NULL, NULL, NULL, :A);
    SQL> PRINT A;
    To see which block is corrupted, check the REPAIR_ADMIN table:
    SQL> SELECT * FROM REPAIR_ADMIN;
    3. Fixing corrupt block using the DBMS_REPAIR.FIX_CORRUPT_BLOCK procedure:
    SQL> VARIABLE A NUMBER;
    SQL> EXEC DBMS_REPAIR.FIX_CORRUPT_BLOCKS('TEST', 'EMP', NULL,
    1, 'REPAIR_ADMIN', NULL, :A);
    SQL> SELECT MARKED_CORRUPT FROM REPAIR_ADMIN;
    If you select from the EMP table now, you still get the error ORA-1578.
    4. Skipping corrupt blocks using the DBMS_REPAIR.SKIP_CORRUPT_BLOCKS procedure:
    SQL> EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('TEST', 'EMP', 1, 1);
    Note what running the DBMS_REPAIR tool has done: some data has been lost. Its main advantage is that you can retrieve the data past the corrupted block; the data in the corrupt block itself, however, is gone.
    5. The DBMS_REPAIR.DUMP_ORPHAN_KEYS procedure is useful for identifying orphan keys in indexes that point to corrupt rows of the table:
    SQL> EXEC DBMS_REPAIR.DUMP_ORPHAN_KEYS('TEST', 'IDX_EMP', NULL,
    2, 'REPAIR_ADMIN', 'ORPHAN_ADMIN', NULL, :A);
    If you see any records in the ORPHAN_ADMIN table, you have to drop and re-create the index to avoid inconsistencies in your queries.
    6. The last thing to do when using the DBMS_REPAIR package is to run the DBMS_REPAIR.REBUILD_FREELISTS procedure to reinitialize the free-list details in the data dictionary views:
    SQL> EXEC DBMS_REPAIR.REBUILD_FREELISTS('TEST', 'EMP', NULL, 1);
    NOTE
    Setting events 10210, 10211, 10212, and 10225 can be done by adding the following line for each event in the init.ora file:
    Event = "event_number trace name errorstack forever, level 10"
    When event 10210 is set, the data blocks are checked for corruption by checking their integrity. Data blocks that don't match the format are marked as soft corrupt.
    When event 10211 is set, the index blocks are checked for corruption by checking their integrity. Index blocks that don't match the format are marked as soft corrupt.
    When event 10212 is set, the cluster blocks are checked for corruption by checking their integrity. Cluster blocks that don't match the format are marked as soft corrupt.
    When event 10225 is set, the fet$ and uset$ dictionary tables are checked for corruption by checking their integrity. Blocks that don't match the format are marked as soft corrupt.
    Set event 10231 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing full table scans:
    Event="10231 trace name context forever, level 10"
    Set event 10233 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing index range scans:
    Event="10233 trace name context forever, level 10"
    To dump an Oracle block you can use the command below from 8.x onwards:
    SQL> ALTER SYSTEM DUMP DATAFILE 11 BLOCK 9;
    This command dumps data block 9 of datafile 11 into the USER_DUMP_DEST directory.
    Dumping Redo Logs file blocks:
    SQL> ALTER SYSTEM DUMP LOGFILE '/usr/oracle8/product/admin/udump/rl.log';
    Rollback segment block corruption will cause problems (ORA-1578) while starting up the database.
    With the support of Oracle, you can use the underscore parameter below to start up the database:
    _CORRUPTED_ROLLBACK_SEGMENTS = (RBS_1, RBS_2)
    DB_BLOCK_COMPUTE_CHECKSUM
    This parameter is normally used to debug corruptions that happen on disk.
    The following V$ views contain information about blocks marked logically corrupt:
    V$BACKUP_CORRUPTION, V$COPY_CORRUPTION
    When this parameter is set, while reading a block from disk into the cache, Oracle computes the checksum again and compares it with the value stored in the block header.
    If they differ, the block is corrupted on disk; Oracle marks the block as corrupt and signals an error. There is an overhead involved in setting this parameter.
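As an illustration of the idea (not Oracle's actual algorithm), a CRC computed over a block at write time and recomputed at read time catches even a single stray byte on disk:

```shell
# Sketch of block-checksum verification; cksum stands in for Oracle's checksum.
BLOCK=/tmp/block.bin
dd if=/dev/zero of="$BLOCK" bs=512 count=1 2>/dev/null
GOOD=$(cksum < "$BLOCK")                            # checksum stored at write time
printf 'X' | dd of="$BLOCK" bs=1 seek=100 conv=notrunc 2>/dev/null   # simulate a stray write
NOW=$(cksum < "$BLOCK")                             # checksum recomputed at read time
if [ "$GOOD" = "$NOW" ]; then
    echo "block clean"
else
    echo "checksum mismatch: block corrupt on disk"  # prints this branch
fi
```

The overhead the text mentions is exactly this recomputation on every read.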
    DB_BLOCK_CACHE_PROTECT=TRUE
    With this set, Oracle will catch stray writes made by processes in the buffer cache.
    Oracle 9i new RMAN features:
    Obtain the datafile numbers and block numbers for the corrupted blocks. Typically, you obtain this output from the standard output, the alert.log, trace files, or a media management interface. For example, you may see the following in a trace file:
    ORA-01578: ORACLE data block corrupted (file # 9, block # 13)
    ORA-01110: data file 9: '/oracle/dbs/tbs_91.f'
    ORA-01578: ORACLE data block corrupted (file # 2, block # 19)
    ORA-01110: data file 2: '/oracle/dbs/tbs_21.f'
    $ rman target=rman/rman@rmanprod
    RMAN> run {
    2> allocate channel ch1 type disk;
    3> blockrecover datafile 9 block 13 datafile 2 block 19;
    4> }
    Recovering Data blocks Using Selected Backups:
    # restore from backupset
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM BACKUPSET;
    # restore from datafile image copy
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM DATAFILECOPY;
    # restore from backupset with tag "mondayAM"
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 199 FROM TAG = mondayAM;
    # restore using backups made before one week ago
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
    UNTIL 'SYSDATE-7';
    # restore using backups made before SCN 100
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE UNTIL SCN 100;
    # restore using backups made before log sequence 7024
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
    UNTIL SEQUENCE 7024;
    Post edited by:
    Min Angel (Yeon Hong Min, Korean)

  • Nokia 6303 restart after deleting photo + media pl...

    I have a problem with a Nokia 6303.
    After taking a photo and deleting it, I pressed Options and the phone restarted.
    Is it a software bug? Or?
    In the media menu, if I go to the Media Player and there is no song "preloaded" (i.e. one previously listened to), the media player gets stuck; the phone has to be restarted to be able to play music again. This seems like a software bug... I hope it will be fixed soon.
    Also, the phone makes a background noise when the screen is lit up.

    milarepa wrote:
    I have a problem with a Nokia 6303.
    After taking a photo and deleting it, I pressed Options and the phone restarted.
    Is it a software bug? Or?
    In the media menu, if I go to the Media Player and there is no song "preloaded" (i.e. one previously listened to), the media player gets stuck; the phone has to be restarted to be able to play music again. This seems like a software bug... I hope it will be fixed soon.
    Also, the phone makes a background noise when the screen is lit up.
    Photo problem: When you opened the picture and then deleted it, the picture was still in memory, and when the menu tried to go back one step the picture was no longer there, so the phone reset. Bug? Don't know. I would suggest that if you want to delete files, do so in list view, not while viewing the file you want to delete. If you tried to do this in Windows it would give you an error message, something like "this file is in use".
    Media player problem: This could be down to software; it could also be down to a memory card read error.
    Try taking the card out and, using a memory card reader, check that all files are readable. If one of the files is corrupt, the media player may be looking for files to add to the playlist, and when it gets to the corrupt one it bombs out.
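That read-check can be scripted once the card is mounted on a PC. A sketch, with /tmp/card standing in for the card's real mount point (e.g. /media/MMC):

```shell
# Walk a mounted card and flag any file that cannot be read end-to-end.
# /tmp/card is a hypothetical stand-in for the card's mount point.
CARD=/tmp/card
mkdir -p "$CARD/Music"
printf 'ID3 dummy audio' > "$CARD/Music/track01.mp3"   # sample file for the demo
find "$CARD" -type f -exec sh -c 'cat "$1" >/dev/null 2>&1 || echo "unreadable: $1"' _ {} \;
echo "scan complete"
```

Any file listed as unreadable is a candidate for the one tripping up the phone's media player.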
    Gadget
    Message Edited by gadgetdude on 05-Jun-2009 03:48 PM

  • Syncing problems when using multiple iPods on the same computer?

    I own three iPods. An 8GB nano, a 16GB touch, and a 64GB touch.
    iTunes seems to have no problems with the nano, but it seems to have issues from time to time with the two touch iPods. "Syncing" sometimes becomes "screwing up" in the form of deleting content that I have purchased.
    My latest attempt to transfer a purchase (Toy Story 3) from my 64GB touch resulted in a "sync" which has basically corrupted my media libraries. I can see on iTunes that the media is still on my iPod, and even play it within iTunes from my iPod; when I disconnect my iPod from my computer and attempt to access the same media on the iPod by itself, both my music and my video libraries are showing up blank.
    Has anyone else had the same problem?
    NOTE: My 64GB iPod touch is not the latest version; it doesn't have the HD camera, but it does have the mic input. My 16GB touch is even older, having neither the camera nor the mic. The 16GB touch is an even bigger mess, with app icons frequently appearing as big grey blocks instead of the pictures they're supposed to be.
    I like my 3 iPods. I like iTunes. I like Apple.
    I just don't like it when things stop working the way they're supposed to.
    HELP!

    Welcome, Brandon, to the Apple Discussions!
    The answer to your question should be found here:
    http://docs.info.apple.com/article.html?artnum=300432
    And if at some time in the future you decide you'd like to use an iPod on more than one computer, have a look at:
    http://docs.info.apple.com/article.html?artnum=61675

  • TS 140 Windows Server 2012 R2 install failure error 0x80070570

    Recently I bought two TS140 70A4001PUX servers with the Xeon E3-1245 v3. Attempting to install the eval version of Server 2012 R2 (latest April refresh from the site), I keep getting the following error at around 62% of the "getting files ready for installation" stage:
    Windows cannot install required files. The file may be corrupt or missing. Make sure all files required for installation are available, and restart the installation. Error code: 0x80070570
    I have seen web search results for various MS OSes indicating this can be due to bad memory, a corrupt installation medium or download, NIC cards, BIOS settings, etc.
    To rule out memory, I ran memtest for hours with no errors against the Crucial 32GB kit (CT2KIT102472BD160B), and also tried installing with the 4GB that came with the system. I think I have done enough to rule out the memory.
    As for installation media, I downloaded the ISO from Microsoft's eval site three different times using Firefox, Chrome and IE, burned the image to DVD twice, and tried numerous times using Rufus on an 8GB USB stick in different USB ports. I've tried installing the Datacenter version with both the GUI and Server Core options. The next thing I'm going to try is installing Server 2012 from an older ISO that I have used successfully many times in the past; I will update later, though I would prefer to install fresh from R2.
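One way to rule out a bad download before burning is to compare the ISO's SHA-256 hash against the value published on the download page. The file and the hash flow below are illustrative only:

```shell
# Verify a downloaded ISO against its published checksum before burning.
ISO=/tmp/demo.iso
printf 'stand-in ISO contents' > "$ISO"            # hypothetical stand-in for the real eval ISO
PUBLISHED=$(sha256sum "$ISO" | awk '{print $1}')   # in practice, paste this from the download page
ACTUAL=$(sha256sum "$ISO" | awk '{print $1}')
if [ "$ACTUAL" = "$PUBLISHED" ]; then
    echo "ISO checksum OK"
else
    echo "checksum mismatch - re-download"
fi
```

A matching hash means the three downloads were fine and the fault lies further down the chain (burn, USB media, or hardware).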
    I also have the Lenovo 1Gbps Ethernet I350-T2 Server adapter by Intel for ThinkServer. I have tried installing with it inserted and with it removed, also with nothing, even disabling the onboard NIC in the BIOS. Same result.
    The SSD I have is the Samsung 850 Pro 128GB. After the failure I am able to mount this drive on another machine and see that a significant number of files get copied to it, so I ruled out the SATA cable. I also tried with another, non-SSD WD hard drive, with the same result. I even break out of the install, run diskpart, and clean the partitions.
    Excerpts from setupact.log on SSD:
    2015-03-24 13:58:37, Info       [0x0606cc] IBS   Calling WIMApplyImage (flags = 0x180)...
    [800] [UncompressFile994) -> file corrupted in block at offset 000000541283B3F2] G:\Windows\System32\DriverStore\FileRepository\netelx.inf_amd64_82d20ebbcbf8b5af\netelx.PNF (Error = 1392)
    2015-03-24 13:59:54, Error     [0x0606cc] IBS   WIMApplyImage failed; hr = 0x80070570[gle=0x00000570]
    2015-03-24 13:59:54, Error     [0x0600a1] IBS   DeployImage:Image application failed; hr = 0x80070570[gle=0x00000057]
    For BIOS settings I have reverted to defaults, tried IDE instead of AHCI, and tried CSM support enabled and disabled, and the NIC both enabled and disabled.
    Any suggestions would greatly be appreciated.
    Thanks.

    In case anyone is interested, the Server 2012 R2 eval ISO I had issues with is the following file:
    9600.17050.WINBLUE_REFRESH.140317-1640_X64FRE_SERVER_EVAL_EN-US-IR3_SSS_X64FREE_EN-US_DV9.ISO
    I am also reporting this to MS.

  • Windows 7 Install Problem

    I have a Lenovo R61i ThinkPad. The optical drive was bad, so I replaced it with a MATSHITA DVD-RAM UJ-862. It is thinner than the original drive, but the connector is the same, and if I add a shim to hold it up it connects. I believe this is the optical drive used in the T51.
    I am running Linux Mint 16 and am trying to install Windows 7 in a virtual machine with VirtualBox. The install starts, but after a little while I get this error message:
    "A required CD/DVD drive device driver is missing. If you have a driver floppy disk, CD, DVD, or USB flash drive, please insert it now"
    When I put in the CD that came with the drive, containing the drivers etc., the installer scans it, but the driver it is looking for is not there. This is actually a brand-new drive; however, the design is old. I imagine it may not have a Windows 7 driver. Why Windows 7 doesn't have a driver, I don't know.
    When I go the the Lenovo site for a Windows 7 optical driver for the T61 (because this is the Thinkpad that this drive is for) there isn't one listed.
    When I look around on the web I find various driver download managers, but I've noticed from the last time I used Windows for any length of time that all sorts of formerly reliable sites host files in a way that misleads you into using their download manager, which installs other software and browser-extension malware. For this reason, and because the computer I am using runs Linux, I chose not to get these.
    Does anyone have any idea what the problem might be and how to fix it? My install disk is an official disk purchased from a mainline retail store.
    TIA

    Hi,
    That error message is often misleading.  It usually means corrupted install media - bad DVD, bad download, bad burn, bad USB media creation.   Sometimes it is because a USB install device (flash or optical drive) should be plugged into a USB 2.0 socket instead of USB 3.0.
    What it means when installing from DVD into a VM, I have no idea - just that it probably isn't a missing driver.
    You might try copying the DVD and see if it throws an error during the copy.  If it doesn't, try installing from the new DVD.  If it does throw an error then you have a bad original.
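A checksum comparison makes that copy test concrete. The device and file names below are stand-ins (on the Mint host the disc would typically be /dev/sr0):

```shell
# Compare a disc read back raw against the source image it was burned from.
# Temp files stand in for the real ISO and the optical device (e.g. /dev/sr0).
SRC=/tmp/source.iso
DISC=/tmp/disc_readback.img
dd if=/dev/urandom of="$SRC" bs=2048 count=16 2>/dev/null   # fake source image for the demo
cp "$SRC" "$DISC"               # stands in for: dd if=/dev/sr0 of="$DISC" bs=2048
if [ "$(md5sum < "$SRC")" = "$(md5sum < "$DISC")" ]; then
    echo "disc matches source image"
else
    echo "read-back differs: bad burn or failing media"
fi
```

If the read-back errors out or the checksums differ, the original disc (or the burn) is bad, which matches the corrupted-media explanation above.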
    You could also try installing from an ISO.  Google "legal Windows 7 download notebook review".
    Z.

  • Final Cut Pro 7.03 crashes, impossible to work with the system

    Hi,
    I have a mac pro with the following configuration:
    *2.4 GHz Westmere octo-core (Mac Pro 5,1)
    *24 GB RAM in 6 x 4 GB FCM 1066 MHz modules
    *DeckLink HD Extreme 3D video card
    *ATI Radeon HD 5770
    *NVIDIA Quadro 4000
    *Tempo SATA E2P eSATA controller
    I work a lot in Final Cut Pro, but it crashes about every 5 minutes or even faster; sometimes it runs for 10 minutes without a crash. That is terrible, and a 10k workstation like that is not usable...
    I already formatted the hard disk, set up OS X and FCS again, and installed every update for every part of my workstation, but I cannot fix it.
    Crashes happen especially when viewing clips as thumbnail galleries (video previews) and when working with timelines that show those little video thumbnails at the beginning of every cut; scrubbing around there leads to crashes.
    I mostly work with XDCAM 422 material at 220 Mbit and 280 Mbit recorded with the nanoFlash, but also in ProRes 422 HQ.
    Anyone an idea on how to fix this?
    Thanks a lot!

    Hi,
    Here the Maintainance Pack Log:
    This crash log suggests that Final Cut Pro was overloaded at the time of the crash. This can occur if your project is overly complex, has multiple Motion project layers or is using MPEG-based media (HDV, H.264, MPEG-2, XDCAM, AVCHD), particularly if you are using filters with such media. Note that overloading is more likely to be the cause of the crash than the reason below.
    Corrupt / Unsupported Media
    This crash was caused by corrupt media or media that is unsupported by Final Cut Pro. A common cause of this crash is using media on your timeline that is unsuitable for editing such as AVI, WMV or MPEG-derived codecs (XDCAM, HDV, MPEG-2, H.264).
    Suggested Actions
    Use Corrupt Clip Finder to locate the problematic clip and then recapture or attempt to repair it.
    If your sequence codec is set to an MPEG-based codec such as MPEG-2, HDV, XDCAM or H.264, change it to ProRes.
    Alternatively, and for best results, convert any MPEG-based media to ProRes.
    If the issue is related to a codec, upgrade the codec if an upgrade is available or consider transcoding your footage to an editing-friendly codec such as ProRes.
    If you are using CMYK images, convert them to RGB.
    More Information
    To improve the stability of your system, create a less complex "lite" version of your project, export embedded Motion projects as QuickTime movies and if you are using MPEG-based media such as H.264, HDV and XDCAM, set your sequence codec to ProRes or convert the media to the ProRes codec.
    Relevant Line
    0 ...ickTimeComponents.component 0x97395a1c FindAligningPiece + 274

  • Final Cut Pro 7.0.3 XDCAM Issues

    Having problems with Final Cut Pro 7.0.3 on a feature with XDCAM EX 1080p 24p
    I am using the Sony cinemon plug-in to work natively with .mp4 files and a Matrox MXO2 Mini, on a 2.2 GHz quad i7 17-inch MacBook Pro with 8 GB RAM and a second external monitor via Mini DisplayPort.
    Media is located on external drive connected by firewire 800.
    Timeline is playing at high quality with various repositions, 3-way color correction and 2.35 .png matte on top layer.
    Crash analyser presents multiple reasons for crashes. 5 today ranged from:
    QuickTime
    This crash was caused by QuickTime.
    Possible causes include corrupt media, corrupt render files, an unsupported or unstable codec, an image resolution that is too large (this is particularly a problem with Motion as image sizes are limited by the maximum texture size of your graphics card), or a version of QuickTime that is incompatible with your version of Final Cut Studio.
    Corrupt / Unsupported Media
    This crash was caused by corrupt media or media that is unsupported by Final Cut Pro. A common cause of this crash is using media on your timeline that is unsuitable for editing such as AVI, WMV or MPEG-derived codecs (XDCAM, HDV, MPEG-2, H.264).
    MPEG Media
    This crash appears to have been caused by MPEG media (or MPEG-derived media such as XDCAM, HDV or H.264) in the timeline. MPEG-based media is not recommended for editing in Final Cut Pro.
    Effects Rendering
    This crash appears to have occurred while trying to render effects on a clip. Possible causes for this include corrupt or unsupported media, a bad filter, a graphics card issue or too many filters applied to a single clip.
    All of these crashes also listed -
    This crash log suggests that Final Cut Pro was overloaded at the time of the crash. This can occur if your project is overly complex, has multiple Motion project layers or is using MPEG-based media (HDV, H.264, MPEG-2, XDCAM, AVCHD), particularly if you are using filters with such media. Note that overloading is more likely to be the cause of the crash than the reason below.
    I feel like this is coming from the XDCAM media, and I want to know if anyone has dealt with a similar problem and found a cause or workaround.
    If any additional information is needed, please let me know and I'll provide any information I can.
    -Fraunpetri

    Shane,
    Thanks for your response. When I was setting up this project, I asked around on multiple forums and among professionals, and many seemed to think that using native XDCAM EX footage was possible and that the newer systems would not have many issues working with it.
    Is there any way I can tweak my setup to relieve pressure on the laptop? For example, if I didn't use the Matrox box, removed the 2.35 .png matte from the timeline, or switched the timeline to medium playback quality.
    Transcoding to ProRes does seem like it would help the situation, but at this stage that's unfortunately not an option.
    Any advice would be greatly appreciated.
    -Fraunpetri

  • Another one step forward, one step back post

    Just noticed this on CS6 (Mac):
    After I work on a project, I use ChronoSync to backup my project folder (which contains all my media and other assets besides caches) to another drive, which is an interim backup until LTO.
    Now, ChronoSync is re-writing ALL my media to the backup disk.  This is a big deal when I have 120 GB of media in a project, which isn't unusual.
    What was taking a few seconds or minutes is now taking hours.
    I can see that Pr is changing the modification date on my media when I open (or maybe save) a project.  This is what ChronoSync is looking at to determine whether to backup that file or not.
    I'm not changing the media (not intentionally anyway).  Why the change in the way Pr works?  I don't see an advantage.  Can anybody (especially from Adobe) provide some insight?
    I'm guessing that Pr is adding some metadata to my source files, yes?  Is this necessary?  Can I turn it off? 
    If Pr is modifying my sources, doesn't this risk corrupting my media files?
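To illustrate what I think is happening: my sync tool compares modification dates, so a touched mtime alone makes a file look changed even when every byte is identical. A throwaway sketch (made-up file, nothing to do with my real media):

```shell
# Demonstrate why a sync tool that compares modification times re-copies
# unchanged media: bumping a file's mtime (as Premiere appears to do on
# project open) makes it look "newer" even though its contents are identical.
# A content checksum, by contrast, still matches.
demo_mtime_vs_checksum() {
  dir=$(mktemp -d)
  printf 'fake media bytes' > "$dir/clip.mov"
  sum_before=$(cksum < "$dir/clip.mov")
  touch -t 203001011200 "$dir/clip.mov"   # bump mtime, contents untouched
  sum_after=$(cksum < "$dir/clip.mov")
  [ "$sum_before" = "$sum_after" ] && echo "contents unchanged" \
                                   || echo "contents differ"
  rm -rf "$dir"
}
demo_mtime_vs_checksum
```

If ChronoSync can be switched to compare file contents (checksums) instead of modification dates, that would presumably sidestep the re-copying, at the cost of much slower scans.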

    Thanks for the longer description of your workflow. 
    A couple of notes:
    As far as the ingesting of footage coming off of cards or HDs from the shoot, I use an app designed especially for this called Shotput Pro.  It is designed to ingest footage, append filenames, copy to multiple locations and verify the data.  If you haven't already, check it out - works great on location for a smooth data process.
    Time Machine will work in the same way of only doing incremental backups of what has changed since the last scan.  I guess in this way, you would come up against the same issue of the changed modification dates causing havoc.
    I use Time Machine to back up everything except for the shoot reels, which would quickly fill my TM drive.  After ingesting all of the footage into my file structure, I copy the Shoot Reels to a Drobo drive, where they are protected as a backup outside of Time Machine.  Since the original media isn't modified over the course of an edit, I can always retrieve it and relink the files if the HD I edit from blows up.  (Not to delve too deep into a backup discussion, but if it is really sensitive footage, I will make another copy and store it in my safe deposit box at the bank.)  That leaves all of the other files that do get backed up to TM hourly, which are generally much, much smaller.
    I also like to leave everything related to one project in a single folder.  Your note about not wanting to have to think about excluding folders from backup is something I share - however, I do have to manually tell TM not to back up the shoot reel folder for each project I create.  I haven't found an automated way of doing this yet.
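On the automation point: newer versions of OS X ship a `tmutil` command that can add Time Machine exclusions from a script, which might remove the manual per-project step. A sketch that just prints what it would do (the "Shoot Reels" folder name is my assumption about the naming - adjust to yours):

```shell
# print_tm_exclusions ROOT: find every directory named "Shoot Reels" under
# ROOT and print the tmutil command that would exclude it from Time Machine.
# Dry run by design -- pipe the output to sh (with admin rights) to apply.
print_tm_exclusions() {
  find "$1" -type d -name "Shoot Reels" | while IFS= read -r dir; do
    printf 'tmutil addexclusion "%s"\n' "$dir"
  done
}
# Example: print_tm_exclusions "$HOME/Projects"
```

Run from a scheduled task or as part of your project-creation script, this would catch new shoot reel folders without having to remember the Time Machine preference pane.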
    I can be more specific about how I organize my project folders and how they fit into a backup workflow, but I don't think that is what you are looking for, since you already have your workflow down (with the exception of this wrench Adobe threw into the works...).  If you are interested, I will share my system.
    To your original question - I agree that the way CS6 modifies all of your footage mod dates is annoying, and I wish I had an answer for you.

  • Pages repeatedly crashing due to "SFWord Processing Plugin"

    I'll be working in Pages '08 and click on something (sometimes in the menu bar, sometimes in the text, sometimes in the Inspector - it doesn't seem to follow a real pattern), and all of a sudden I will get the spinning beach ball. I won't be able to save. Other programs become unusably sluggish. After about 10 minutes of beach-ball spinning, Pages crashes with a message indicating that the probable reason is "SFWord Processing Plugin."
    Add to this frustration that Pages has no autosave, so I am losing my docs left and right. I have been forced to start typing in TextEdit and am not happy about it.
    Any ideas what SFWord Processing Plugin is and how I can fix the problem?
    Thanks.

    If you have the same problem on six Macs, I doubt very much that there is any problem with the cache.
    It is likely that you have installed something on all of them that does not work properly - or at least does not work properly with Pages.
    If you know you have installed something unusual on all six of them, I would look into uninstalling that on one of them, and see if that makes any difference. It could be an application, a font, a widget, a movie or image that corrupts the media browser - just about anything, but it exists on all six of them.
    My guess would be the printer driver. It is likely that all of them have at least one printer in common, and it is possible that that one does not play well with Pages. If you can spare one Mac, try removing all printers from it for a few days, and see if Pages still behaves funny.
    I am in no way a printer guru myself - I do not even own a printer any more. But I know that other people here have printers.
    Oh, btw, if you anyhow want to know about the system caches, you find at least some of them by typing the following two commands in the Terminal:
    <pre>
    cd `getconf DARWINUSER_CACHEDIR`
    open ..
    </pre>

  • Trouble with database connector for FAST for Sharepoint

    I have plenty of experience with the JDBC connector for FAST ESP using Oracle, but this is my first experience with the same connector for FAST for Sharepoint (F4SP).  I've created the XML config exactly to spec based on the TechNet guides and configured my user accounts on the SQL Server appropriately.  I'm using SQL Server 2008 R2.
    If I try to use a Windows account, no level of access granted on the SQL Server results in a successful login.  Any attempt to use the connector is met with "Login failed for user:  domain\username".  If I try a SQL Server account, I no longer see a login failure, but this instead:
    PS C:\FASTSearch\bin> jdbcconnector start -f ..\etc\DTtest.xml
    Copyright (c) Microsoft Corporation.  All rights reserved.
    14:44:59,870 INFO  [JDBCConnector] Starting the connector!
    14:44:59,870 INFO  [JDBCConnector] Validating config.......
    14:45:00,136 INFO  [JDBCConnector] Testing connections to external systems
    14:45:00,198 INFO  [JDBCConnector] Checking if connections to source and target work....
    14:45:00,620 INFO  [JDBCAdapter] Opened JDBC Connection
    14:45:00,620 INFO  [JDBCConnector] Connection made to source system
    14:45:00,620 INFO  [CCTKDocumentFeeder] Publisher :Initializing: com.fastsearch.esp.cctk.publishers.CCTKDocumentFeeder
    14:45:00,620 ERROR [JDBCConnector] Failed creating publisher. Test connection failed.
    14:45:00,620 ERROR [JDBCConnector] Caused by: Unable to create status tracker.
    14:45:00,620 ERROR [JDBCConnector] Caused by: Could not connect to database. Make sure TCP/IP is enabled for SQLServer.
    14:45:00,620 ERROR [JDBCConnector] Caused by: No suitable driver found for ;integratedSecurity=true
    14:45:00,620 INFO  [JDBCConnector] Connection made to target system
    TCP/IP is certainly enabled for SQL Server -- I am connecting to it from Sharepoint 2010, not to mention from a test install of Toad Freeware for SQL Server on another machine, with no errors.  My connection string, username and password are all set up according to the TechNet guide, including the encrypted password generated by the encryption utility run in the F4SP PowerShell.  The same user who runs the connector also ran the encryption tool, so that should not be a problem either.
    I've never had this kind of trouble connecting to Oracle databases with the JDBC connector for FAST ESP.  Any suggestions?  These failures aren't exactly helpful to me.  Thanks in advance.

    I found the solution to this problem in case anyone else was still curious.
    This was caused not by JDBC settings, permissions/access on SQL Server, etc.  It was caused simply by a missing SQL Server entry in the F4SP configuration.  When you run the F4SP config wizard, as either a single-node or multi-node installation, the wizard might end up ignoring the value you select for SQL Server, even if you are using a deployment XML file.  When configuring a multi-node setup, the wizard ignored both my deployment XML tag <connector-databaseconnectionstring> and the SQL Server setting made manually in the wizard itself.  Examining the evidence, I suspect this was caused by corrupt installation media (the installer also failed to create some core files in /etc when I reinstalled).
    The bottom line is that you must see a value for SQL Server in your install_info.txt file in order for the JDBC connector to work with SQL Server, like this:
    Other services
    Log Server:             myserver.mydomain.com:13415
    SQL Server database:    jdbc:sqlserver://myserver.mydomain.com;DatabaseName=FASTSearchAdminDatabase;integratedSecurity=true
    If you see this instead, the connector will never work:
    Other services
    Log Server:             myserver.mydomain.com:13415
    SQL Server database:   
    ;integratedSecurity=true
    So, the JDBC connector is certainly dependent on the main configuration of F4SP, regardless of which SQL Server is targeted in your connector's XML configuration.
    I had no idea there was a connection between my install_info setup and the use of the JDBC connector.  But, the connection makes sense considering this was the error message from the JDBC connector:
    09:20:04,016 ERROR [JDBCConnector] Caused by: No suitable driver found for ;integratedSecurity=true
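As a quick sanity check before re-running the connector, the install_info.txt entry can be verified mechanically. A sketch for a POSIX shell (e.g. Cygwin on the server, or any unix-y box with the file copied over; on Windows itself, PowerShell's Select-String would be the analogue; the sample file below is hypothetical):

```shell
# check_sql_server_line FILE: succeed (and print the line) if FILE contains a
# "SQL Server database:" entry with a jdbc: connection string after the colon;
# fail if the value is empty, which is the broken state described above.
check_sql_server_line() {
  grep 'SQL Server database:[[:space:]]*jdbc:' "$1"
}

# Hypothetical sample matching a healthy install_info.txt:
sample=$(mktemp)
printf 'Log Server:             myserver.mydomain.com:13415\n' > "$sample"
printf 'SQL Server database:    jdbc:sqlserver://myserver.mydomain.com;DatabaseName=FASTSearchAdminDatabase;integratedSecurity=true\n' >> "$sample"
check_sql_server_line "$sample" > /dev/null && echo "SQL Server line looks populated"
rm -f "$sample"
```

On a broken install the value after the colon is blank, the grep finds nothing, and the check fails - which is exactly the state that produces the "No suitable driver found for ;integratedSecurity=true" error.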

  • Opening samples in the waveform editor seems to be crashing Soundtrack Pro...

    When I try to open a sample in the waveform editor, by double-clicking it in the multitrack timeline window, it crashes Soundtrack Pro. I have tried exporting from Final Cut Pro in several different ways to get around earlier media and crashing issues, and now I export all audio as a multitrack project, then compress a QuickTime movie of the video and just drop it in. This has worked well up until now.
    However, now every time I try to open a sample it just crashes...
    I also want to know how to avoid wrecking the original FCP media files as I edit in STP... despite saving original audio sources, I seem to be corrupting source media as I edit. Any suggestions appreciated.

    I had a similar problem once, and deleting my preferences files fixed it. I posted the files I deleted in a different thread... Here they are:
    <User Folder>/Library/Application Support/Soundtrack Pro/EffectsCache.plist
    <User Folder>/Library/Application Support/Soundtrack Pro/Layouts/Default.moduleLayout
    <User Folder>/Library/Preferences/com.apple.soundtrackpro.plist
    You might make a backup of the files first, just in case. Or simply rename them where they are.
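If it helps, the backup-then-delete step can be scripted. A sketch that moves those three files into a timestamped folder so they can be put back if it doesn't help (the `STP_HOME` override is only there for testing against a scratch folder; by default it works on your home folder):

```shell
# backup_stp_prefs: move the Soundtrack Pro preference/cache files listed
# above into a timestamped backup folder under the home folder, so they can
# be restored later if removing them doesn't fix the crash. Prints the
# backup folder path. STP_HOME defaults to $HOME.
backup_stp_prefs() {
  home="${STP_HOME:-$HOME}"
  backup="$home/stp-prefs-backup-$(date +%Y%m%d%H%M%S)"
  mkdir -p "$backup"
  for f in \
    "Library/Application Support/Soundtrack Pro/EffectsCache.plist" \
    "Library/Application Support/Soundtrack Pro/Layouts/Default.moduleLayout" \
    "Library/Preferences/com.apple.soundtrackpro.plist"
  do
    [ -f "$home/$f" ] && mv "$home/$f" "$backup/$(basename "$f")"
  done
  echo "$backup"
}
```

Quit Soundtrack Pro first, run `backup_stp_prefs`, then relaunch; if the crash persists, move the files back from the printed folder.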
