WRT160NL: copying big files to storage will not succeed

When I copy a big file, such as a 7 GB ISO image, to the hard disk connected to my router, the connection breaks down in the middle of the copying process. I tried two different hard disks with the same result, so I figure it is not the disks themselves. Smaller files are no problem. Is there a timeout setting or something?
Who can help me?

Actually, 7 GB is a large amount of data. However, try changing the wireless channel on the router and check again.
Which security type are you using on the router? WPA2 security is recommended to get the proper speed from the WRT160NL router.

Similar Messages

  • How to remove a hard disk file on storage

    Hello,
    In my process, I would like to remove an additional disk.
    To achieve that task, I have found "Remove VM device". It is a VMware box that I can use to remove any device (including hard disks); I have not found another box to do that.
    But the file on storage is not deleted; only the logical disk on the VM is removed.
    So I was wondering if there could be another way to also remove the file on the storage?
    Does a “dedicated” remove-hard-disk VMware box exist in Tidal versions greater than 2.3.1?
    thank you.
    Best regards,
    Nicolas

    There is no specific "Remove Hard Disk" activity in 2.3.1 or the upcoming 2.3.4.
    Have you tried the PowerCLI command "Remove-HardDisk"? If that works for you, in 2.3.1 you can use a Windows PowerShell activity to run PowerCLI and execute a simple script that runs this command against the VM. Unfortunately, this requires connecting to vCenter directly (not through the adapter) and you will need to supply credentials.
    Or, better, use the "Execute PowerCLI Script" activity in the upcoming 2.3.4 VMware Adapter, where you can run PowerCLI scripts that use your existing VMware vCenter Server target's connection and do not need to supply credentials separately.
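    If you are comfortable going at the vSphere API directly in the meantime, removing the disk and destroying its backing file can be done in a single reconfigure call. A rough sketch using the open-source VI Java ("vijava") bindings; the vCenter URL, credentials, VM name and disk label below are placeholders, and this is not part of Tidal or the adapter:

    import java.net.URL;
    import com.vmware.vim25.*;
    import com.vmware.vim25.mo.*;

    public class RemoveDiskSketch {
        public static void main(String[] args) throws Exception {
            // placeholder connection details
            ServiceInstance si = new ServiceInstance(
                    new URL("https://vcenter.example.com/sdk"), "user", "pass", true);
            VirtualMachine vm = (VirtualMachine) new InventoryNavigator(si.getRootFolder())
                    .searchManagedEntity("VirtualMachine", "myVM");

            for (VirtualDevice dev : vm.getConfig().getHardware().getDevice()) {
                if (dev instanceof VirtualDisk
                        && "Hard disk 2".equals(dev.getDeviceInfo().getLabel())) {
                    VirtualDeviceConfigSpec devSpec = new VirtualDeviceConfigSpec();
                    devSpec.setDevice(dev);
                    devSpec.setOperation(VirtualDeviceConfigSpecOperation.remove);
                    // 'destroy' also deletes the backing .vmdk from the datastore:
                    devSpec.setFileOperation(VirtualDeviceConfigSpecFileOperation.destroy);

                    VirtualMachineConfigSpec spec = new VirtualMachineConfigSpec();
                    spec.setDeviceChange(new VirtualDeviceConfigSpec[] { devSpec });
                    vm.reconfigVM_Task(spec).waitForTask();
                }
            }
            si.getServerConnection().logout();
        }
    }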

  • Converting .mov files for storage

    I am recording 10-15 minute .mov files, and each is around 3 GB. I was wondering if there is a way to compress the files (like a .zip or something) that would create a smaller file for storage without diminishing video or audio quality for playback or editing. We will be using these files for research on infant cognition. Sorry if this is a stupid question; I am learning as I go...

    Compression always loses data, but compressing to H.264 with a suitably high bitrate will leave you with quite high-quality video, scalable to full screen. I assume these are DV files, based on the size. You won't keep identical quality that way, so either use an external hard drive and keep them as they are, or, since they must be DV, export from iMovie or Final Cut to a DV tape in a DV camera: an hour of tape stored is identical to the source, and at $5 a tape that is darn cheap storage.

  • Not enough space on my new SSD drive to import my data from time machine backup, how can I import my latest backup minus some big files?

    I just got a new 256 GB SSD for my Mac. I want to import my data from my Time Machine backup, but it's larger than 256 GB, since it used to be on my old optical drive. How can I import my latest backup while keeping some big files out on the external drive?

    Hello Salemr,
    When you restore from a Time Machine backup, you can tell it not to transfer folders like Desktop, Documents, Downloads, Movies, Music, Pictures and Public. Take a look at the article below for the steps to restore from your backup.
    Move your data to a new Mac
    http://support.apple.com/en-us/ht5872
    Regards,
    -Norm G. 

  • SMB File Share Storage Failover Cluster "path is not valid folder path"

    I am having an issue that I am scratching my head over. I have set up a 3-node Hyper-V host cluster and am attempting to use SMB file shares as the shared storage medium. I have been trying to migrate a virtual machine from one node's local storage into the file share using System Center VMM 2012 R2 with UR5, but I keep getting the message "The specified path is not a valid folder path on node2.domain.com" for the storage that VMM automatically selects for placement of the files from the migration.
    What is odd is that, against the same file shares, I can deploy a new virtual machine to the same cluster, and I can also delete a virtual machine from a share without problems. The library I deploy machines from is on the same server as the file shares for the cluster I am deploying to, so maybe that is why those operations succeed. If I try to move a recently deployed VM from one file share to another, the same error comes up.
    The three nodes all reference the same file server (a single file server provides the storage), and the shares were created in VMM, so the file share permissions were set up by VMM and should be sufficient. I have also attempted this both with delegation of CIFS through AD and without (trust to specified computers with CIFS, Hyper-V Replica and Microsoft Virtual Console Service via Kerberos only).
    I am stumped as to what to check next or how to get this working, and I would appreciate any guidance toward a resolution of this problem.

    Jeff,
    Thanks for reporting this.
    There's a known issue with UR5 where VMM gives a wrong error when deploying an HA VM onto an SMB share. In that situation, VMM complains "Invalid folder path".
    To confirm you are hitting the same issue, would you kindly let me know:
    1. Are you trying to migrate a Highly Available VM onto the SMB share?
    2. In the "Migrate Storage Wizard" UI, if you click "Browse", do you see the target file share show up in the "Select Destination Folder" dialog?
    3. By "the three nodes all reference the same file server", I assume you added the target file share to the list of File Share Storage through the cluster's properties UI. If so, please go to that UI and check whether the access status shows green there.
    4. By "local storage", do you mean a local disk, or a shared LUN from an array? I assume it's not "available storage" or a CSV of the cluster. Please double-confirm.
    Note: If we are sure hitting the known issue, I will later re-direct you to a hotfix. But let's make sure it is the issue first.
    Look forward to your reply.

  • How do I open a VERY big file?

    I hope someone can help.
    I did some testing using a LeCroy LT342 in segment mode. Using the LabVIEW driver, I downloaded the data over GPIB and saved it to a spreadsheet file. Unfortunately this created very big files (ranging from 200 MB to 600 MB). I now need to process them, but LabVIEW doesn't like them. I would be very happy to split the files into an individual file for each row (I can do this quite easily), but LabVIEW just sits there when I try to open the file.
    I don't know enough about computers and memory (my spec is a 1.8 GHz Pentium 4 with 384 MB RAM) to figure out whether it will do the job if I just leave it for long enough.
    Has anyone any experience or help they could offer?
    Thanks,
    Phil

    When you open (and read) a file, you usually move it from your hard disk (permanent storage) into RAM. This lets you manipulate it at high speed using fast RAM memory; if you don't have enough RAM to read the whole file, you are forced into virtual memory (swap space on the HD used as "virtual" RAM), which is very slow. Since you only have 384 MB of RAM and want to process huge files (200-600 MB), you could easily and inexpensively upgrade to 1 GB of RAM and see a large speed increase. A better option is to load the file in chunks, looking at some number of lines at a time, processing that amount of data, and repeating until the file is complete. This is more programming, but it lets you use much less RAM at any instant.
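    Since LabVIEW is graphical, here is the chunking idea as a rough text-language sketch in Java (the file name, chunk size and per-row processing are placeholders):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ChunkedRead {
        public static void main(String[] args) throws IOException {
            final int ROWS_PER_CHUNK = 10000; // tune to the RAM you have
            try (BufferedReader in = new BufferedReader(new FileReader("scope_data.txt"))) {
                String line;
                long row = 0;
                while ((line = in.readLine()) != null) {
                    // parse and process one row here instead of loading all 600 MB at once
                    if (++row % ROWS_PER_CHUNK == 0) {
                        // write intermediate results out so memory use stays bounded
                    }
                }
            }
        }
    }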
    Paul
    Paul Falkenstein
    Coleman Technologies Inc.
    CLA, CPI, AIA-Vision
    Labview 4.0- 2013, RT, Vision, FPGA

  • OVM Manager 3.1.1 CLI clone big files error

    Hello,
    We use the OVM Manager CLI for backup purposes. During a clone operation on a big file, after about 10 to 12 minutes we get an error:
    Error Msg: com.oracle.odof.exception.PermissionException: Exchange is not connected
    But the clone operation itself continues and completes successfully.
    So from the client's point of view the operation failed, while on the server it succeeded.
    Clone command is as follows:
    ssh admin@ovmm -p 10000 "clone VirtualDisk id=$VM_FILE_ID target=$VM_FILE_REPO_ID cloneType=Sparse"
    The whole message:
    Command: clone VirtualDisk id=0004fb00001200004d032c969c42095d.img target=0004fb000003000072340b1e2eb70904 cloneType=Sparse
    Status: Failure
    Time: 2013-01-18 14:42:58.955
    Error Msg: com.oracle.odof.exception.PermissionException: Exchange is not connected
    Fri Jan 18 14:42:58 CET 2013
    OVM> Failed to complete command(s), error happened. Connection closed.
    We blamed timeouts, but after setting higher values nothing changed.
    Thank you in advance
    Gregory

    This really sounds like the nasty timeout issue that made its first appearance in the early builds of OVM 3.0.3 and prevented, e.g., the creation of rather large storage repositories over iSCSI when using a standard 1 GBit connection. The command would simply time out on the OVMM after 120 seconds, rather than waiting for the ovs-agent to report back…
    In OVM 3.1.1 (368) this timeout was upped to 10 minutes, but I knew that would only last for a few months… and I told Oracle Support so, but eh… you know… ;)

  • Lots of -8008 errors on iTunes 11 on OS X 10.7.5, and files say iTunes can not play them with this version of QuickTime

    So my iTunes library is too big for local storage, and I'm hosting it on a NAS box. That worked 90% of the time: some errors here and there, but it was tolerable. Since I updated to iTunes 11, most of my downloads fail with -8008 errors and eventually -50. Downloads did fail like this before on the previous version, but rarely, and restarting them would let them download fully. Lately, however, everything gives me -8008 or -50: after the downloads get halfway, they restart from the beginning, then eventually die and error out. It still happens when I redirect the library back to the local drive, too. Moreover, all the downloads now show up without the artwork; I can re-import all day long and they don't work. But if I open a file with QuickTime, the TV show/movie plays just fine, while iTunes gives the ubiquitous message that it cannot play it with this version of QuickTime.
    I've moved the file to desktop to try and re-import - Fails
    Deleted but left file and dragged into iTunes to re-add - Fails
    Copied file to desktop, totally deleted, including "remove file" then dragged into iTunes - Fails
    Copied file to desktop, totally deleted, including "remove file", File > Add to Library - Fails

    Hi Michael,
    I'm glad iTunes is working now!  If you want to locate iTunes plug-ins on your computer, you can find them in the following locations (you may not have the iTunes Plug-ins folder if you do not have plug-ins):
    Mac:
    /Users/username/Library/iTunes/iTunes Plug-ins/
    /Library/iTunes/iTunes Plug-ins/
    One thing I also wanted to mention is that "/Users/username/Library" is hidden in Lion and Mountain Lion. You can access it using these steps:
    OS X Mountain Lion: What is the Library folder?
    The Library folder contains files used by OS X and your apps, including your personal fonts and preferences. The Library folder is hidden. If you need to open it, make sure you are in the Finder, hold down the Option key, and then choose Go > Library.
    You can find the full article here:
    OS X Mountain Lion: What is the Library folder?
    http://support.apple.com/kb/PH11395
    All the best,
    Sheila M.

  • I'd like Time Machine to backup my personal account's files separately from my guest account's files. Both have big files.  What's the best way to do this?  Should I connect 2 external HD to my new iMac, one for each account?


    NeuroBrain wrote:
    Since my new external hard drive has a lot of space, I'm thinking of splitting it between Time Machine and external storage.
    This is a common mistake and I highly advise against it.
    1: Time Machine saves states of changes and thus requires more room on the TM drive than the boot drive it's backing up.
    2: If something happens to the TM drive (loss, theft, a drop, a power surge, etc.), you lose both backups.
    3: The storage partition might need to become portable; with it on the TM drive, you increase the risk that something happens to the TM backup along with the storage drive, due to the increased movement.
    Seriously, have a read,
    Most commonly used backup methods
    It's an ASC User Tip that saves us regulars the trouble of having to repeat ourselves over and over again in the posts, because we tend to forget things too, or are not around sometimes, etc.
    "Plan for the worst and the good will take care of itself" - Donald Trump

  • iPod is recognized by iTunes and the computer, but will not sync songs

    iPod is recognized by iTunes and the computer, but will not sync songs. I have about 1400 songs and just got 40 more; before I bought those 40, my iTunes library was wiped out. I still had the files, so my iTunes library was filled up again. Now, when I connect my iPod to the computer, it is recognized, it says "do not disconnect", and after a few seconds it says "iPod update is complete - OK to disconnect", but it did not upload any songs. How can I make it work?

    If your iPod shows up in Explorer under 'My Computer', try changing the drive letter:
    -- Click on Start => Control Panel => Admin Tools => Computer Management => Disk Management
    -- Right-click on the iPod and select “Change drive letter”
    -- Choose something further along in the alphabet that is unused (usually “M”, “N”, “O” or something similar)
    -- Safely eject the iPod
    -- Reboot the PC
    Then restore your iPod to factory settings.
    http://docs.info.apple.com/article.html?artnum=60983

  • I am trying to get space on an external hard drive which has some old time machine back up files that I do not need but can not eliminate, even by going into the time machine, clicking on the backup file to be eliminated and using the drop down eliminate

    I am trying to get space on an external hard drive which has some old Time Machine backup files that I do not need but cannot eliminate, even by going into Time Machine, clicking on the backup file to be eliminated, and using the drop-down menu with the gear-box symbol to eliminate it.

    I cannot find this 300GB "Backup" in the Finder, only in the Storage info when I check "About This Mac".
    You are probably using Time Machine to back up your MacBook Pro, right? Then the additional 300 GB could be local Time Machine snapshots. Time Machine writes its hourly backups to the free space on your hard disk if the backup drive is temporarily not connected. You do not see these local backups in the Finder, and OS X deletes them when you make a regular backup to Time Machine, or when it needs the space for other data.
    See Pondini's page for more explanation:   What are Local Snapshots?   http://pondini.org/TM/FAQ.html
    I have restarted my computer, but the information remains the same. How do I reclaim the use of the 300GB? Why is it showing up as "Backups" when it used to indicate "Photos"? Are my photos safe on the external drive?
    You have tested the library on the external drive, so your photos are safe there.
    The local Time Machine snapshot probably now contains a backup of the moved library. Try connecting your Time Machine drive and see whether that reduces the size of the local snapshots.

  • Photoshop CC: slow performance on big files

    Hello there!
    I've been using PS CS4 since release and upgraded to CS6 Master Collection last year.
    Since my OS broke down some weeks ago (the RAM broke), I gave Photoshop CC a try. At the same time I moved to new rooms and couldn't get my hands on the DVD of my CS6, resting somewhere at home...
    So I tried CC.
    Right now I'm using it with some big files; file size is between 2 GB and 7.5 GB max (all PSB).
    Photoshop seemed to run fast in the very beginning, but for a few days it has been so unbelievably slow that I can't work properly.
    I wonder if it is caused by the growing files or some other issue with my machine.
    The files contain a large number of layers and masks, nearly 280 layers in the biggest file (mostly with masks).
    The images are 50 x 70 cm at 300 dpi.
    When I try to make some brush strokes on a layer mask in the biggest file, it takes 5-20 seconds for the brush to draw... I couldn't figure out why.
    And it doesn't depend on the brush size as much as you might expect... even very small brushes (2-10 px) show this issue from time to time.
    Also, switching masks on and off (gradient maps, selective color or levels) takes ages to display, sometimes more than 3 or 4 seconds.
    The same goes for panning around in the picture, zooming in and out, or moving layers.
    It's nearly impossible to work on these files in a reasonable time.
    I've never seen this in CS6.
    Now I wonder if there's something wrong with PS or the OS. But I've never worked with files this big before.
    In March I worked on some 5 GB files with 150-200 layers in CS6, and it worked like a charm.
    SystemSpecs:
    I7 3930k (3,8 GHz)
    Asus P9X79 Deluxe
    64GB DDR3 1600Mhz Kingston HyperX
    GTX 570
    2x Corsair Force GT3 SSD
    Wacom Intuos 5 M Touch (I have some issues with the touch from time to time)
    WIN 7 Ultimate 64
    all system updates
    newest drivers
    PS CC
    System and PS are running on the first SSD; scratch is on the second. Both are set to be used by PS.
    79% of the RAM is allocated to PS, the cache level is set to 5 or 6, and history states are set to 70. I also tried different cache tile sizes from 128 K to 1024 K, but it didn't help much.
    When I open the largest file, PS takes 20-23 GB of RAM.
    Any suggestions?
    best,
    moslye

    Is it just drawing that is slow, or is actual computation (image size, rotate, Gaussian Blur, etc.) also slow?
    If the slowdown is in drawing, then the most likely culprit is the video card driver. Update your driver from the GPU maker's website.
    If computation slows down, then something is interfering with Photoshop. We've seen some third-party plugins and some antivirus software cause slowdowns over time.

  • "Storage is not configured" during failover - COH-1467 still not fixed?

    I am running a test program using two cache nodes and one "client JVM", all on the same machine (the first cache node is used as WKA). When I kill one of the cache nodes and then restart it, I get the following exceptions:
    In the surviving cache node:
    2009-01-22 08:01:14.753/112.718 Oracle Coherence GE 3.4.1/407 <Error> (thread=DistributedCache, member=1): An exception (java.lang.ClassCastException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    2009-01-22 08:01:14.753/112.718 Oracle Coherence GE 3.4.1/407 <Error> (thread=DistributedCache, member=1): Terminating DistributedCache due to unhandled exception: java.lang.ClassCastException
    2009-01-22 08:01:14.753/112.718 Oracle Coherence GE 3.4.1/407 <Error> (thread=DistributedCache, member=1):
    java.lang.ClassCastException: java.lang.Long
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    On the restarted cache node:
    2009-01-22 08:01:15.722/2.220 Oracle Coherence GE 3.4.1/407 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from resource "file:/C:/Javaproj/Query/lib/master-coherence-cache-config.xml"
    2009-01-22 08:01:16.565/3.063 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a
    2009-01-22 08:01:16.628/3.126 Oracle Coherence GE 3.4.1/407 <Info> (thread=Cluster, member=n/a): Failed to satisfy the variance: allowed=16, actual=31
    2009-01-22 08:01:16.628/3.126 Oracle Coherence GE 3.4.1/407 <Info> (thread=Cluster, member=n/a): Increasing allowable variance to 17
    2009-01-22 08:01:17.003/3.501 Oracle Coherence GE 3.4.1/407 <Info> (thread=Cluster, member=n/a): This Member(Id=5, Timestamp=2009-01-22 08:01:16.768, Address=138.106.109.121:54101, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22728, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster with senior Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1)
    2009-01-22 08:01:17.065/3.563 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member(Id=4, Timestamp=2009-01-22 08:00:35.566, Address=138.106.109.121:8088, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:11544) joined Cluster with senior member 1
    2009-01-22 08:01:17.081/3.579 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
    2009-01-22 08:01:17.081/3.579 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 1 joined Service InvocationService with senior member 1
    2009-01-22 08:01:17.081/3.579 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCacheNoBackup with senior member 1
    2009-01-22 08:01:17.097/3.595 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 4 joined Service InvocationService with senior member 1
    2009-01-22 08:01:17.097/3.595 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 4 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:17.222/3.720 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): TcpRing: connecting to member 4 using TcpSocket{State=STATE_OPEN, Socket=Socket[addr=/138.106.109.121,port=8088,localport=3609]}
    2009-01-22 08:01:17.253/3.751 Oracle Coherence GE 3.4.1/407 <D5> (thread=Invocation:Management, member=5): Service Management joined the cluster with senior service member 1
    2009-01-22 08:01:17.393/3.891 Oracle Coherence GE 3.4.1/407 <D5> (thread=Invocation:InvocationService, member=5): Service InvocationService joined the cluster with senior service member 1
    2009-01-22 08:01:18.643/5.141 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:19.050/5.548 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:23.659/10.157 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:24.284/10.782 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:28.674/15.172 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:29.503/16.001 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:33.674/20.172 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:33.721/20.219 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:38.674/25.172 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:38.956/25.454 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:43.690/30.188 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:44.174/30.672 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:47.659/34.157 Oracle Coherence GE 3.4.1/407 <Error> (thread=Main Thread, member=5): Error while starting service "InvocationService": com.tangosol.net.RequestTimeoutException: Timeout during service start: ServiceInfo(Id=2, Name=InvocationService, Type=Invocation
    MemberSet=ServiceMemberSet(
    OldestMember=Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948)
    ActualMemberSet=MemberSet(Size=3, BitSetCount=2
    Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948)
    Member(Id=4, Timestamp=2009-01-22 08:00:35.566, Address=138.106.109.121:8088, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:11544)
    Member(Id=5, Timestamp=2009-01-22 08:01:16.768, Address=138.106.109.121:54101, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22728)
    MemberId/ServiceVersion/ServiceJoined/ServiceLeaving
    1/3.1/Thu Jan 22 07:59:27 CET 2009/false,
    4/3.1/Thu Jan 22 08:00:36 CET 2009/false,
    5/3.1/Thu Jan 22 08:01:17 CET 2009/false
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onStartupTimeout(Grid.CDB:6)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:38)
         at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:28)
         at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
         at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:841)
         at com.tangosol.net.DefaultCacheServer.start(DefaultCacheServer.java:140)
         at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:61)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.intellij.rt.execution.application.AppMain.main(AppMain.java:90)
    Exception in thread "Main Thread" com.tangosol.net.RequestTimeoutException: Timeout during service start: ServiceInfo(Id=2, Name=InvocationService, Type=Invocation
    MemberSet=ServiceMemberSet(
    OldestMember=Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948)
    ActualMemberSet=MemberSet(Size=3, BitSetCount=2
    Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948)
    Member(Id=4, Timestamp=2009-01-22 08:00:35.566, Address=138.106.109.121:8088, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:11544)
    Member(Id=5, Timestamp=2009-01-22 08:01:16.768, Address=138.106.109.121:54101, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22728)
    MemberId/ServiceVersion/ServiceJoined/ServiceLeaving
    1/3.1/Thu Jan 22 07:59:27 CET 2009/false,
    4/3.1/Thu Jan 22 08:00:36 CET 2009/false,
    5/3.1/Thu Jan 22 08:01:17 CET 2009/false
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onStartupTimeout(Grid.CDB:6)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:38)
         at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:28)
         at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
         at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:841)
         at com.tangosol.net.DefaultCacheServer.start(DefaultCacheServer.java:140)
         at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:61)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.intellij.rt.execution.application.AppMain.main(AppMain.java:90)
    2009-01-22 08:01:47.659/34.157 Oracle Coherence GE 3.4.1/407 <Error> (thread=Invocation:InvocationService, member=5): validatePolls: This service timed-out due to unanswered handshake request. Manual intervention is required to stop the members that have not responded to this Poll
    PollId=1, active
    InitTimeMillis=1232607677393
    Service=InvocationService (2)
    RespondedMemberSet=[1]
    LeftMemberSet=[]
    RemainingMemberSet=[4]
    2009-01-22 08:01:47.659/34.157 Oracle Coherence GE 3.4.1/407 <D5> (thread=Invocation:InvocationService, member=5): Service InvocationService left the cluster
    2009-01-22 08:01:47.659/34.157 Oracle Coherence GE 3.4.1/407 <D4> (thread=ShutdownHook, member=5): ShutdownHook: stopping cluster node
    Process finished with exit code 1
    On the client JVM:
    2009-01-22 08:01:14.815/41.265 Oracle Coherence GE 3.4.1/407 <D5> (thread=Invocation:InvocationService, member=4): Repeating AggregateFilterRequest due to the re-distribution of PartitionSet[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127]
    java.lang.RuntimeException: Storage is not configured
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.onMissingStorage(DistributedCache.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.sendPartitionedRequest(DistributedCache.CDB:31)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.aggregate(DistributedCache.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.aggregate(DistributedCache.CDB:52)
         at com.tangosol.coherence.component.util.SafeNamedCache.aggregate(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.NearCache.aggregate(NearCache.java:453)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.typeSearch(ValueQueryInvocable.java:260)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.queryStringFirstSearch(ValueQueryInvocable.java:300)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.run(ValueQueryInvocable.java:135)
         at com.scania.oas.coherence.invocables.InvocableWrapper.run(InvocableWrapper.java:54)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService.onInvocationRequest(InvocationService.CDB:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService$InvocationRequest.onReceived(InvocationService.CDB:40)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    java.lang.RuntimeException: Storage is not configured
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.onMissingStorage(DistributedCache.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.sendPartitionedRequest(DistributedCache.CDB:31)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.aggregate(DistributedCache.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.aggregate(DistributedCache.CDB:52)
         at com.tangosol.coherence.component.util.SafeNamedCache.aggregate(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.NearCache.aggregate(NearCache.java:453)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.queryStringSearch(ValueQueryInvocable.java:268)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.queryStringFirstSearch(ValueQueryInvocable.java:297)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.run(ValueQueryInvocable.java:135)
         at com.scania.oas.coherence.invocables.InvocableWrapper.run(InvocableWrapper.java:54)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService.onInvocationRequest(InvocationService.CDB:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService$InvocationRequest.onReceived(InvocationService.CDB:40)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    I even tried to re-write the "run method" in the invocable so that, in a loop, it performed a delay and then re-tried its calculations whenever it received a RuntimeException with the text "Storage is not configured", retrieving a new named cache each time, but this did not help - it never seemed to recover...
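    A simplified sketch of that retry wrapper (the class name, cache handling and retry limits here are illustrative, not my exact code):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.InvocableMap;

    public class RetrySketch {
        // Retry the aggregation while the cluster reports missing storage.
        static Object aggregateWithRetry(String cacheName, Filter filter,
                                         InvocableMap.EntryAggregator agg)
                throws InterruptedException {
            for (int attempt = 0; attempt < 10; attempt++) {
                try {
                    // fetch a fresh cache handle on every attempt
                    NamedCache cache = CacheFactory.getCache(cacheName);
                    return cache.aggregate(filter, agg);
                } catch (RuntimeException e) {
                    if (!"Storage is not configured".equals(e.getMessage())) {
                        throw e; // unrelated failure: rethrow
                    }
                    Thread.sleep(1000L); // wait for partition re-distribution, then retry
                }
            }
            throw new RuntimeException("Storage never became available");
        }
    }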
    Since I don't see any of my application classes in the "class cast" trace-back, I assume it is a Coherence-internal problem - or can you think of some user programming error that could cause it? I am, by the way, not using any long or Long in my application...
    Best Regards
    Magnus

    Hi Magnus,
    The log you provided seems to indicate that the problem was caused by the deserialization of the “AggregateFilterRequest” message. The only explanation we have is that you are using a custom Filter that has asymmetrical serialization/deserialization routines, causing this failure. Could you please send us the corresponding client code?
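    To illustrate what "symmetrical" means here: with ExternalizableLite, for example, readExternal must read exactly the fields that writeExternal writes, in the same order and with the same types. A generic sketch (the class and its fields are made up, not taken from your code):

    import com.tangosol.io.ExternalizableLite;
    import com.tangosol.util.ExternalizableHelper;
    import com.tangosol.util.Filter;
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    public class ExampleFilter implements Filter, ExternalizableLite {
        private String type;  // illustrative state
        private long   minId;

        public ExampleFilter() {} // no-arg constructor required for deserialization

        public boolean evaluate(Object o) {
            return true; // real matching logic goes here
        }

        public void readExternal(DataInput in) throws IOException {
            // must mirror writeExternal exactly: same fields, same order, same types
            type  = ExternalizableHelper.readSafeUTF(in);
            minId = in.readLong();
        }

        public void writeExternal(DataOutput out) throws IOException {
            ExternalizableHelper.writeSafeUTF(out, type);
            out.writeLong(minId);
        }
    }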
    Meanwhile, we will open a JIRA issue, to make sure that Coherence handles this kind of error more gracefully.
    -David

  • Adobe Photoshop CS3 collapses each time it loads a big file

    I was loading a big file of photos from iMac iPhoto into Adobe Photoshop CS3 and it kept collapsing (crashing), yet each time I reopen Photoshop it loads the photos again and collapses again. Is there a way to stop this cycle?

    I don't think that too many users here actually use iPhoto (even the Mac users).
    However, Google is your friend. A quick search came up with some other, non-Adobe forum entries:
    .... but the golden rule of iPhoto is NEVER EVER MESS WITH THE IPHOTO LIBRARY FROM OUTSIDE IPHOTO. In other words, anything you might want to do with the pictures in iPhoto can be done from *within the program,* and that is the only safe way to work with it. Don't go messing around inside the "package" that is the iPhoto Library unless you are REALLY keen to lose data, because that is exactly what will happen.
    .....everything you want to do to a photo in iPhoto can be handled from *within the program.* This INCLUDES using a third-party editor, and it saves a lot of time and disk space if you do it this way:
    1. In iPhoto's preferences, specify a third-party editor (let's say Photoshop) to be used for editing photos.
    2. Now, when you right-click (or control-click) a photo in iPhoto, you have two options: Edit in Full Screen (ie iPhoto's own editor) or Edit with External Editor. Choose the latter.
    3. Photoshop will open, then the photo you selected will automatically open in PS. Do your editing, and when you save (not save as), PS "hands" the modified photo back to iPhoto, which treats it exactly the same as if you'd done that stuff in iPhoto's own editor and updates the thumbnail to reflect your changes. Best of all, your unmodified original remains untouched so you can always go back to it if necessary.

  • Error in loading big file

    Hi All,
    I have an application on WL 8.1 SP3; the database is SQL Server 2000. I use the code below to load a local file into the database. The data type in the database is image.
    // 'conn' is an open JDBC Connection. The original post wrapped 'is' in a
    // stream class (via setEmbeddedStream) whose name was garbled, so the
    // standard JDBC streaming call is shown here instead.
    PreparedStatement pStatement =
        conn.prepareStatement("insert into file_content(content) values(?)");
    pStatement.setBinaryStream(1, is, -1); // -1 length kept from the original post
    pStatement.executeUpdate();
    pStatement.close();
    pStatement = null;
    Here 'is' is an InputStream over a local file, and the SQL statement is
    insert into file_content(content) values(?)
    This works fine for files smaller than 150 MB, but for big files (>150 MB) it doesn't work, and I get the error message below:
    <Feb 11, 2005 12:00:41 PM PST> <Notice> <EJB> <BEA-010014> <Error occurred while attempting to rollback transaction: javax.transaction.SystemException: Heuristic hazard: (weblogic.jdbc.wrapper.JTSXAResourceImpl, HeuristicHazard, (javax.transaction.xa.XAException: [BEA][SQLServer JDBC Driver]Object has been closed.))
    javax.transaction.SystemException: Heuristic hazard: (weblogic.jdbc.wrapper.JTSXAResourceImpl, HeuristicHazard, (javax.transaction.xa.XAException: [BEA][SQLServer JDBC Driver]Object has been closed.))
    at weblogic.transaction.internal.ServerTransactionImpl.internalRollback(ServerTransactionImpl.java:396)
    at weblogic.transaction.internal.ServerTransactionImpl.rollback(ServerTransactionImpl.java:362)
    Can anybody help? Thanks in advance.

    Fred Wang wrote: (question quoted above)
    I already answered this as to the cause in the MS newsgroups... It's the DBMS choking so badly on the size of your image file that it actually kills the connection. Note that the DBMS has to save the whole image to the log as well as to the DBMS table. The fundamental response is to get DBA help to configure the DBMS to be able to do what you want. If you want to and can, you could split your image column into multiple image columns, split your data into 100 MB chunks, insert an empty row, update it column by column, and then concatenate the data again in the client when you read it back - but it's much better to post to the MS server newsgroups for ideas. A rough sketch of that chunked idea follows below.
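    For illustration only: the table and column names here are made up, the id handling is hard-coded, and a real version would loop skip() until fully positioned.

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class ChunkedImageInsert {
        static final int CHUNK = 100 * 1024 * 1024; // 100 MB per image column

        // Insert an empty row first, then fill content1..contentN column by column.
        static void insertChunked(Connection conn, String path, long length) throws Exception {
            conn.setAutoCommit(false);
            PreparedStatement ins = conn.prepareStatement(
                    "insert into file_content_chunked (id) values (?)");
            ins.setInt(1, 1);
            ins.executeUpdate();
            ins.close();

            int col = 1;
            for (long off = 0; off < length; off += CHUNK, col++) {
                InputStream is = new FileInputStream(path);
                is.skip(off); // position at this chunk (simplified)
                int len = (int) Math.min(CHUNK, length - off);
                PreparedStatement upd = conn.prepareStatement(
                        "update file_content_chunked set content" + col + " = ? where id = ?");
                upd.setBinaryStream(1, is, len); // send only this chunk
                upd.setInt(2, 1);
                upd.executeUpdate();
                upd.close();
                is.close();
            }
            conn.commit();
        }
    }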
    Joe
