Is it a limitation of Named Cache Storage? Fails for large volumes

I debugged the code that loads data from the database into the cache, as described in the posting: Pre-loading the Cache from Database during application start-up.
The code loads 869 rows from the database into a java.util.Map using Hibernate's loadAll() method. Everything is fine up to that point.
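For reference, a minimal sketch of what that pre-load step does (the class name, cache name and entity query are my own placeholders here, not the actual project code):

    // Rough outline only: load every row via Hibernate, buffer the entries in a
    // java.util.Map keyed by the entity identifier, then bulk-put into the cache.
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ContractCachePreLoader {
        private final SessionFactory sessionFactory;

        public ContractCachePreLoader(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public void preload() {
            NamedCache contactCache = CacheFactory.getCache("ContactCache");
            Session session = sessionFactory.openSession();
            try {
                List rows = session.createQuery("from Contract").list();
                Map buffer = new HashMap();
                for (Object row : rows) {
                    buffer.put(session.getIdentifier(row), row);
                }
                // the bulk put described in the next paragraph; this is where it hangs
                contactCache.putAll(buffer);
            } finally {
                session.close();
            }
        }
    }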
The next step is to putAll the entries into the cache, i.e. contactCache.putAll(buffer). This is where it hangs for about a minute, and I see org.eclipse.jdi.TimeoutException followed by the exception stack trace below.
IN DEFAULT CACHE SERVER JVM
2009-10-30 10:53:44.076/1342.849 Oracle Coherence GE 3.5.2/463 <Warning> (thread=PacketPublisher, member=1): Experienced a 1390 ms communication delay (probable remote GC) with Member(Id=2, Timestamp=2009-10-30 10:31:54.697, Address=165.137.250.122:8089, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:4856); 23 packets rescheduled, PauseRate=0.0010, Threshold=2080
2009-10-30 11:06:10.060/2088.833 Oracle Coherence GE 3.5.2/463 <Error> (thread=Cluster, member=1): Attempting recovery (due to soft timeout) of Guard{Daemon=DistributedCache}
2009-10-30 11:06:12.430/2091.203 Oracle Coherence GE 3.5.2/463 <Error> (thread=Cluster, member=1): Terminating guarded execution (due to hard timeout) of Guard{Daemon=DistributedCache}
2009-10-30 11:06:15.657/2094.430 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
2009-10-30 11:06:15.954/2094.727 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
Coherence <Error>: Halting this cluster node due to unrecoverable service failure
2009-10-30 11:06:16.671/2095.444 Oracle Coherence GE 3.5.2/463 <Error> (thread=Termination Thread, member=1): Full Thread Dump
Thread[Cluster|Member(Id=1, Timestamp=2009-10-30 10:31:31.621, Address=165.137.250.122:8088, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:5380),5,Cluster]
     java.lang.Object.wait(Native Method)
     com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
     com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
     java.lang.Thread.run(Thread.java:595)
Thread[(Code Generation Thread 1),5,system]
Thread[(Signal Handler),5,system]
Thread[TcpRingListener,6,Cluster]
     java.net.PlainSocketImpl.socketAccept(Native Method)
     java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
     java.net.ServerSocket.implAccept(ServerSocket.java:450)
     java.net.ServerSocket.accept(ServerSocket.java:421)
     com.tangosol.coherence.component.net.socket.TcpSocketAccepter.accept(TcpSocketAccepter.CDB:18)
     com.tangosol.coherence.component.util.daemon.TcpRingListener.acceptConnection(TcpRingListener.CDB:10)
     com.tangosol.coherence.component.util.daemon.TcpRingListener.onNotify(TcpRingListener.CDB:9)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
     java.lang.Thread.run(Thread.java:595)
Thread[PacketSpeaker,8,Cluster]
     java.lang.Object.wait(Native Method)
     com.tangosol.coherence.component.util.queue.ConcurrentQueue.waitForEntry(ConcurrentQueue.CDB:16)
     com.tangosol.coherence.component.util.queue.ConcurrentQueue.remove(ConcurrentQueue.CDB:7)
     com.tangosol.coherence.component.util.Queue.remove(Queue.CDB:1)
     com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketSpeaker.onNotify(PacketSpeaker.CDB:62)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
     java.lang.Thread.run(Thread.java:595)
Thread[PacketPublisher,6,Cluster]
     java.lang.Object.wait(Native Method)
     com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
     com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.onWait(PacketPublisher.CDB:2)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
     java.lang.Thread.run(Thread.java:595)
Thread[(VM Periodic Task),10,system]
Thread[(Sensor Event Thread),5,system]
Thread[(Attach Listener),5,system]
Thread[(GC Main Thread),5,system]
Thread[(Code Optimization Thread 1),5,system]
Thread[Invocation:Management:EventDispatcher,5,Cluster]
     java.lang.Object.wait(Native Method)
     com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
     com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onWait(Service.CDB:7)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
     java.lang.Thread.run(Thread.java:595)
Thread[Main Thread,5,main]
     java.lang.Object.wait(Native Method)
     com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:79)
Thread[Logger@9265725 3.5.2/463,3,main]
     java.lang.Object.wait(Native Method)
     com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
     java.lang.Thread.run(Thread.java:595)
Thread[Invocation:Management,5,Cluster]
     java.lang.Object.wait(Native Method)
     com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
     com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
     java.lang.Thread.run(Thread.java:595)
Thread[Reference Handler,10,system]
     java.lang.ref.Reference.getPending(Native Method)
     java.lang.ref.Reference.access$000(Unknown Source)
     java.lang.ref.Reference$ReferenceHandler.run(Unknown Source)
Thread[PacketListenerN,8,Cluster]
     java.net.PlainDatagramSocketImpl.receive0(Native Method)
     java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
     java.net.DatagramSocket.receive(DatagramSocket.java:712)
     com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
     com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
     com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
     java.lang.Thread.run(Thread.java:595)
Thread[Finalizer,8,system]
     java.lang.Thread.run(Thread.java:595)
Thread[DistributedCache,5,Cluster]
     com.tangosol.util.Binary.<init>(Binary.java:87)
     com.tangosol.util.Binary.<init>(Binary.java:61)
     com.tangosol.io.AbstractByteArrayReadBuffer.toBinary(AbstractByteArrayReadBuffer.java:152)
     com.tangosol.io.pof.PofBufferReader.readBinary(PofBufferReader.java:3412)
     com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:2854)
     com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2600)
     com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
     com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
     com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
     com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.MapRequest.read(MapRequest.CDB:24)
     com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:123)
     com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
     java.lang.Thread.run(Thread.java:595)
Thread[PacketReceiver,7,Cluster]
     java.lang.Object.wait(Native Method)
     com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
     com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketReceiver.onWait(PacketReceiver.CDB:2)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
     java.lang.Thread.run(Thread.java:595)
Thread[PacketListener1,8,Cluster]
     java.net.PlainDatagramSocketImpl.receive0(Native Method)
     java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
     java.net.DatagramSocket.receive(DatagramSocket.java:712)
     com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
     com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
     com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
     com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
     java.lang.Thread.run(Thread.java:595)
Thread[Termination Thread,5,Cluster]
     java.lang.Thread.dumpThreads(Native Method)
     java.lang.Thread.getAllStackTraces(Thread.java:1434)
     sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
     sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
     java.lang.reflect.Method.invoke(Method.java:585)
     com.tangosol.net.GuardSupport.logStackTraces(GuardSupport.java:791)
     com.tangosol.coherence.component.net.Cluster.onServiceFailed(Cluster.CDB:5)
     com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$Guard.terminate(Grid.CDB:17)
     com.tangosol.net.GuardSupport$2.run(GuardSupport.java:652)
     java.lang.Thread.run(Thread.java:595)
2009-10-30 11:06:20.958/2099.731 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
2009-10-30 11:06:20.958/2099.731 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): TcpRing: disconnected from member 2 due to a kill request
2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Member 2 left service Management with senior member 1
2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Member 2 left service DistributedCache with senior member 1
2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2009-10-30 11:07:17.682, Address=165.137.250.122:8089, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:4856) left Cluster with senior member 1
2009-10-30 11:07:17.682/2156.455 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Service guardian is 51795ms late, indicating that this JVM may be running slowly or experienced a long GC
2009-10-30 11:07:18.073/2156.846 Oracle Coherence GE 3.5.2/463 <Info> (thread=PacketListenerN, member=1): Scheduled senior member heartbeat is overdue; rejoining multicast group.
2009-10-30 11:07:22.696/2161.469 Oracle Coherence GE 3.5.2/463 <Error> (thread=Cluster, member=1): Attempting recovery (due to soft timeout 26277ms ago) of Guard{Daemon=TcpRingListener}
2009-10-30 11:07:22.696/2161.469 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
2009-10-30 11:07:22.696/2161.469 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
2009-10-30 11:07:26.835/2165.608 Oracle Coherence GE 3.5.2/463 <Info> (thread=PacketListenerN, member=1): Scheduled senior member heartbeat is overdue; rejoining multicast group.
2009-10-30 11:07:27.709/2166.482 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
2009-10-30 11:07:27.709/2166.482 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
2009-10-30 11:07:32.723/2171.496 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
2009-10-30 11:07:32.723/2171.496 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
2009-10-30 11:07:42.796/2181.569 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
2009-10-30 11:07:42.843/2181.616 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
2009-10-30 11:07:42.890/2181.663 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Service guardian is 10089ms late, indicating that this JVM may be running slowly or experienced a long GC
2009-10-30 11:07:42.968/2181.741 Oracle Coherence GE 3.5.2/463 <Info> (thread=PacketListenerN, member=1): Scheduled senior member heartbeat is overdue; rejoining multicast group.
2009-10-30 11:07:47.857/2186.630 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
2009-10-30 11:07:47.935/2186.708 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
2009-10-30 11:07:50.527/2189.300 Oracle Coherence GE 3.5.2/463 <Info> (thread=PacketListenerN, member=1): Scheduled senior member heartbeat is overdue; rejoining multicast group.
2009-10-30 11:07:52.948/2191.721 Oracle Coherence GE 3.5.2/463 <Info> (thread=Main Thread, member=1): Restarting Service: DistributedCache
2009-10-30 11:07:52.948/2191.721 Oracle Coherence GE 3.5.2/463 <Error> (thread=Main Thread, member=1): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=129, BackupPartitions=128}
- SQL Error: 1400, SQLState: 23000
- ORA-01400: cannot insert NULL into ("CTXOWNER"."CTX_TRM_TXTS"."CTX_TRM_TXT_ID")
- SQL Error: 1400, SQLState: 23000
- ORA-01400: cannot insert NULL into ("CTXOWNER"."CTX_TRM_TXTS"."CTX_TRM_TXT_ID")
Coherence <Error>: Halting this cluster node due to unrecoverable service failure

Now I do see it complaining that it cannot insert null values, but I am wondering how it was able to load the data from the database into the java.util.Map in the first place. All that remains is dumping the entries from that Map into the Coherence cache, which is just another Map.
IN CACHE FACTORY VM
Map (com.comcast.customer.contract.contract.hibernate.Term):
2009-10-30 11:06:46.076/2095.134 Oracle Coherence GE 3.5.2/463 <Warning> (thread=PacketPublisher, member=2): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=3, Timestamp=2009-10-30 10:52:20.758, Address=165.137.250.122:8090, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:2756)
by MemberSet(Size=1, BitSetCount=2
  Member(Id=1, Timestamp=2009-10-30 10:31:31.621, Address=165.137.250.122:8088, MachineId=54906, Location=site:cable.comcast.com,machine:PACDCL-CJWWND1b,process:5380)
Map (com.comcast.customer.contract.contract.hibernate.Term):
Map (com.comcast.customer.contract.contract.hibernate.Term): 2009-10-30 11:06:46.887/2095.945 Oracle Coherence GE 3.5.2/463 <Error> (thread=PacketPublisher, member=2): This node appears to have become disconnected from the rest of the cluster containing 2 nodes. All departure confirmation requests went unanswered.
Stopping cluster service.
Map (com.comcast.customer.contract.contract.hibernate.Term): 2009-10-30 11:06:48.773/2097.831 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=2): Service Cluster left the cluster
2009-10-30 11:06:49.257/2098.315 Oracle Coherence GE 3.5.2/463 <D5> (thread=Invocation:Management, member=2): Service Management left the cluster
2009-10-30 11:06:49.257/2098.315 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Service DistributedCache left the cluster

IN JUNIT TEST VM
Coherence <Error>: Halting this cluster node due to unrecoverable service failure

Please note that I am running the Default Cache Server VM, the Cache Factory VM, and the Eclipse JUnit Test VM on the same machine.
Also note that the same piece of code works absolutely fine when I load another object type that returns only 154 rows.

Thanks for the quick response.
> So using the local scheme, you place 869 objects into that cache, correct? Does that work?
I didn't try the local scheme. I did try the <read-write-backing-map> scheme; since that was giving problems, I reduced the size to 100 and changed to local-scheme.
If you would like me to try local-scheme I will, but it won't prove much, since we need a Hibernate CacheStore to handle the writes. For clarity, what I mean by that is sketched below.
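(This is only a rough sketch under my own assumptions; the class name and entity name are placeholders, not our actual code.)

    import com.tangosol.net.cache.CacheStore;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import java.io.Serializable;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    public class HibernateContractCacheStore implements CacheStore {
        private final SessionFactory sessionFactory;

        public HibernateContractCacheStore(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        // CacheLoader side: read-through on a cache miss
        public Object load(Object key) {
            Session session = sessionFactory.openSession();
            try {
                return session.get("Contract", (Serializable) key);
            } finally {
                session.close();
            }
        }

        public Map loadAll(Collection keys) {
            Map result = new HashMap();
            for (Iterator it = keys.iterator(); it.hasNext(); ) {
                Object key = it.next();
                Object value = load(key);
                if (value != null) {
                    result.put(key, value);
                }
            }
            return result;
        }

        // CacheStore side: write-through on put
        public void store(Object key, Object value) {
            Session session = sessionFactory.openSession();
            try {
                session.beginTransaction();
                session.saveOrUpdate(value);
                session.getTransaction().commit();
            } finally {
                session.close();
            }
        }

        public void storeAll(Map entries) {
            for (Iterator it = entries.entrySet().iterator(); it.hasNext(); ) {
                Map.Entry entry = (Map.Entry) it.next();
                store(entry.getKey(), entry.getValue());
            }
        }

        public void erase(Object key) {
            Session session = sessionFactory.openSession();
            try {
                session.beginTransaction();
                Object value = session.get("Contract", (Serializable) key);
                if (value != null) {
                    session.delete(value);
                }
                session.getTransaction().commit();
            } finally {
                session.close();
            }
        }

        public void eraseAll(Collection keys) {
            for (Iterator it = keys.iterator(); it.hasNext(); ) {
                erase(it.next());
            }
        }
    }

In the cache configuration, a class like this would be referenced from the <cachestore-scheme> element inside the <read-write-backing-map-scheme>.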
> Can you explain what the remaining issue is? (What part is failing?)
There are several issues, and I am really striving to make it work :)
Here is the list:
- revert to the <read-write-backing-map> scheme so that I can pre-populate the cache from the database and subsequent reads and writes hit the cache instead of the database
- pre-populate the cache during application start-up (we use Spring 2.5 and Hibernate 3.2; a minimal start-up hook is sketched at the end of this post)
- the queryContract(contract) method behaves like a search screen, i.e. it takes a sample contract object as an argument with some attributes populated. I am using the Filter API to return the List of Contract objects matching the search parameters of the sample contract, as follows:
Filter filter = new EqualsFilter(IdentityExtractor.INSTANCE, contract);
Set setEntries = contractCache.entrySet(filter);
The above code expects all attributes of the sample contract object to be populated; if they are not, it throws a NullPointerException. For example, if the date attribute is null, the NullPointerException is thrown at the following line (see the sketch after this list for one way around it):
writer.writeLong(2, this.date.getTimeInMillis());
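What I am considering (rough sketches based on my own assumptions about the getter names, not tested code): guard the optional fields during POF serialization, and build the filter only from the attributes that are actually populated on the sample contract, instead of comparing the whole object with IdentityExtractor.

    // (a) Null-safe POF write for the optional date attribute
    // (assumes com.tangosol.io.pof.PofWriter and java.io.IOException are imported)
    public void writeExternal(PofWriter writer) throws IOException {
        // write a sentinel instead of dereferencing a null Calendar
        writer.writeLong(2, this.date == null ? -1L : this.date.getTimeInMillis());
        // ... remaining attributes ...
    }

    // (b) Query-by-example: one EqualsFilter per populated attribute
    // (assumes com.tangosol.util.Filter, com.tangosol.util.filter.*,
    //  com.tangosol.util.extractor.ReflectionExtractor and java.util.* are imported)
    List filters = new ArrayList();
    if (contract.getContractId() != null) {
        filters.add(new EqualsFilter(new ReflectionExtractor("getContractId"),
                contract.getContractId()));
    }
    if (contract.getDate() != null) {
        filters.add(new EqualsFilter(new ReflectionExtractor("getDate"),
                contract.getDate()));
    }
    Filter filter = filters.isEmpty()
            ? (Filter) AlwaysFilter.INSTANCE
            : new AllFilter((Filter[]) filters.toArray(new Filter[filters.size()]));
    Set setEntries = contractCache.entrySet(filter);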
I greatly appreciate the inventor of the Tangosol Coherence product responding to my queries on the forum. Hopefully with his help I will be able to resolve these issues :)
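For the start-up hook mentioned in the list above, this is roughly what I have in mind (again just a sketch with placeholder names; it assumes the pre-loader class sketched near the top of this post):

    import org.springframework.context.ApplicationEvent;
    import org.springframework.context.ApplicationListener;
    import org.springframework.context.event.ContextRefreshedEvent;

    // Runs the cache pre-load once the Spring 2.5 context has finished starting.
    public class CachePreloadOnStartup implements ApplicationListener {
        private final ContractCachePreLoader loader;

        public CachePreloadOnStartup(ContractCachePreLoader loader) {
            this.loader = loader;
        }

        public void onApplicationEvent(ApplicationEvent event) {
            if (event instanceof ContextRefreshedEvent) {
                loader.preload();
            }
        }
    }

Declared as an ordinary bean in the application context, it fires after all other beans, including the Hibernate SessionFactory, have been initialized.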

Similar Messages

  • Storage solution for large scale video editing?

    I need to edit my way through over 1 TB of captured 1080p HD video footage. I have a more than competent Mac Pro to do the job, but the internal drives are quite full, so rendering will be an issue. The footage I need to edit is currently on a Western Digital My Book.
    What is a good reasonably priced system that I can use to edit through and back up to?
    I'm thinking about 3-4TB size.
    I've been looking at the Drobo but it seems that it won't be able to pull the clips through to the timeline fast enough.
    Many Thanks

    I'd suggest looking at the OWC Mercury Rack Mount or Desktop disk systems. I'd get a bare unit and put in my own drives - Seagate Barracuda 7200.11, 7200.12 or ES.2 drives or WD Caviar SE 16 or Caviar Black drives. I'd stay away from any green or low-power drives at this point. You can do this for less than about £500.
    You will probably want to put an eSATA card in your MacPro so you can connect via eSATA rather than FW800.
    Personally, I'd rather myself use a JBOD unit w/ 4-1TB drives inside but most of the multi-unit enclosures these days are RAID only. One model of the OWC Rack series enables 4xJBOD via eSATA.
    I would also get my footage off that MyBook ASAP.

  • How do I change the cache storage directory?

    I want to access, and change, my cache storage directoy

    1. Enter "about:config" in the Firefox's location address field.
    2. Accept the warning about Dragons and promise to be careful.
    3. Type "cache.disk.parent" in the Filter field.
    4. Double click the "browser.cache.disk.parent_directory" entry.
    5. In the "Enter string value" dialog enter your new cache directory path. For example: "E:\Temp\Firefox_Temporary_Cache". Click OK.
    6 . Restart Firefox
    7. Check your new cache location by repeating steps 1 through 3.

  • What's the best storage solution for a large iLife? RAID? NAS?

    I'm looking for an affordable RAID storage solution for my Time Machine, iTunes library, iMovie videos, and iPhoto library. To this point I've been doing a hodgepodge of external hard drives without the safety of redundancy, and I've finally been bitten by HD failures. So I'm trying to determine what would be the best recommendation for my scenario: a small home office for my wife's business (just her), and me with all our media. I currently have a mid-2010 Mac Mini (no Thunderbolt); she has an aging 2007 iMac and a 2006 MacBook Pro (funny that they're all about the same benchmark speed). We have an AppleTV (original), an iPad 2, and two iPhone 4Ss.
    1st Question: Is it better to get a RAID and connect it to my Airport Extreme Base Station USB port as a shared disk? OR to connect it directly to my Mac Mini and share through Home Sharing? OR Should I go with a NAS RAID?
    2nd Question: Simple is Better. Should I go with a Mac Mini Server and connect drives to it? (convert my Mac Mini into a server) or Should I just get one of those nice all-in-one 4-bay RAID drive solutions that I can expand with?
    Requirements:
    1. Expandable and Upgradeable. I don't want something limited to 2TB drives, but as drives get bigger and cheaper I want to easily throw one in w/o concerns.
    2. Simple integration with Time Machine and my iLife: iTunes, iMovie, iPhoto. If iTune's Home Sharing feature is currently the best way of using my media across multiple devices then why mess with it? I see "DLNA certified" storage on some devices and wonder if that would just add another layer of complexity I don't need. One more piece to make compatible.
    3. Inexpensive. I totally believe in the "You Get What You Pay For" concept. But I also realize sometimes I'm buying marketing, not product. I imagine that to start, I'm going to want a diskless system (because of $$$) to throw all my drives into, and then upgrade bigger drives as my data and funds grow.
    4. Security. I don't know if its practical, but I like the idea of being able to pop two drives out and put them in my safe and then pop them back in once a week for the backup/mirroring. I like this idea because I'm concerned that onsite backup is not always the safest. Unfortunately those cloud based services aren't designed for Terabytes of raw family video, or an entire media library that isn't wholey from the iTunes Store. I can't be the only one facing this challenge. Surely there's an affordable way to keep a safe backup for the average Joe. But what is it?
    5. Not WD. I've had bad experiences with Western Digital drives, and I loathe their consumer packaged backup software that comes preloaded on their external drives. They are what I meant when I say you get what you pay for. Prettily packed garbage.
    6. Relatively Fast. I have put all my media on an external drive before (back when it fit on one drive) and there's noticeable spool-up hang time. Thunderbolt's nice and all, but so new that its not easily available across devices, nor is it cheap. eSata is not really an option. I love Firewire but I'm getting the feeling that Apple has made it the red-headed step-child of connections. USB 3.0 looks decent, but like eSata, Apple doesn't recognize it exists. Where does that leave us? Considering this dilemma I really liked Seagate's GoFlex external drives because it meant I could always buy a new base and still be compatible. But that only works with single drives. And as impressive as Seagate is, we can't expect them to consistently double drive sizes every two years like they have been -cool as that may be.
    So help me out without getting too technical. What's the best setup? Is it Drobo? Thecus? ReadyNAS? Seagate's BlackArmor? Or something else entirely?
    All comments are appreciated. Thanks in advance.

    I am currently using a WD 2TB Thunderbolt hard drive for my iTunes library, which I love and which works great. It is connected directly to my MacBook Pro. I am running low on space and thinking of buying a bigger hard drive. My question is: should I buy a 6TB Thunderbolt HD or a 6TB NAS drive to work solely for iTunes? I have Home Sharing enabled for my Apple TV.
    I also have my time capsule connected just as back up only.   

  • Storage issues for Proxies using Final Cut Server

    Hi there,
    we have a fairly high amount of material that is just being put into Final Cut Server to be archived again.
    I don't mind the Xserve being busy creating these proxy files, but they tie up too much space!
    (Maths: 500 GB / 40 h of DVCAM footage results in more than 100 GB of proxy files.)
    We have those 40 h running through more than once a month, plus a whole LOT of material from the last 2 years - and our Xsan is only 7 TB in size.
    Although we could theoretically buy another fiber raid this solution is not really future proof - it just pushes the time when we have to buy the next one a couple of months forward.. on top of that I cannot afford to have expensive, fast fiber channel storage used for proxies of files that are long archived and have only very limited use (and IF we need them, stick in the archive device and done).
    Any ideas how to get rid of proxy files from archived assets?
    I don't really want to take pen and paper and delete the proxies of the files by hand from the bundle; I don't think FCSvr would like this either.
    thanks for any advice
    tobi

    So I'm not sure how your math adds up to 100 GB of proxy files.
    Are you creating VersionS and/or Edit Proxies of everything?
    I ask because using the default Clip Proxy setting gives you file sizes similar to the ones below. These numbers aren't accurate because the default Transcode setting uses Variable Bit Rate (VBR) encoding for both video and audio, but assuming you had a relatively constant 800kbps stream, here's how large your Proxies.bundle file should be:
    800kbps * 30secs = 2.4mb
    800kbps * 60secs = 4.8mb
    800kbps * 60secs * 60min = 280.8mb per hour
    280.8mb per hour * 40 = 11.2GB
    Also note, that deleting an asset from FCSvr doesn't delete the proxy files so you could have a lot of proxies left over from a few historical scans.

  • External Storage recommended for use with FCPX

    Hi everyone,
    I want to improve FCPX performance and work with my events and projects on one or two external HDDs.
    I am an amateur editor and mostly work with 720p, 120 fps, H.264 footage optimized to ProRes 422, on mostly single-camera projects.
    Can anyone recommend a portable product that will work for my purposes?  Preferably one that I don't have to spend $1000+ on (like drobo or promise).
    Is an external USB3 Drive powerful enough? Do I need thunderbolt?
    Thanks so much for your answers and help!
    Mike
    P.s. Working with a MBP 15" Retina, 2,6 GHz i7, 16GB RAM, 750GB SSD.

    120fps?  What camera did this media come from?
    FW800 minimum, Thunderbolt is best. Your budget will determine your limits. Will USB 3.0 be enough? It all depends on what format you're editing. If you transcode everything to ProRes, just about anything will work. FireWire 800 will work, unless you're doing a lot of compositing and/or very long projects.
    Get the format you can afford.  Be sure spinning disk drives (your price range) are 7200rpm!  Vital!
    Minimum would be FW800, next up would be USB 3.0, next up would be Thunderbolt, as far as speed.  Avoid SSD drives as you'll not get much storage space for the amount you sound like you have to spend.

  • Best storage set-ups for large Libraries?

    Let's say "large Library" means 500 GB and/or 500,000 Images, and above.
    What set-ups for storage, on a Mac maximized with RAM, will provide the best Aperture performance?  The best bang-for-the-buck?  Assume Thunderbolt is available.  I don't know much about hardware: RAID, hybrid drives, etc.
    What sensible considerations should one make for backup and data-redundancy?
    How can I determine if the storage media is a bottleneck in the performance of Aperture on my system?
    I run most libraries off of external USB-3 drives mounted (sometimes directly, often via powered USB-3 hubs) to my
    MacBook Pro
    Retina, 15-inch, Early 2013
    Processor  2.7 GHz Intel Core i7
    Memory  16 GB 1600 MHz DDR3
    Graphics  NVIDIA GeForce GT 650M 1024 MB
    System Drive  APPLE SSD SD512E
    This works well for small and medium-size Libraries, but for large Libraries, I'm spending costly time waiting for the system to catch up.
    Some of that, demonstrably, comes when I run a three-monitor set-up.  (But this provides some welcome advantages to me.)
    Additionally, some back-ups and database repairs now take 12 hr. or longer.
    Thanks.

    Thanks William,
    I kept my c. 2011 MBP and use it for making back-ups, which is done automatically after I leave for the day. My early-2013 MBP is so much faster (and has such a higher-resolution screen) that I don't use the older computer at all.
    William Lloyd wrote:
    Probably the fastest storage you can get is the La Cie "little big disk" which is a pair of 512 GB SSDs with a Thunderbolt 2 connection. The issue is, only the newest Mac Pro, and the newest retina MacBook Pros, have Thunderbolt 2 right now.
    OWC tech explained to me that TBolt2 allows 2x the throughput, but that the drives have to be able to provide it.  TBolt1 should provide 1,000 MB/s (do I have the units correct?), which is faster than most drives can provide.  So the bottleneck, for me, isn't likely to be the port, but rather the drives.  USB-3 can move 500 MB/s, which is still faster than -- afaict -- what my WD Passport drives can provide.
    As I currently see it, I need faster throughput, and I need to either trim my large Libraries or find a way to manage them so that the regularly used files are more speedily available.
    Regarding faster throughput, an external SSD comes to mind.
    The problem, for me, is that the large Libraries are close to 1TB (with Previews).  While I don't expect them to grow (the data acquisition phase is, for now, done), it would be short-sighted to assume they won't.  That brings up the second consideration, which is how to best use spanned drives that contain an Aperture Library.
    As I see it (with my limited understanding of hardware), what I really want is _a lot more RAM_, or, barring that, a huge SSD scratch disk. I have 200 GB free on my system drive, which is an Apple-supplied SSD, but it doesn't seem to be used as effectively as I'd like.
    WD is now selling a new portable TBolt SSD, the "My Passport Pro", available with 4 TB of storage and a throughput of 230 MB/s. My current largest Library is on WD Passport drives, whose throughput is not faster than 480 Mb/s (max) (I assume bits, not bytes, so 40 MB/s). That's a huge difference, while still only 1/4 of the speed possible with TBolt1, and 1/8 the throughput possible with TBolt2 (which my early-2013 MBP does not have, afaict).
    These are the questions I am trying to answer:
    - How can I measure my current throughput?
    - Can I determine the bottleneck in my use of large Libraries in Aperture?
    - Will new storage give me faster performance?
    - Which storage will give me the best "bang-for-my-bucks"? (The La Cie "little big disk" seems to have its bang limited by my computer, which greatly lowers its quotient value.)
    In short: how can I get 900-1000 MB/s throughput on my machine, and is there any way to set up my Library so that, even though it is close to 1 TB, I can use a smaller very fast drive for the most read/written files?
    --Kirby.

  • Multiple key cache lookup cases for the same values

    Hi,
    Just curious whether someone else on this forum has dealt with this use case: we'd like to use the Coherence cache to store objects of say class Foo with fields a and b (Foo(a,b)) using a as the key. The named cache is backed by a database and puts will insert into the corresponding Foo table and gets will either result in a cache hit or read through using a CacheStore implementation we'd write.
    Now, for the 2nd requirement, we need to look up the same objects using field b (or a combination of different fields for that matter). Currently we are thinking of a 2nd named cache that maps b onto Foo(a,b) with a possible optimization that the 2nd cache will map b onto a so the client doing the get can turn around and query the first cache using a. Puts in the first cache will add entries to the second cache to keep the 2nd cache up to date with a -> b mappings. The optimization prevents Foo being stored in two caches.
    Note that we will not store all entries for Foo in the cache as the sheer number of expected entries makes this option not feasible hence we cannot rely on a cache query (using indexes) to look the object up.
    Any comments on this approach or ideas on how to implement this differently?
    Thanks!
    Marcel.

    Hi Marcel,
    That is correct, QueryMap only operates on entries that are in-memory; there is no way to "query-through" a cachestore for example.
    Given that, I think that your proposed approach (of maintaining a separate b->a mapping) makes sense.
    thanks,
    -Rob
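    A minimal sketch of that approach, assuming Foo exposes getA() and getB() and the cache names are placeholders: keep the second cache as a pure b -> a index and maintain it on every put into the primary cache.

        // assumes com.tangosol.net.CacheFactory and com.tangosol.net.NamedCache are imported
        NamedCache fooByA = CacheFactory.getCache("foo-by-a");   // a -> Foo(a, b), backed by the CacheStore
        NamedCache aByB   = CacheFactory.getCache("foo-a-by-b"); // b -> a index only

        void put(Foo foo) {
            fooByA.put(foo.getA(), foo);
            aByB.put(foo.getB(), foo.getA());  // keep the index in sync with the primary cache
        }

        Foo getByB(Object b) {
            Object a = aByB.get(b);            // may read through its own CacheStore
            return (a == null) ? null : (Foo) fooByA.get(a);
        }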

  • Am I correct, if I presume that Photo stream IS the iCloud storage device for photos?

    Is Photo-stream the iCloud storage device for photos?

    Photo Stream is a photo sync service between your computers and devices. It's not designed as a storage service, because photos uploaded to Photo Stream are deleted 30 days after they are added. If you want to see more limitations, see > http://support.apple.com/kb/HT4858?viewlocale=en_US&locale=en_US

  • Zfs-import-cache.service fails on startup

    I have been using ZFS for a couple of days now.
    It's only for storage purposes, not for the root system.
    I installed it and enabled zfs.target at startup, but a couple of reboots ago I noticed that zfs-import-cache.service fails, sometimes resulting in the pool not getting mounted.
    ● zfs-import-cache.service - Import ZFS pools by cache file
    Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; static)
    Active: failed (Result: exit-code) since sáb 2014-07-05 23:42:21 CEST; 44min ago
    Process: 174 ExecStart=/usr/bin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=1/FAILURE)
    Main PID: 174 (code=exited, status=1/FAILURE)
    jul 05 23:42:21 7thHeaven zpool[174]: Unable to open /dev/zfs: No such file or directory.
    jul 05 23:42:21 7thHeaven zpool[174]: Verify the ZFS module stack is loaded by running '/sbin/modprobe zfs'.
    jul 05 23:42:21 7thHeaven systemd[1]: zfs-import-cache.service: main process exited, code=exited, status=1/FAILURE
    jul 05 23:42:21 7thHeaven systemd[1]: Failed to start Import ZFS pools by cache file.
    jul 05 23:42:21 7thHeaven systemd[1]: Unit zfs-import-cache.service entered failed state.
    Any idea why would this happen?
    Thanks.

    Xi0N wrote: BTW, I'm trying to set up zed so it sends me an email on every event, but I don't seem to be able to get it working. I installed and started postfix, but I still don't really know how to configure it so the mails get sent. Any tips?
    You need to create a script in /etc/zfs/zed.d/ - see the zed man page.
    Use the existing scripts in that dir for examples (e.g. scrub.finish-email.sh).
    The existing scripts use /etc/zfs/zed.d/zed.rc; in that you need to set “ZED_EMAIL” to the email address to mail to in order to make the existing mailer scripts do something. Your script could instead hard code the email address.
    cmtonkinson wrote: I've got a related problem I just posted about yesterday; I wonder if this busywait tactic would work in my scenario as well - although isn't there a way to do this with Require/After instead of an ExecStartPre wait script?
    If you look at zfs.target et al, you'll see they already use Require/After/Before to order things. Xi0N's problem is not the ordering, but that the kernel module hasn't always created the zfs device when systemd runs the .service.
    I use ZFS as my root, and the zfs hook runscript has a similar wait in it - so for me the zfs device exists by the time systemd runs the .service.
    I don't know if your problem is the slowness of the zfs kernel modules or something else. My first port of call would be to check the journal for errors.

  • Lightroom cannot open the catalog named "lightroom 4 good" located on volume "LIGHTROOM" because Lightroom cannot save changes to this location

    Lightroom cannot open the catalog named “lightroom 4 good” located on volume “LIGHTROOM” because Lightroom cannot save changes to this location. This is what I am receiving even from my back-up
    Serial Number removed by: PECourtejoie

    You need to have writing permission where a LR catalog is stored.
    Even for just opening a catalog, since this database does not have a save-to-catalog command.
    Maybe it makes sense that you do not have them for a backup storage location in order to prevent accidental overwriting?
    If you agree, first copy the lrcat file to your usual working folder where you have write permission, renaming it if it would otherwise overwrite your real working catalog.
    If you do not agree, remedy the writing permission for your user account at the backup spot.
    Cornelia

  • Assigning shared storage(LUNs) for SPARC T4-2  LDOMS

    Hi friends,
    I have a T4-2 server and need to create 2 LDOMs (Solaris 11) on it for an Oracle RAC installation. For this I have to assign shared storage (LUNs) from the ZFS storage appliance to these 2 LDOMs. Could someone tell me how to do this (step by step) with LDOMs?
    Thanks in advance
    Cheers..
    RK

    Depending on how you configure your server on the LDOM side, the procedure can differ. It also depends a little on the VM Server version.
    More details are available in the sections named "I/O Domain Overview" and "Using Virtual Disks" : http://www.oracle.com/technetwork/documentation/vm-sparc-194287.html
    Edited by: Pascal Kreyer - Oracle on May 5, 2013 9:29 AM

  • The shadow copies of volume E: were aborted because the shadow copy storage failed to grow

    Hello there, 
    I am facing an issue regarding DPM 2010 backup. 
    Every time, my tape backup fails and the error is "The shadow copies of volume E: were aborted because the shadow copy storage failed to grow".
    My protected server is an win 2008R2 File server which is getting backed up from DPM 2010 server. 
    The total space of E: is 700 GB out of which 750 MB space is free. 
    So I am quite confused about how much space should be free on the protected server drive for a successful backup.
    It would be great if someone could help me regarding this matter. 
    Thanks in advance.
    Regards, Ishuv

    Hello, 
    Though it is written that the minimum space required is 300 MB, I still have 700 MB free out of 759 GB on my protected server drive (E:).
    The error detail is ;
    Source: Volsnap
    Event ID: 35
    Error General: "The Shadow copies of volume E: were aborted because the shadow copy storage failed to grow."
    The backup server is DPM 2010 attached with IBM tape library which have LTO 5 cartridge.
    Protected server is a Win2k8R2 file server. The E: drive's total space is 759 GB, of which 700 MB is free.
    So, as I understand it from the error above, VSS is unable to grow the shadow copy storage because of the limited free space on the E: drive; there is only 700 MB free. Now my question is: how much free space should there be on the protected server drive for the backup to complete successfully on the DPM 2010 server, which stores the backups in the IBM LTO5 tape library?
    Has anyone else faced this issue? Please help!
    Regards,
    Ishuv
    Regards, Ishuv

  • Storage Location for Trading Goods

    Dear Experts,
    I need to create a different storage location for trading goods.
    Process flow: GR > Sales order > Delivery > Billing.
    Normally for trading goods we don't do a delivery, but in our system we are required to create one.
    So please give me some idea of, and a link on, how to make this happen.
    Thanks & Regards,
    Olet Malla

    The nature of a trading good (material type HAWA) is that you buy the material from an external company, store it, and sell it to customers.
    The nature of a finished good is that you produce it yourself, store it, and then sell it to customers.
    So why would one need to use a FERT and exchange the naming, if HAWA does exactly what it is designed for?
    If you need an extra storage location for such a material type, then create one with OX09.
    (Please don't create logical storage locations; that just confuses your users. Stick to the physical locations.)

  • How do I manage (i.e., limit) storage space for Messages?

    How can the amount of storage space for Messages be limited and/or managed? Messages are currently taking up nearly 600MB of my iPhone and iPad devices storage space. Deleting old message threads does not appear to reduce it. And I don't consider myself to use messages excessively.

