Problems Removing Large Amounts of Data
Hello Everyone,
First, let me state that I know the difference between removing and deleting, and I was removing.
So I have a huge Lightroom library of about 300,000 images, and I tried to remove 50% of it all at once.
Here's my process:
1. Right-click > Remove on 150,000 images.
2. Wait 48 hours.
3. Lightroom locks up and takes over 11 GB of memory. (I ran a sample process; that's what's posted below.)
4. Force quit.
5. The Lightroom library loads fine now.
6. Optimize Catalog.
7. Wait 36 hours.
8. THE 150,000 IMAGES REAPPEAR IN THE CATALOG!
9. Currently... repeating steps 1-3 and waiting for Lightroom to quit normally...
Can anyone at Adobe read this sample and tell me what Lightroom is doing? I suspect that if I force quit, then all the removing I did will revert; is that correct? It's taking 11 GB of my 12 GB of RAM. Is it stuck in a loop? Is this a normal problem or a bug?
Sorry for the long sample data.
Computer specs:
iMac, Intel Core i3, 3.06 GHz, 12 GB of memory
Sample Data
Sampling process 193 for 3 seconds with 1 millisecond of run time between samples
Sampling completed, processing symbols...
Analysis of sampling Adobe Lightroom 3 (pid 193) every 1 millisecond
Call graph:
68 Thread_1252 DispatchQueue_1: com.apple.main-thread (serial)
68 start
68 main
68 NSApplicationMain
68 -[NSApplication run]
68 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:]
68 _DPSNextEvent
68 _NSHandleCarbonMenuEvent
68 _HandleMenuSelection2
68 MenuSelectCore(MenuData*, Point, double, unsigned int, OpaqueMenuRef**, unsigned short*)
68 FinishMenuSelection(SelectionData*, MenuResult*, MenuResult*)
68 SendMenuItemSelectedEvent
68 SendMenuCommandWithContextAndModifiers
68 SendHICommandEvent(unsigned int, HICommand const*, unsigned int, unsigned int, unsigned char, void const*, OpaqueEventTargetRef*, OpaqueEventTargetRef*, OpaqueEventRef**)
68 SendEventToEventTarget
68 SendEventToEventTargetInternal(OpaqueEventRef*, OpaqueEventTargetRef*, HandlerCallRec*)
68 DispatchEventToHandlers(EventTargetRec*, OpaqueEventRef*, HandlerCallRec*)
68 NSSLMMenuEventHandler
68 -[NSCarbonMenuImpl _carbonCommandProcessEvent:handlerCallRef:]
68 -[NSMenu _internalPerformActionForItemAtIndex:]
68 -[NSCarbonMenuImpl performActionWithHighlightingForItemAtIndex:]
68 -[NSMenuItem _corePerformAction]
68 -[NSApplication sendAction:to:from:]
68 -[NSApplication terminate:]
68 -[NSApplication _shouldTerminate]
68 -[NSDocumentController(NSInternal) _shouldTerminateWithDelegate:shouldTerminateSelector:]
68 -[NSDocumentController(NSInternal) _continueTerminationHavingClosedAllDocuments:context:]
68 -[NSApplication _docController:shouldTerminate:]
68 main
68 willAutoChange
68 AgShutdownNotification_appShouldTerminate
68 lua_call
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 luaL_error
68 lua_resume
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaL_newstate
68 AgDialogs_runModalL
68 AgColorUtils_convertHSLtoRGB_L
68 -[NSApplication runModalForWindow:]
68 -[NSApplication _realDoModalLoop:peek:]
68 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:]
68 _DPSNextEvent
68 BlockUntilNextEventMatchingListInMode
68 ReceiveNextEventCommon
68 RunCurrentEventLoopInMode
68 CFRunLoopRunSpecific
68 __CFRunLoopRun
68 __CFRunLoopDoSources0
68 __NSThreadPerformPerform
68 AgTransitQueue_enqueueToQueue
67 lua_pcall
67 lua_yield
67 luaL_buffinit
67 luaF_recfillpcbase
67 luaF_recfillpcbase
66 luaL_newstate
65 AgTransitQueue_enqueueToQueue
65 AgTransitQueue_enqueueToQueue
65 AgTransitValue_makeFromLuaState
65 AgTransitValue_push
65 lua_settable
65 luaL_loadstring
65 lua_yield
65 luaL_buffinit
1 luaopen_package
1 lua_pcall
1 lua_yield
1 luaL_buffinit
1 luaF_recfillpcbase
1 luaF_recfillpcbase
1 luaL_newstate
1 luaL_optlstring
1 luaL_loadstring
1 lua_checkstack
1 lua_checkstack
1 lua_checkstack
1 lua_yield
1 luaL_buffinit
1 szone_free_definite_size
1 tiny_free_list_remove_ptr
1 __compare_and_swap64
68 Thread_1259 DispatchQueue_2: com.apple.libdispatch-manager (serial)
68 start_wqthread
68 _pthread_wqthread
68 _dispatch_worker_thread2
68 _dispatch_queue_invoke
68 _dispatch_mgr_invoke
68 kevent
68 Thread_1265: Worker Thread Dispatcher
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
67 AgTransitQueue_enqueueToQueue
67 -[NSConditionLock lockWhenCondition:beforeDate:]
67 -[NSCondition waitUntilDate:]
67 _pthread_cond_wait
67 semaphore_timedwait_signal_trap
1 luaL_checkoption
1 lua_gc
1 lua_checkstack
1 lua_checkstack
1 lua_checkstack
1 lua_yield
1 luaL_buffinit
1 __spin_lock
68 Thread_1266: ace
68 thread_start
68 _pthread_start
68 PrivateMPEntryPoint
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 MPWaitOnQueue
68 TSWaitOnConditionTimedRelative
68 TSWaitOnCondition
68 _pthread_cond_wait
68 __semwait_signal
68 Thread_1267: ace
68 thread_start
68 _pthread_start
68 PrivateMPEntryPoint
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 MPWaitOnQueue
68 TSWaitOnConditionTimedRelative
68 TSWaitOnCondition
68 _pthread_cond_wait
68 __semwait_signal
68 Thread_1268: ace
68 thread_start
68 _pthread_start
68 PrivateMPEntryPoint
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 MPWaitOnQueue
68 TSWaitOnConditionTimedRelative
68 TSWaitOnCondition
68 _pthread_cond_wait
68 __semwait_signal
68 Thread_1269: cr_scratch
68 thread_start
68 _pthread_start
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 _pthread_cond_wait
68 __semwait_signal
68 Thread_1270
68 thread_start
68 _pthread_start
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 semaphore_wait_trap
68 Thread_1271
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 semaphore_wait_trap
68 Thread_1272
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 semaphore_wait_trap
68 Thread_1273
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 semaphore_wait_trap
68 Thread_1274
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 semaphore_wait_trap
68 Thread_1275: cr_negative_cache
68 thread_start
68 _pthread_start
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 private_load_AgColorProfile
68 _pthread_cond_wait
68 __semwait_signal
68 Thread_1277: Worker Thread
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgTransitQueue_enqueueToQueue
68 -[NSConditionLock lockWhenCondition:beforeDate:]
68 -[NSCondition waitUntilDate:]
68 _pthread_cond_wait
68 semaphore_timedwait_signal_trap
68 Thread_1736: AsyncDataServer
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 lua_checkstack
68 luaL_loadstring
68 Thread_1737: Queue Processor - AgPhotoAvailability
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 AgTransitQueue_enqueueToQueue
68 -[NSConditionLock lockWhenCondition:beforeDate:]
68 -[NSCondition waitUntilDate:]
68 _pthread_cond_wait
68 semaphore_timedwait_signal_trap
68 Thread_1739: Preview Server
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 private_load_AgEventLoopUtils
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 luaL_error
68 lua_resume
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaL_newstate
68 AgSQLiteStatement_pushNewToLua
68 sqlite3_step
68 sqlite3_prepare
68 sqlite3_result_int
68 sqlite3_result_null
68 sqlite3_result_null
68 sqlite3_result_null
68 sqlite3_result_null
68 sqlite3_thread_cleanup
68 read
68 Thread_1740: Queue Processor - PreviewFileStore
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 AgTransitQueue_enqueueToQueue
68 -[NSConditionLock lockWhenCondition:beforeDate:]
68 -[NSCondition waitUntilDate:]
68 _pthread_cond_wait
68 semaphore_timedwait_signal_trap
68 Thread_1744: Queue Processor - HistogramQueue
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 AgTransitQueue_enqueueToQueue
68 -[NSConditionLock lockWhenCondition:beforeDate:]
68 -[NSCondition waitUntilDate:]
68 _pthread_cond_wait
68 semaphore_timedwait_signal_trap
68 Thread_1745: Queue Processor - RenderQueue_ReadPreview
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 AgTransitQueue_enqueueToQueue
68 -[NSConditionLock lockWhenCondition:beforeDate:]
68 -[NSCondition waitUntilDate:]
68 _pthread_cond_wait
68 semaphore_timedwait_signal_trap
68 Thread_1746: Queue Processor - RenderQueue_Render
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 AgTransitQueue_enqueueToQueue
68 -[NSConditionLock lockWhenCondition:beforeDate:]
68 -[NSCondition waitUntilDate:]
68 _pthread_cond_wait
68 semaphore_timedwait_signal_trap
68 Thread_1747: Queue Processor - NegativeQueue
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 AgTransitQueue_enqueueToQueue
68 -[NSConditionLock lockWhenCondition:beforeDate:]
68 -[NSCondition waitUntilDate:]
68 _pthread_cond_wait
68 semaphore_timedwait_signal_trap
68 Thread_1789: Worker Thread
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
66 AgTransitQueue_enqueueToQueue
65 AgTransitQueue_enqueueToQueue
65 AgTransitValue_makeFromLuaState
65 AgTransitValue_push
64 AgTransitValueInternal_beginTransitState
64 AgMutex_lock
64 pthread_mutex_lock
64 semaphore_wait_signal_trap
1 lua_settable
1 luaL_loadstring
1 lua_yield
1 luaL_buffinit
1 -[NSObject(NSThreadPerformAdditions) performSelectorOnMainThread:withObject:waitUntilDone:]
1 __spin_lock
2 luaL_checkoption
2 lua_gc
1 luaL_loadstring
1 lua_checkstack
1 lua_checkstack
68 Thread_1790: Worker Thread
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgTransitQueue_enqueueToQueue
68 -[NSConditionLock lockWhenCondition:beforeDate:]
68 -[NSCondition waitUntilDate:]
68 _pthread_cond_wait
68 semaphore_timedwait_signal_trap
68 Thread_1791: Worker Thread
68 thread_start
68 _pthread_start
68 __NSThread__main__
68 AgTransitQueue_enqueueToQueue
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgLua_callWithAutoReleasePool
68 lua_call
68 luaF_recfillpcbase
68 luaL_newstate
68 luaopen_package
68 lua_pcall
68 lua_yield
68 luaL_buffinit
68 luaF_recfillpcbase
68 luaF_recfillpcbase
68 luaL_newstate
68 AgTransitQueue_enqueueToQueue
68 -[NSConditionLock lockWhenCondition:beforeDate:]
68 -[NSCondition waitUntilDate:]
68 _pthread_cond_wait
68 semaphore_timedwait_signal_trap
68 T
Because the transaction is so unusually large, SQLite3 and/or LR are hitting a performance cliff. When you force quit, the transaction never completed, so when you restart LR, SQLite3 restores the database to the state prior to the removal (as it should).
The obvious workaround to try: Remove 10K images at a time, 15 times.
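Both points (the rollback on force quit and why batching helps) can be illustrated with a small sketch in Python using the stdlib sqlite3 module. The table name and schema are invented stand-ins for a catalog database, not Lightroom's actual schema:

```python
import sqlite3, os, tempfile

# Invented stand-in for a catalog database: one row per image.
path = os.path.join(tempfile.mkdtemp(), "catalog.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE images (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO images (id) VALUES (?)",
                [(i,) for i in range(10_000)])
con.commit()

# 1) One giant uncommitted transaction: closing the connection without
#    commit() discards it, much like a force quit mid-removal.
con.execute("DELETE FROM images WHERE id < 5000")
con.close()  # no commit -> SQLite rolls the whole transaction back
con = sqlite3.connect(path)
assert con.execute("SELECT COUNT(*) FROM images").fetchone()[0] == 10_000

# 2) Batched removal: delete a chunk, commit, repeat. Each committed
#    batch survives an interruption, so no single transaction gets huge.
def remove_in_batches(con, batch_size=1000):
    while True:
        cur = con.execute(
            "DELETE FROM images WHERE id IN "
            "(SELECT id FROM images WHERE id < 5000 LIMIT ?)",
            (batch_size,))
        con.commit()
        if cur.rowcount == 0:
            break

remove_in_batches(con)
assert con.execute("SELECT COUNT(*) FROM images").fetchone()[0] == 5000
con.close()
```

The same reasoning is why removing 10K at a time is so much cheaper than 150K at once: each commit keeps the pending transaction (and its journal) small.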
Yeah, I think you're right. The computer just can't handle that large a transaction. I guess the answer is to get a new computer. However, I'm still going to consider this a bug.
You should be able to remove as many files as you want without it shutting down your computer or needing a 12-core supercomputer. I might be wrong, but I don't think that's too much to ask for.
I just ended up creating a whole new catalog and adding only the folders I want, instead of pressing Remove, waiting 3 hours, and then pressing Remove again 15 times. That seems unbearable.
Thanks for your comments, everyone. I hope Adobe checks this out. Removing folders is a real pain, even when it's just a few. There's got to be an easier and less processor-intensive way to remove.
Similar Messages
-
Problem retrieving large amount of data!
Hi,
I'm currently working with a database application accessing a very large amount of data. One query could result in 500,000+ hits!
I would like to present this data in a JTable.
When the query is executed, I create a two-dimensional array and store the data in it. The problem is that I get an OutOfMemoryError when I reach the 150,000th row and add it to the array.
I've looked into the design pattern "Value List Handler" and it seems it could be of use. But still I need to populate an array with all the data, and then I get the error.
Is there some way I could query the database, populate part of the data into a smaller array, and use the "Value List Handler" pattern to access small portions of the complete result set?
Another problem is that the user wants the ability to sort asc/desc by clicking column headers in the JTable. Then I need access to all the data in that table to sort it correctly. Could I re-query the database with an "ORDER BY <column> ASC" and use a modification of the Value List Handler pattern?
I'm a bit confused, please help!
Kind regards, Andreas
The only chance you have: only select as many rows as you display on the screen. When the user hits "next page", retrieve the next rows.
You might be able to do this with a scrollable resultset, but with that you are left to the mercy of the JDBC driver concerning memory management.
So you need to think about a solution where you issue a new SELECT narrowing down the result set depending on the first/last row displayed.
Search this forum for "pagewise navigation"; this question gets asked about 5 times a week (mostly in conjunction with web applications, but the logic behind it should be the same).
Tailoring your TableModel might be tricky as well.
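The pagewise-navigation idea can be sketched like this (in Python with the stdlib sqlite3 module rather than JDBC/JTable; table, column, and function names are invented for illustration). Note that in real code the order_by column must be validated against a whitelist, since it is interpolated into the SQL text:

```python
import sqlite3

# Invented table standing in for the 500,000-hit query result.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hits (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO hits (id, name) VALUES (?, ?)",
                [(i, f"row{i}") for i in range(1000)])

PAGE_SIZE = 50

def fetch_page(con, page, order_by="id", descending=False):
    """Fetch only the rows for one 'screen' of the table.

    order_by is interpolated into the SQL, so it must come from a
    trusted whitelist of column names, never from user input.
    """
    direction = "DESC" if descending else "ASC"
    return con.execute(
        f"SELECT id, name FROM hits ORDER BY {order_by} {direction} "
        "LIMIT ? OFFSET ?",
        (PAGE_SIZE, page * PAGE_SIZE)).fetchall()

first = fetch_page(con, 0)                        # rows 0..49
third = fetch_page(con, 2)                        # rows 100..149
top_desc = fetch_page(con, 0, descending=True)    # rows 999..950
assert len(first) == 50 and first[0][0] == 0
assert third[0][0] == 100
assert top_desc[0][0] == 999
```

Sorting by a column header then becomes a re-query with a different ORDER BY, rather than loading and sorting all 500,000 rows in memory.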
Thomas -
Out.println() problems with large amount of data in jsp page
I have this kind of code in my jsp page:
out.clearBuffer();
out.println(myText); // size of myText is about 300 kb
The problem is that I manage to print the whole text only sometimes. Very often the receiving page gets only the first 40 kb and then the printing stops.
I have run tests where I split myText into smaller parts and out.print() them one by one:
Vector texts = splitTextToSmallerParts(myText);
for (int i = 0; i < texts.size(); i++) {
    out.print(texts.get(i));
    out.flush();
}
This produces the same kind of result: sometimes all parts are printed, but mostly only the first ones.
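For reference, the split-and-flush loop can be expressed generically (a Python sketch against any file-like object; the names are illustrative, and this only restates the approach rather than fixing a server-side buffer limit):

```python
import io

def write_chunked(out, payload: bytes, chunk_size: int = 8192) -> int:
    """Write a large payload in fixed-size chunks, flushing each one."""
    written = 0
    for start in range(0, len(payload), chunk_size):
        chunk = payload[start:start + chunk_size]
        out.write(chunk)
        out.flush()          # push each chunk downstream immediately
        written += len(chunk)
    return written

buf = io.BytesIO()
payload = b"x" * 300_000     # roughly the 300 kB text from the question
assert write_chunked(buf, payload) == 300_000
assert buf.getvalue() == payload
```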
I have tried increasing the buffer size, but not even that makes the printing reliable. I have also tried autoFlush="false" so that I flush before the buffer overflows; again the same result: sometimes it works, sometimes it doesn't.
Originally I use a setup where Visual Basic in Excel calls the JSP page. However, I don't think this matters, since the same problems occur if I use a browser.
If anyone knows something about problems with large JSP pages, I would appreciate it.
Well, there are many ways you could do this, but it depends on what you are looking for.
For instance, generating an Excel Spreadsheet could be quite easy:
import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;
public class TableTest extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        response.setContentType("application/xls");
        PrintWriter out = new PrintWriter(response.getOutputStream());
        out.println("Col1\tCol2\tCol3\tCol4");
        out.println("1\t2\t3\t4");
        out.println("3\t1\t5\t7");
        out.println("2\t9\t3\t3");
        out.flush();
        out.close();
    }
}
Just try this simple code; it works just fine... I used the same approach to generate a report of 30,000 rows and 40 cols (more or less 5 MB), so it should do the job for you.
Regards -
[SOLVED] Problem transferring large amounts of data to ntfs drive
I have about 150 GB of files that I am trying to transfer from my home partition, which is ext3 /dev/sdd2, to another partition, which is ntfs mounted with ntfs-3g /dev/sde1 (both drives are internal SATA). The files range in size from 200 MB to 4 GB. When the files start to move they transfer at a reasonable speed (10 to 60 MB/s), but will randomly (usually after about 1 to 5 GB transferred) slow down to about 500 KB/s. The computer becomes unusable at this point, and even if I cancel the transfer the computer continues to be unusable (I must use alt+sysreq REISUB). I have tried transferring with Dolphin, Nautilus, and the mv command, but they all produce the same result. I have also tried this in Dolphin as root with no change. If I leave Dolphin running long enough I also get the message "the process for the file protocol died unexpectedly".
There is nothing that I can tell is wrong with the drives, I've run disk checks on both, and checked the S.M.A.R.T. readings for both disks and everything was fine.
My hardware is an intel X58 motherboard, core i7 processor, 6GB of RAM, 5 internal sata drives and 1 internal sata optical drive.
Another thing to note is that every once in a while I will get an error message at boot saying "Disabling IRQ #19".
This is driving me crazy as I have no idea why this is happening and when I search I can't find any solutions that work.
If anybody knows how to solve this or can help me diagnose the problem please help.
Thank you.
Last edited by zetskee (2011-01-07 21:29:58)
Primoz wrote: Do you use KDE (Dolphin) 4.6 RC or 4.5?
Also, I've noticed that if I move/copy things with Dolphin they're substantially slower than if I use cp/mv. But cp/mv works fine for me...
Also, run Dolphin from a terminal to try to see what the problem is.
Hope that helps at least a bit.
Could you explain why Dolphin should be slower? I'm not attacking you, I'm just asking.
Because I thought that Dolphin is just a "little" wrapper around the cp/mv/cd/ls applications/commands.
DSS problems when publishing large amount of data fast
Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
There are several loops publishing data. One publishes approximately 50 items at a 50 ms rate, another about 40 items at a 100 ms publishing rate.
I send a command to a subprogram (125 ms) that reads and publishes the answer on a DSS URL (approx. 125 ms). So that is one item on the DSS for about 250 ms. But this data is not seen in my main GUI window that reads the DSS URL.
My questions are:
1. Is there any limit in speed (frequency) for data publishing in DSS?
2. Can DSS be unstable if loaded too much?
3. Can I lose/miss data in any situation?
4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
5. When I run my full application I have experienced the following error: Fatal Internal Error: "memory.ccp", line 638. Can this be a result of my large application and the heavy load on the DSS? (see attached picture)
Regards
Idriz Zogaj
Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
Memory Profesional
direct: +46 (0) - 734 32 00 10
http://www.zogaj.se
LuI wrote:
>
> Hi all,
>
> I am frustrated with VISA serial comm. It looks neat, and it's
> fantastic what it's supposed to do for a developer, but sometimes one
> runs into deep trouble.
> I have an app where I have to read large amounts of data streamed by
> 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
> same time.)
> I use either a Moxa multiport adapter C320 with 16 serial ports or -
> for test purposes - a Keyspan serial-2-USB adapter with 4 serial
> ports.
Does it work better if you use the serial port(s) on your motherboard?
If so, then get a better serial adapter. If not, look more closely at
VISA.
Some programs have issues on serial adapters but run fine on a regular serial port. We've had that problem recently.
Best, Mark -
Deleting large amounts of data
All,
I have several tables with about a million rows of historical data that is no longer needed, and I am considering deleting the data. I have heard that deleting the data will actually slow down performance because it will mess up the indexing; is this true? What if I recalculate statistics after deleting the data? In general, I am looking for best-practice advice on deleting large amounts of data from tables.
For everyone's reference, I am running Oracle 9.2.0.1.0 on Solaris 9. Thanks in advance for the advice.
Thanks in advance!
Ron
Another problem with DELETE is that it generates a vast amount of redo log (and archived log) information. A better way to get rid of the unneeded data would be the TRUNCATE command:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_107a.htm#2067573
The problem with TRUNCATE is that it removes all the data from the table. In order to keep some of the data, you can do the following:
1. Create <stage_table> as select * from <main_table> where <data you want to keep clause>.
2. Save the index, constraint, and trigger definitions and the grants from <main_table>.
3. Drop <main_table>.
4. Rename <stage_table> to <main_table>.
5. Recreate the indexes, constraints, and triggers.
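The keep-then-swap recipe in the steps above can be sketched end to end (in Python with the stdlib sqlite3 module standing in for Oracle; table and column names are invented, and the save/recreate of indexes, constraints, and triggers is omitted for brevity):

```python
import sqlite3

# Invented schema: hist = 1 marks historical rows we no longer need.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE main_table (id INTEGER PRIMARY KEY, hist INTEGER)")
con.executemany("INSERT INTO main_table VALUES (?, ?)",
                [(i, 1 if i < 900 else 0) for i in range(1000)])

# 1. Copy only the rows to keep into a stage table.
con.execute("CREATE TABLE stage_table AS "
            "SELECT * FROM main_table WHERE hist = 0")
# 3. Drop the original table (in a real database, its indexes,
#    constraints, and triggers would have been saved in step 2).
con.execute("DROP TABLE main_table")
# 4. Rename the stage table into place.
con.execute("ALTER TABLE stage_table RENAME TO main_table")
con.commit()

# Only the 100 non-historical rows (ids 900..999) survive.
assert con.execute("SELECT COUNT(*) FROM main_table").fetchone()[0] == 100
```

The win over DELETE is that no row-by-row undo/redo is generated for the million discarded rows; the cost is the outage window while the table is swapped.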
Another method is to use partitioning to partition the data based on a key (you've mentioned "historical," so the key could be some date column). Then you can drop the historical-data partitions when you need to.
As for your question about recalculating statistics: it will not release the storage allocated for the index. You'll need to execute ALTER INDEX <index_name> REBUILD:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_18a.htm
Mike -
Bex Report Designer - Large amount of data issue
Hi Experts,
I am trying to execute (on the Portal) a report made in BEx Report Designer, with about 30,000 pages, and the only thing I am getting is a blank page. Everything works fine at about 3,000 pages. Do I need to set something to allow processing such a large amount of data?
Regards
Vladimir
Hi Sauro,
I have not seen this behavior, but it has been a while since I tried to send an input schedule that large. I think the last time was on a BPC NW 7.0 SP06 system and it worked OK. If you are on a recent support package, then you should search for relevant notes (none come to mind for me, but searching yourself is always a good idea) and if you don't find one then you should open a support message with SAP, with very specific instructions for recreating the problem from a clean input-schedule.
Good luck,
Ethan -
Looking for ideas for transferring large amounts of data between systems
Hello,
I am looking for ideas based on best practices for transferring Large Amounts of Data in and out of a Netweaver based application.
We have a new system we are developing in Netweaver that will utilize both the Java and ABAP stack, and will require integration with other SAP and 3rd Party Systems. It is a standalone product that doesn't share any form of data store with other systems.
We need to be able to support tens of millions of records of tabular data coming into and out of our system.
Since we need to integrate with so many different systems, we are planning to use RFC as our primary interface into and out of the system. As it turns out, RFC is not good at dealing with this large an amount of data being pushed through a single call.
We have considered a number of possible ideas; however, we are not very happy with any of them. I would like to see what the community has done in the past to solve problems like this, as well as how SAP currently solves this problem in other applications like XI, BI, ERP, etc.
Freeze when writing large amount of data to iPod through USB
I used to take backups of my PowerBook to my 60G iPod video. Backups are taken with tar in terminal directly to mounted iPod volume.
Now, every time I try to write a large amount of data to the iPod (from a MacBook Pro), the whole system freezes (the mouse cursor moves, but nothing else can be done). When the USB cable is pulled out, the system recovers and acts as it should. This problem happens every time a large amount of data is written to the iPod.
The same iPod works perfectly (when backupping) in PowerBook and small amounts of data can be easily written to it (in MacBook Pro) without problems.
Does anyone else have the same problem? Any ideas why is this and how to resolve the issue?
MacBook Pro, 2.0 GHz, 100 GB 7200 RPM, 1 GB RAM, Mac OS X (10.4.5), iPod Video 60 GB connected through USB
Ex-PC user... never had a problem.
Got a MacBook Pro last week...having the same issues...and this is now with an exchanged machine!
I've read elsewhere that it's something to do with the USB timing out. And if you get a new USB port and attach it (and it's powered separately), it should work. Kind of a bummer, but, those folks who tried it say it works.
Me, I can upload to the iPod piecemeal, manually... but even then, it sometimes freezes.
The good news is that once the iPod is loaded, the problem shouldn't happen. It's the large amounts of data.
Apple should DEFINITELY fix this though. Unbelievable.
MacBook Pro 2.0 Mac OS X (10.4.6) -
Uploading of large amount of data
Hi all,
I really hope you can help me. I have to upload a quite large amount of data from flat files to an ODS (via the PSA, of course). But the process takes a very long time. I used the method of loading into the PSA and then packet by packet into the ODS. Loading approximately 1,300,000 lines from a flat file takes about 6 or more hours. It seems strange to me. Is it normal or not? Or should I use another upload method, or set up the ODS some way? Thanks
hi jj,
welcome to the SDN!
in my limited experience, 6hrs for 1.3M records is a bit too long. here are some things you could try and look into:
- load from the application server, not from the client computer (meaning, move your file to the server where BW is running, to minimize network traffic).
- check your transfer rules and any customer exits related to loading, as the smallest performance-inefficient bits of code can cause a lot of problems.
- check the size of the data packets you're transmitting, as it could also cause problems, via tcode RSCUSTA2 (I think, but I'm not 100% sure).
hope this helps you out - please remember to give out points as a way of saying thanks to those that help you out, okay? =)
ryan. -
About 5-6 days ago storeagent started to run continuously, sending and receiving large amounts of data. This eats all my bandwidth quickly, essentially rendering my internet access worthless, since I have to use satellite internet. I have tried stopping it in Activity Monitor, but it restarts again. I thought I might have a virus or something. I downloaded Trend Micro for Mac, but found its core services essentially did the same thing. I uninstalled it, but found that storeagent is still running non-stop. Ideas?
The storeagent process is a normal part of Mac OS X, not a virus. Remove Trend Micro, which is a quite poor choice for protecting yourself against malware in the first place (see the results of my Mac anti-virus testing 2014), and which isn't really necessary anyway (see my Mac Malware Guide).
As for what it might be doing, as babowa points out, it should be present when the App Store app is open, and at that time, it might be occupied with downloading updates or something similar. If you keep force-quitting it in Activity Monitor, that probably ruins whatever download it was working on, so it has to start all over again, perpetuating the cycle. In general, it is a very bad idea to force-quit processes that are part of Mac OS X without a very good reason and an understanding of what they are.
Go to System Preferences -> App Store:
You will probably want to turn off automatic download of newly available updates, as well as automatic download of apps purchased on other Macs (if you have other Macs). I do not advise turning off the master "Automatically check for updates" box, or the one for installing security updates, as disabling those will reduce the security of your system. These security updates are typically small, so they should have very little impact on your total internet usage. -
Streaming large amounts of data over a socket causes corruption?
I'm writing an app to transfer large amounts of data via a simple client/server architecture between two machines.
Problem: if I send the data too 'fast', the data arrives corrupted:
- Calls to read() return wrong data (wrong 'crc').
- Subsequent calls to read() do not return -1, but allow me to read e.g. another 60 or 80 KBytes.
- available() always returns '0'; but I'll get rid of that method anyway (as recommended in other forum entries).
The behaviour is somewhat difficult to reproduce, but it fails reliably for me when transferring the data between two separate machines and when setting the number of packets (Sender.TM) to 1000 or larger.
Workaround: reduce the number of packets sent to e.g. 1, or introduce the 'sleep' on the sender side. Another workaround: switching to java.nio.* alone did not help, but when I got rid of the Streams and used solely ByteBuffers, the problem disappeared. Unfortunately, the Streams are required by other parts of my application.
I'm running the code on two dual-CPU machines connected via
Below is the code for the Sender and the Listener. Please excuse the style, as this is only to demonstrate the problem.
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.channels.Channels;
import java.nio.channels.SocketChannel;
import java.util.Arrays;

public class SenderBugStreams {

    public static void main(String[] args) throws IOException {
        InetSocketAddress targetAdr = new InetSocketAddress(args[0], ListenerBugStreams.DEFAULT_PORT);
        System.out.println("connecting to: " + targetAdr);
        SocketChannel socket = SocketChannel.open(targetAdr);
        sendData(socket);
        socket.close();
        System.out.println("Finished.");
    }

    static final int TM = 10000;
    static final int TM_SIZE = 1000;
    static final int CRC = 2;
    static int k = 5;

    private static void sendData(SocketChannel socket) throws IOException {
        OutputStream out = Channels.newOutputStream(socket);
        byte[] ba = new byte[TM_SIZE];
        Arrays.fill(ba, (byte) (k++ % 127));
        System.out.println("Sending..." + k);
        for (int i = 0; i < TM; i++) {
            out.write(ba);
            // Workaround: slowing down the sender hides the problem.
            // try {
            //     Thread.sleep(10);
            // } catch (InterruptedException e) {
            //     e.printStackTrace();
            //     throw new RuntimeException(e);
            // }
            out.write(CRC);
        }
        out.flush();
        out.close();
    }
}
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.channels.Channels;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class ListenerBugStreams {
    static int DEFAULT_PORT = 44521;

    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.socket().bind(new InetSocketAddress(DEFAULT_PORT));
        System.out.print("Waiting...");
        SocketChannel clientSocket = serverChannel.accept();
        System.out.println(" starting, IP=" + clientSocket.socket().getInetAddress() +
                ", Port=" + clientSocket.socket().getLocalPort());

        //read data from socket
        readData(clientSocket);
        clientSocket.close();
        serverChannel.close();
        System.out.println("Closed.");
    }

    private static void readData(SocketChannel clientSocket) throws IOException {
        InputStream in = Channels.newInputStream(clientSocket);
        //read and ingest objects
        byte[] ba = null;
        for (int i = 0; i < SenderBugStreams.TM; i++) {
            ba = new byte[SenderBugStreams.TM_SIZE];
            in.read(ba);
            System.out.print("*");

            //verify checksum
            int crcIn = in.read();
            if (SenderBugStreams.CRC != crcIn) {
                System.out.println("ERROR: Invalid checksum: " + SenderBugStreams.CRC + "/" + crcIn);
                System.out.println(ba[0]);
            }
        }
        int x = in.read();
        int remaining = 0;
        while (x != -1) {
            remaining++;
            x = in.read();
        }
        System.out.println("Remaining:" + in.available() + "/" + remaining);
        System.out.println(" " + SenderBugStreams.TM + " objects ingested.");
        in.close();
    }
}
Here is your trouble:
    in.read(ba);
read(byte[]) does not read N bytes, it reads up to N bytes. If one byte has arrived, then it reads and returns that one byte. You always need to check the return value of read(byte[]) to see how much you got (and also check for EOF). TCP chops up the written data into whatever packets it feels like, and that makes read(byte[]) pretty random.
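To make that partial-read behavior concrete, here is a small self-contained sketch (not from the thread; the class name and byte counts are illustrative). It wraps a stream so that it delivers at most one byte per call, which a real socket is equally entitled to do:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PartialReadDemo {
    public static void main(String[] args) throws IOException {
        // A stream that, like a socket, may hand back fewer bytes than
        // requested: this one returns at most 1 byte per read(byte[],...) call.
        InputStream in = new ByteArrayInputStream(new byte[100]) {
            @Override
            public int read(byte[] b, int off, int len) {
                return super.read(b, off, Math.min(len, 1));
            }
        };
        byte[] ba = new byte[100];
        int got = in.read(ba);   // asks for up to 100 bytes...
        System.out.println(got); // ...but only 1 arrives this time
    }
}
```

Ignoring that return value, as the listener above does, is exactly what produces the "random" corruption.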
You can use DataInputStream which has a readFully() method; it loops calling read() until it gets the full buffer's worth. Or you can write a little static utility readFully() like so:
// Returns false if it hits EOF immediately. Otherwise reads the full buffer's
// worth. If it encounters EOF mid-buffer, throws an EOFException.
public static boolean readFully(InputStream in, byte[] buf)
        throws IOException {
    return readFully(in, buf, 0, buf.length);
}

public static boolean readFully(InputStream in, byte[] buf, int pos, int len)
        throws IOException {
    int got_total = 0;
    while (got_total < len) {
        int got = in.read(buf, pos + got_total, len - got_total);
        if (got == -1) {
            if (got_total == 0)
                return false;
            throw new EOFException("readFully: end of file; expected " +
                    len + " bytes, got only " + got_total);
        }
        got_total += got;
    }
    return true;
} -
Hello fellow Java fans
First, let me point out that I'm a big Java and Linux fan, but somehow I ended up working with .NET and Microsoft.
Right now my software development team is working on a web tool for a very important microchip manufacturer. This tool handles large amounts of data; some of our online reports generate more than 100,000 rows, which need to be displayed in a web client such as Internet Explorer.
We use Infragistics, a set of controls for .NET. Infragistics lets me load data fetched from a database into a control they call UltraWebGrid.
Our problem comes up when we load large amounts of data into the UltraWebGrid; sometimes we have to load 100,000+ rows. During this loading, our IIS server's memory gets exhausted, and it can take up to 5 minutes for the server to finish processing and display the 100,000+ row report. We have already verified that the database server (SQL Server) is not the problem; our problem is the IIS web server.
Our team is now considering migrating this web tool to Java and JSP. Can you all help me with some links, information, or past experiences you have had with loading and displaying large amounts of data like the ones we handle, on JSP? Help will be greatly appreciated.
Who in the world actually looks at a 100,000 row report?
Anyway, if I were you and I had to do it because some clueless management person decided it was a good idea... I would write a program that once a day, week, year, or whatever your time period, produced the report (maybe as a PDF, but you could do it in HTML if you really must have it that way) and have it as a static file that you link to from your app.
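A rough sketch of that batch idea (the class and file names here are made up for illustration, not from the thread): a standalone job renders the big report once to a static file, which the web app merely links to.

```java
import java.io.IOException;
import java.io.PrintWriter;

// Hypothetical nightly batch job: render the 100,000-row report once, to a
// static HTML file the web app links to, instead of building it per request.
public class NightlyReport {
    public static void main(String[] args) throws IOException {
        int rows = 100_000;
        try (PrintWriter out = new PrintWriter("report.html")) {
            out.println("<html><body><table>");
            for (int row = 0; row < rows; row++) {
                // In the real job these cells would come from the database.
                out.println("<tr><td>" + row + "</td><td>value-" + row + "</td></tr>");
            }
            out.println("</table></body></html>");
        }
        System.out.println("wrote report.html with " + rows + " rows");
    }
}
```

Scheduling it with cron (or the Windows Task Scheduler) keeps report generation entirely off the web server's request path.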
Then the user will just have to wait while it downloads, but the web server or web application server will not be bogged down trying to produce that monstrosity. -
My phone is using large amounts of data; when I go to System Services, it's my mapping services that's causing it. What are mapping services, and how do I switch them off? I really need help.
I have the same problem. I switched off Location Services, maps in data, and whatever else maps could be involved in, and then just last night it chewed through 100 MB... I'm also on Vodacom, so I'm seeing a pattern here somehow. Siri was switched on, however, so I switched it off now and will see what happens. But I'm going to go into both Apple and Vodacom this afternoon, because this must be sorted out. It's a serious issue we have on our hands, and some uproar needs to be made against those responsible!
-
What java collection for large amount of data and user customizable record
I'm trying to write an application which operates on large amounts of data. I want the user to be able to customize the data structure (record) from different types of variables (float, int, bool, string, enums). These records should be stored in some kind of array. Size of a record: 1-200 variables; size of the array of those records: about 100,000 items (one record every second through the whole day). I want these data stored in some embedded database (SQLite, HSQLDB), accessed using simple JDBC. Could you give me some advice on how to design those data structures? Sincerely yours :)
OK, maybe I should give an example. This is some C++ code.
I made an interface:
class ParamI {
public:
    virtual string toString() = 0;
    virtual void addValue( ParamI * ) = 0;
    virtual void setValue( ParamI * ) = 0;
    virtual BYTE getType() = 0;
};
Then I made a template class derived from the interface ParamI:
template <class T>
class CParam : public ParamI {
public:
    void setValue( T val );
    T getValue();
    string toString();
    void setValue( ParamI *src ) {
        if ( itemType == src->getType() ) {
            CParam<T> *ptr = (CParam<T>*)src;
            value = ptr->value;
        }
    }
private:
    BYTE itemType;
    T value;
};
A sample constructor of the <int> template:
template<> CParam<int>::CParam() {
    itemType = ParamType::INTEGER;
}
This solution makes it possible for me to write a collection of ParamI pointers:
std::vector<ParamI*> myCollection;
CParam<int> *pi = new CParam<int>();
pi->setValue(10);
myCollection.push_back((ParamI*)pi);
Is this a correct solution? My main problem is getting data back from the collection: I have to check its data type using the getType() method of the ParamI interface.
Please could you give me some advice, some idea of how to do this right in Java?
If you have the requirement that you have to be able to configure on the fly, then what I've done in the past is just put everything into data pairs in a list: something along the lines of (<Vector>, <String>), where the Vector stores your data and the String contains a data type. I would then make a checker to validate the input according to the SQL data types that I want to support on the project. It's not a big deal with the amount of data you are talking about.
The problem you're going to have is when you try to allow dynamic definition, on the fly, of data being input to a table that has already been defined. Your DB will not support that, unless you just store that data pair--which I do not suggest.
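A minimal Java sketch of that data-pair idea (the class name, supported-type list, and methods here are illustrative assumptions, not from the thread): each field carries a value plus a type tag, validated against the SQL types the project supports.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical dynamic record: each field is a (value, SQL-type-tag) pair,
// checked on insert against the types the project chooses to support.
public class DynamicRecord {
    static final List<String> SUPPORTED_TYPES =
            List.of("INTEGER", "FLOAT", "BOOLEAN", "VARCHAR");

    private final List<Object> values = new ArrayList<>();
    private final List<String> types = new ArrayList<>();

    public void add(Object value, String sqlType) {
        if (!SUPPORTED_TYPES.contains(sqlType)) {
            throw new IllegalArgumentException("Unsupported type: " + sqlType);
        }
        values.add(value);
        types.add(sqlType);
    }

    public Object value(int i) { return values.get(i); }
    public String type(int i)  { return types.get(i); }

    public static void main(String[] args) {
        DynamicRecord rec = new DynamicRecord();
        rec.add(10, "INTEGER");
        rec.add("hello", "VARCHAR");
        System.out.println(rec.type(0) + "=" + rec.value(0));
    }
}
```

The type tag plays the same role as getType() in the C++ version, but keeping it as a string makes it straightforward to map onto JDBC/SQL column types.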