ARDAgent behaving strangely, terminating on logout, memory errors

Hi,
I'm having some problems with ARDAgent on our Apple Xserve (Intel quad-core).
I'm currently using Apple Remote Desktop 3.2.2, and the server's OS version is 10.5.7. The problems present as follows:
When I log on to the server and then log the user out again, ARDAgent terminates. I have to start it again from the command line. If I do not restart it manually, it will not respond to Apple Remote Desktop and the server appears offline.
Sometimes ARDAgent will just freeze up. If I close the remote desktop window and open another, I often find that all of my mouse clicks are doubled. When I log out and back on after that, things mostly work normally.
Last weekend, the server locked up completely. The only unusual entries in the logs, as far as I can tell, are these:
7/25/09 12:00:04 AM com.apple.RemoteDesktop.agent[3516] ARDAgent(3516,0xb0e2c000) malloc: *** error for object 0x38363730: Non-aligned pointer being freed (2)
7/25/09 12:00:04 AM com.apple.RemoteDesktop.agent[3516] ARDAgent(3516,0xb0e2c000) malloc: *** error for object 0x3232: Non-aligned pointer being freed
7/25/09 12:00:04 AM com.apple.RemoteDesktop.agent[3516] ARDAgent(3516,0xb0e2c000) malloc: *** error for object 0x534e5f00: Non-aligned pointer being freed (2)
7/25/09 12:00:04 AM com.apple.RemoteDesktop.agent[3516] ARDAgent(3516,0xb0e2c000) malloc: *** error for object 0x756c6156: Non-aligned pointer being freed
7/25/09 12:00:04 AM com.apple.RemoteDesktop.agent[3516] ARDAgent(3516,0xb0e2c000) malloc: *** error for object 0xc040b80f: Non-aligned pointer being freed
7/25/09 12:00:04 AM com.apple.RemoteDesktop.agent[3516] ARDAgent(3516,0xb0e2c000) malloc: *** error for object 0x6e694265: Non-aligned pointer being freed
There have been many hundreds of entries like this. I don't recall noticing them before, and I don't know whether these issues are related in some way.
All help much appreciated!

You might try removing the ARD client using the instructions here and then reinstalling.
Hope that helps.

Similar Messages

  • JavaScript Out of Memory Error on Portal timeout.

    Hello All,
I am using JSF and inline navigation in all our portlets, and when a user leaves the browser idle until the portal times out we have two problems. 1: The login portlet shows up inside that specific portlet. 2: We get a JavaScript alert saying "out of memory at line 40", and the portlet shows the error message "Gateway was not able to access requested content. If the error persists, contact your portal Administrator."
We are using Plumtree 5.0.4, Java version.
Any help is highly appreciated.
    Thanks
    A.J.

    Both are valid behaviors unfortunately.
1) The login portlet shows up in that specific portlet because inline navigation lets you create and load pages without affecting the overall portal.
This happens when you use iframes (which behave in a similar fashion).
- Your only real workaround is to write a JavaScript function to "listen" for the portal login page getting loaded and then throw the session into the parent browser (which is the portal). At least this is the only solution I ever came up with when using iframes.
2) I don't know about the out of memory error, but getting the "gateway was not able to access requested content" message is valid, because the session died.
- JavaScript errors require JavaScript solutions. Sorry I couldn't be more helpful than that.
Maybe someone else will have better suggestions.
The other suggestion is to use your app server to listen for the logout event and redirect appropriately somewhere else, or have it do whatever you want in situations like this.

  • PMON: terminating instance due to error 476

I hit an error while the database was open, and then the database went down.
I looked at the alert log and the corresponding trace file; there is some information in both files.
The content of the alert log is:
    PMON: terminating instance due to error 476
    Instance terminated by PMON, pid = 2999
The content of the trace file is:
    /export/home/oracle/admin/orcl/bdump/orcl_pmon_2999.trc
    Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
    With the Partitioning option
    JServer Release 8.1.6.0.0 - Production
    ORACLE_HOME = /export/home/oracle
    System name: SunOS
    Node name: KFDB2
    Release: 5.7
    Version: Generic_106541-15
    Machine: sun4u
    Instance name: orcl
    Redo thread mounted by this instance: 1
    Oracle process number: 2
    Unix process pid: 2999, image: oracle@KFDB2 (PMON)
    *** 2003-03-10 11:14:12.217
    *** SESSION ID:(1.1) 2003-03-10 11:14:12.025
    error 476 detected in background process
How can I resolve the problem?
Thanks

This particular problem is because of continuous writes from memory (SGA) to disk.
If you are using MTS then disable it and see the results.
I faced the same problem on a Compaq server with SCO Unix where RAID was enabled.
Once we disabled RAID, the above error disappeared.
So, if you have RAID configured then disable it and see the results.
Also disable the parallel and partitioning options.
    Regards
    Nikhil Wani
    Vadodara

  • ERROR [B3108]: Unrecoverable out of memory error during a cluster operation

    We are using Sun Java(tm) System Message Queue Version: 3.5 SP1 (Build 48-G). We are using two JMS servers as a cluster.
But we are frequently getting out of memory errors during cluster operations.
Messages also get queued up in the topics. Even though the listeners are able to reconnect to the server after the broker restarts, we usually have to restart the consumer instances to get things working again.
    Here is detailed log :
    Jan 5 13:45:40 polar1-18.eastern.com imqbrokerd_cns-jms-18[8980]: [ID 478930 daemon.error] ERROR [B3108]: Unrecoverable out of memory error during a cluster operation. Shutting down the broker.
    Jan 5 13:45:57 polar1-18.eastern18.chntva1-dc1.cscehub.com imqbrokerd: [ID 702911 daemon.notice] Message Queue broker terminated abnormally -- restarting.
    Expecting your attention on this.
    Thanks

    Hi,
If you do not use any special command-line options, how do you configure your servers/brokers to a 1 GB or 2 GB JVM heap?
Regarding your question on why the consumers appear to be connecting to just one of the brokers:
How are the connection factories that the consumers use configured?
Is the connection factory configured using the imqAddressList and imqAddressListBehavior attributes? Documentation for this is at:
http://docs.sun.com/source/819-2571/ref_adminobj_props.html#wp62463
imqAddressList should contain a list of the brokers in the cluster (i.e. 2 for you), e.g.
mq://server1:7676/jms,mq://server2:7676/jms
imqAddressListBehavior defines how the 2 brokers in the above list are picked. The default is the order of the list, so mq://server1:7676/jms will always be picked by default. If you want random behavior (which will hopefully even out the load), set imqAddressListBehavior to RANDOM.
    regards,
    -i
    http://www.sun.com/software/products/message_queue/index.xml
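For reference, here is a minimal client-side sketch of the configuration described above. It assumes the com.sun.messaging client classes that ship with Message Queue; the class name and broker hostnames are placeholders, and in most deployments these attributes would be set on the administered object rather than in code.

import javax.jms.Connection;
import javax.jms.JMSException;

import com.sun.messaging.ConnectionConfiguration;
import com.sun.messaging.ConnectionFactory;

public class BrokerListExample {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ConnectionFactory();

        // Both brokers of the cluster (placeholder hostnames).
        cf.setProperty(ConnectionConfiguration.imqAddressList,
                "mq://server1:7676/jms,mq://server2:7676/jms");

        // Pick a broker at random so consumers spread across the cluster
        // instead of all landing on the first entry in the list.
        cf.setProperty(ConnectionConfiguration.imqAddressListBehavior, "RANDOM");

        // Keep retrying the address list if the chosen broker goes down.
        cf.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");

        Connection conn = cf.createConnection();
        System.out.println("Connected to broker: " + conn);
        conn.close();
    }
}

If the factories are looked up from JNDI instead, the same attributes would normally be set when the administered object is created (for example with imqobjmgr) rather than in application code.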

  • Memory errors in alert log

Working on the following environment:
    Platform -> Windows Server 2003 Version V5.2 Service Pack 2 32-bit
    Oracle Database -> 10.2.0.1
My database was shutting down automatically about 5 minutes after startup. When I investigated the alert log I found the following messages in it:
    Memory Notification: Library Cache Object loaded into SGA
    Heap size 2210K exceeds notification threshold (2048K)
    KGL object name :XDB.XDbD/PLZ01TcHgNAgAIIegtw==
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_ckpt_7024.trc:
    ORA-04030: out of process memory when trying to allocate 8716 bytes (pga heap,Get krha asynch mem)
    CKPT: terminating instance due to error 4030
    Mon Mar 14 11:05:30 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_q001_5816.trc:
    ORA-04030: out of process memory when trying to allocate bytes (,)
I then followed MetaLink note 330239.1 and resolved the shutdown issue, but I am now getting some new error messages in the alert log. Please see the error messages below:
    Thread 1 cannot allocate new log, sequence 77933
    Private strand flush not complete
    Current log# 2 seq# 77932 mem# 0: D:\ORACLE10G\PRODUCT\10.2.0\ORADATA\NDB\REDO0_02.LOG
    Thread 1 advanced to log sequence 77933
    Current log# 1 seq# 77933 mem# 0: D:\ORACLE10G\PRODUCT\10.2.0\ORADATA\NDB\REDO0_01.LOG
    Mon Mar 14 12:34:08 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_smon_7300.trc:
    ORA-00604: error occurred at recursive SQL level 2
    ORA-04030: out of process memory when trying to allocate 404 bytes (Typecheck,seg:kggfaAllocSeg)
    Mon Mar 14 12:49:00 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_j000_7060.trc:
    ORA-12012: error on auto execute of job 27
    ORA-04030: out of process memory when trying to allocate 16428 bytes (pga heap,kgh stack)
    Mon Mar 14 12:49:34 2011
    Process startup failed, error stack:
    Mon Mar 14 12:49:35 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_psp0_6908.trc:
    ORA-27300: OS system dependent operation:spcdr:9261:4200 failed with status: 997
    ORA-27301: OS failure message: Overlapped I/O operation is in progress.
    ORA-27302: failure occurred at: skgpspawn
    Mon Mar 14 12:49:35 2011
    Process J001 died, see its trace file
    Mon Mar 14 12:49:35 2011
    kkjcre1p: unable to spawn jobq slave process
    Mon Mar 14 12:49:36 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_cjq0_6280.trc:
    Mon Mar 14 12:53:51 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_j000_7060.trc:
    ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [unable_to_trans_pc] [PC:0x603F1A55] [ADDR:0xBB] [UNABLE_TO_READ] []
    Mon Mar 14 12:53:53 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_j000_7060.trc:
    ORA-04030: out of process memory when trying to allocate 753120 bytes (pga heap,kco buffer)
    ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [unable_to_trans_pc] [PC:0x603F1A55] [ADDR:0xBB] [UNABLE_TO_READ] []
    Mon Mar 14 12:54:53 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_q000_5424.trc:
    ORA-04030: out of process memory when trying to allocate 123404 bytes (QERHJ hash-joi,kllcqas:kllsltba)
    Mon Mar 14 13:21:24 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_mmon_7444.trc:
    ORA-00600: internal error code, arguments: [kspcsetsp3], [], [], [], [], [], [], []
    Mon Mar 14 13:21:27 2011
    Errors in file d:\oracle10g\product\10.2.0\admin\ndb\bdump\ndb_mmon_7444.trc:
    ORA-00600: internal error code, arguments: [kmgs_parameter_update_timeout_1], [600], [], [], [], [], [], []
    ORA-00600: internal error code, arguments: [kspcsetsp3], [], [], [], [], [], [], []
    Mon Mar 14 13:22:26 2011
    Restarting dead background process MMON
    MMON started with pid=11, OS id=7304
    Mon Mar 14 13:44:51 2011
    Thread 1 advanced to log sequence 77934
    Current log# 3 seq# 77934 mem# 0: D:\ORACLE10G\PRODUCT\10.2.0\ORADATA\NDB\REDO0_03.LOG
    Mon Mar 14 13:51:20 2011
    Thread 1 advanced to log sequence 77935
    Current log# 2 seq# 77935 mem# 0: D:\ORACLE10G\PRODUCT\10.2.0\ORADATA\NDB\REDO0_02.LOG
Kindly help me out with this, as this is a PRODUCTION database.
    Regards,

    Please see the parameter values extracted from the alert log:
    Adjusting the default value of parameter parallel_max_servers
    from 160 to 135 due to the value of parameter processes (150)
    Mon Mar 14 12:18:53 2011
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =18
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
    processes = 150
    sga_max_size = 1577058304
    __shared_pool_size = 125829120
    __large_pool_size = 8388608
    __java_pool_size = 8388608
    __streams_pool_size = 0
    sga_target = 1258291200
    control_files = D:\ORACLE10G\PRODUCT\10.2.0\ORADATA\NDB\CONTROL01.CTL, D:\ORACLE10G\PRODUCT\10.2.0\ORADATA\NDB\CONTROL02.CTL, D:\ORACLE10G\PRODUCT\10.2.0\ORADATA\NDB\CONTROL03.CTL
    db_block_size = 8192
    __db_cache_size = 1107296256
    compatible = 10.2.0.1.0
    db_files = 600
    db_file_multiblock_read_count= 16
    db_recovery_file_dest = d:\oracle10g\product\10.2.0/flash_recovery_area
    db_recovery_file_dest_size= 2147483648
    undo_management = AUTO
    undo_tablespace = UNDOTBS2
    kgllarge_heap_warning_threshold= 8388608
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=ndbXDB)
    job_queue_processes = 10
    audit_file_dest = D:\ORACLE10G\PRODUCT\10.2.0\ADMIN\NDB\ADUMP
    background_dump_dest = D:\ORACLE10G\PRODUCT\10.2.0\ADMIN\NDB\BDUMP
    user_dump_dest = D:\ORACLE10G\PRODUCT\10.2.0\ADMIN\NDB\UDUMP
    core_dump_dest = D:\ORACLE10G\PRODUCT\10.2.0\ADMIN\NDB\CDUMP
    db_name = ndb
    open_cursors = 300
    pga_aggregate_target = 838860800
    Regards,
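For context, a back-of-the-envelope reading of the numbers in this parameter dump (not a definitive diagnosis; the class name below is hypothetical): on 32-bit Windows an Oracle process normally has only about 2 GB of user address space, while sga_max_size (~1.5 GB) plus pga_aggregate_target (~800 MB) already add up to more than that, which is consistent with the ORA-04030 "out of process memory" errors above.

/** Rough check of the memory targets taken from the alert log above. */
public class AddressSpaceCheck {
    public static void main(String[] args) {
        long sgaMaxSize         = 1577058304L;              // sga_max_size
        long pgaAggregateTarget = 838860800L;               // pga_aggregate_target
        long win32UserSpace     = 2L * 1024 * 1024 * 1024;  // default 2 GB per process

        long wanted = sgaMaxSize + pgaAggregateTarget;
        System.out.printf("SGA max + PGA target : %,d bytes (%.2f GB)%n", wanted, wanted / 1e9);
        System.out.printf("32-bit user space    : %,d bytes (%.2f GB)%n",
                win32UserSpace, win32UserSpace / 1e9);
        System.out.println(wanted > win32UserSpace
                ? "Targets exceed the process address space; ORA-04030 is plausible under load."
                : "Targets fit within the process address space.");
    }
}

The PGA target is not a hard cap and the whole SGA is not necessarily touched, so this is only a rough indicator; still, with settings like these the usual direction on 32-bit Windows is to lower sga_target/pga_aggregate_target or move to a 64-bit platform.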

  • Terminating instance due to error -  CKPT process terminated with error

Hi Folks!
I have a problem with my Oracle 10g; it runs on Debian 3.1. In my alert.log I see an error message that the database is terminating.
    Thu Feb 15 13:01:21 2007
    Errors in file /u01/app/oracle/admin/tlp/bdump/tlp_pmon_9689.trc:
    ORA-00469: CKPT process terminated with error
    Thu Feb 15 13:01:21 2007
    PMON: terminating instance due to error 469
    Instance terminated by PMON, pid = 9689
    A look into the trace file:
    /u01/app/oracle/admin/tlp/bdump/tlp_pmon_9689.trc
    Oracle Database 10g Release 10.2.0.1.0 - Production
    ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
    System name: Linux
    Node name: hyperion
    Release: 2.6.18.1-vs2.1.1-rc48-vserver
    Version: #1 SMP PREEMPT Wed Nov 15 16:53:41 CET 2006
    Machine: i686
    Instance name: tlp
    Redo thread mounted by this instance: 1
    Oracle process number: 2
    Unix process pid: 9689, image: oracle@hyperion (PMON)
    *** 2007-02-15 13:01:21.813
    *** SERVICE NAME:(SYS$BACKGROUND) 2007-02-15 13:01:21.813
    *** SESSION ID:(170.1) 2007-02-15 13:01:21.813
    Background process CKPT found dead
    Oracle pid = 7
    OS pid (from detached process) = 9700
    OS pid (from process state) = 9700
    dtp = 0x2000c9ac, proc = 0x672b14c4
    Dump of memory from 0x2000C9AC to 0x2000C9D8
    2000C9A0 00000076 [v...]
    2000C9B0 672B14C4 00000000 00000000 54504B43 [..+g........CKPT]
    2000C9C0 00000200 000025E4 0A8BA210 00000001 [.....%..........]
    2000C9D0 70F2A67F 00040081 [...p....]
    Dump of memory from 0x672B14C4 to 0x672B1A78
    672B14C0 00000102 00000000 00000000 [............]
    672B14D0 00000000 00000000 65621FF8 67825F00 [..........be._.g]
    672B14E0 673C0034 678250A4 00000000 67825108 [4.<g.P.g.....Q.g]
    672B14F0 67825108 67825EF4 01000601 673AB4A0 [.Q.g.^.g......:g]
    672B1500 673C0034 00000007 00000000 00000000 [4.<g............]
    672B1510 00000000 674CB40C 674CC04C 00000000 [......LgL.Lg....]
    672B1520 00000000 00000000 00000000 00000000 [................]
    Repeat 3 times
    672B1A70 00000003 00000005 [........]
    error 469 detected in background process
    ORA-00469: CKPT process terminated with error
What can I do? Do you have any ideas on how to fix the problem?
Problem 2:
After the error message I started the server again and saw this in my alert.log:
    Starting background process QMNC
    QMNC started with pid=17, OS id=17164
    Thu Feb 15 14:33:45 2007
    Errors in file /u01/app/oracle/admin/tlp/udump/tlp_ora_17122.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-12663: Services required by client not available on the server
    ORA-36961: Oracle OLAP is not available.
    ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
    ORA-06512: at line 15
    Thu Feb 15 14:33:45 2007
    db_recovery_file_dest_size of 5120 MB is 0.00% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Thu Feb 15 14:33:46 2007
    Completed: ALTER DATABASE OPEN
    hyperion:/u01/app/oracle/logs# less /u01/app/oracle/admin/tlp/udump/tlp_ora_17122.trc
    Node name: hyperion
    Release: 2.6.18.1-vs2.1.1-rc48-vserver
    Version: #1 SMP PREEMPT Wed Nov 15 16:53:41 CET 2006
    Machine: i686
    Instance name: tlp
    Redo thread mounted by this instance: 1
    Oracle process number: 15
    Unix process pid: 17122, image: oracle@hyperion (TNS V1-V3)
    *** SERVICE NAME:() 2007-02-15 14:33:39.169
    *** SESSION ID:(159.7) 2007-02-15 14:33:39.169
    Thread 1 checkpoint: logseq 204, block 3873, scn 6263056
    cache-low rba: logseq 204, block 63015
    on-disk rba: logseq 204, block 66596, scn 6301195
    start recovery at logseq 204, block 63015, scn 0
    ----- Redo read statistics for thread 1 -----
    Read rate (ASYNC): 1790Kb in 0.18s => 9.71 Mb/sec
    Total physical reads: 4096Kb
    Longest record: 20Kb, moves: 0/2889 (0%)
    Change moves: 0/32 (0%), moved: 0Mb
    Longest LWN: 299Kb, moves: 0/503 (0%), moved: 0Mb
    Last redo scn: 0x0000.0060260a (6301194)
    ----- Recovery Hash Table Statistics ---------
    Hash table buckets = 32768
    Longest hash chain = 2
    Average hash chain = 477/472 = 1.0
    Max compares per lookup = 1
    Avg compares per lookup = 8228/8877 = 0.9
    *** 2007-02-15 14:33:39.382
    KCRA: start recovery claims for 477 data blocks
    *** 2007-02-15 14:33:39.398
    KCRA: blocks processed = 477/477, claimed = 477, eliminated = 0
    *** 2007-02-15 14:33:39.403
    Recovery of Online Redo Log: Thread 1 Group 2 Seq 204 Reading mem 0
    ----- Recovery Hash Table Statistics ---------
    Hash table buckets = 32768
    Longest hash chain = 2
    Average hash chain = 477/472 = 1.0
    Max compares per lookup = 2
    Avg compares per lookup = 7619/8700 = 0.9
    Error in executing triggers on database startup
    *** 2007-02-15 14:33:45.811
    ksedmp: internal or fatal error
    ORA-00604: error occurred at recursive SQL level 1
    ORA-12663: Services required by client not available on the server
    ORA-36961: Oracle OLAP is not available.
    ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
    ORA-06512: at line 15
    Any ideas ?
    Thanks
    Blue

Your basic problem seems to be that you have a trigger which wants to use OLAP functionality, but OLAP is not installed or not available for whatever reason:
    Error in executing triggers on database startup <================
    *** 2007-02-15 14:33:45.811
    ksedmp: internal or fatal error
    ORA-00604: error occurred at recursive SQL level 1
    ORA-12663: Services required by client not available on the server
    ORA-36961: Oracle OLAP is not available.
    ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
    ORA-06512: at line 15
    So check your database startup triggers.
    Werner

  • Out of Memory Error in iplanet 6.1

While starting iPlanet 6.1 SP2 on HP-UX 11.00, an out of memory error occurs. The JVM heap size is set to a minimum of 128 MB and a maximum of 512 MB. I tried changing both values to 512 MB, but the error still occurs.
    Please revert with any solution.

Please find below the values of the JVM heap size and the HP-UX process and address space limits.
The JVM heap size has been changed to:
    Min 128MB
    Max 2GB
    max_thread_proc 2048
    maxdsiz 1073741824
    maxssiz 401604608
    maxtsiz 1073741824
After making the above changes, out of memory errors still occur. Please find the log file below.
    [21/Feb/2005:09:37:31] failure ( 8528): for host 163.38.174.17 trying to POST /wect/servlets/com.citicorp.treasury.westerneurope.maintenance.PageLinksServlet, service-j2ee reports: StandardWrapperValve[PageLinksServlet]: WEB2792: Servlet.service() for servlet PageLinksServlet threw exception
    javax.servlet.ServletException: WEB2664: Servlet execution threw an exception
    at org.apache.catalina.core.StandardWrapperValve.invokeServletService(StandardWrapperValve.java:793)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:322)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:509)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:212)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:509)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:209)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:509)
    at com.iplanet.ias.web.connector.nsapi.NSAPIProcessor.process(NSAPIProcessor.java:161)
    at com.iplanet.ias.web.WebContainer.service(WebContainer.java:578)
    ----- Root Cause -----
    java.lang.OutOfMemoryError
[21/Feb/2005:09:45:04] warning ( 8528): HTTP3039: terminating with 6 sessions still in progress
    [21/Feb/2005:09:45:04] failure ( 8528): CORE3189: Restart functions not called since 6 sessions still active
    Please suggest.

  • Overcoming Out of Memory Error

Hi, I need to initialize a 3-dimensional array at
[21] [21] [500,000] = 220,500,000 * 8 bytes = 1,764,000,000 bytes = 1.764 GB.
It's currently at 1/100 of that size,
[21] [21] [5000] = 2,205,000 * 8 bytes = 17,640,000 bytes = 17 MB,
and working fine.
I am getting an Out of Memory error with anything much higher.
What is the max I can set the OS (Windows 2000 or XP) to accept?
Thanks a lot!

Haha, thanks guys (or girls), that's what I figured.
This is actually a graphing program that reads in long, strangely formatted files (3,000 lines),
sorts and arranges them, and then plots them.
Every plot requires at least 40 columns, all with the same length (always above 2000).
Someone insists that 5,000 plot points isn't enough, so I'm humoring them and looking into making it bigger.
I'd think about plotting straight off the files themselves, but in XP the plotting is ridiculously slow as it is; we're talking 2 or 3 second refreshes to load a JPanel with the plot.
I think I'll just let the user reconfigure where the length will be,
as in enter [x] [y] [z], so you can do [1] [40] [55,000] or whatever,
as opposed to the standard [21][21][5000].
Thanks for your responses!
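As a small sketch of the arithmetic discussed above (the class name is hypothetical): the code estimates the raw data size of the requested array, compares it with the heap ceiling the JVM was started with (-Xmx), and only then allocates. Note that a 32-bit JVM on Windows 2000/XP usually cannot grow its heap much beyond roughly 1.2-1.5 GB of contiguous address space, so the full [21][21][500,000] array of doubles is unlikely to fit there regardless of the -Xmx setting.

/** Hypothetical helper: check whether the requested plot array can fit in the heap. */
public class ArraySizeCheck {
    public static void main(String[] args) {
        int x = 21, y = 21, z = 500000;

        // Each double is 8 bytes; object headers and the array-of-arrays
        // structure add a little overhead on top of this raw figure.
        long dataBytes = (long) x * y * z * 8L;
        long maxHeap = Runtime.getRuntime().maxMemory();   // roughly the -Xmx value

        System.out.printf("array data ~ %,d bytes (%.2f GB)%n", dataBytes, dataBytes / 1e9);
        System.out.printf("max heap   ~ %,d bytes (%.2f GB)%n", maxHeap, maxHeap / 1e9);

        if (dataBytes > maxHeap) {
            System.out.println("Will not fit: raise -Xmx or reduce the dimensions.");
            return;
        }

        double[][][] plot = new double[x][y][z];   // the allocation that was failing
        System.out.println("Allocated " + plot.length + " planes.");
    }
}

If the full size really is needed, a single flat double[x * y * z] indexed as data[(i * y + j) * z + k] avoids the per-row object overhead, although it does not change the underlying address-space limit.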

  • Error with ini files & Memory Error

    Hi All,
I am attaching my VI which is used to save the control values. I am doing it with a configuration file, but sometimes my configuration file gets erased completely and my data is lost; otherwise saving the control values works perfectly fine. How can I stop this so that my configuration data is safe, or is there another method to save the control values?
Secondly, I have another error in a different VI which keeps coming up (image attached as "Memory Error"). I tried reinstalling LV and XP, and even changed the PC, but the problem persists.
    Could someone please try to throw some light on these issues.
    Thanks & Regards
    James
    Attachments:
ini error.PNG 24 KB
Save control values.vi 50 KB
Error2.JPG 1 KB

    Your Error2 image is not viewable. Looks like a corrupt upload.
    Your fundamental problem lies in the first part of your code. If the INI file doesn't exist, then when you open it, a new one gets created. There is no SVALUE key, so you are trying to loop 0 times. Since the first for-loop does not execute, the INI file refnum does not get transferred through the tunnel, and subsequent INI file functions receive a null refnum. You need to either wire the refnum outside the loops or use shift registers for the refnum.
    As to your code: OK...
    You must really like whitespace, because that's the only reason I can see for having such a huge while loop that just sits there.
    You really need to learn to draw straight wires.
    You are jumping through unnecessary hoops. You are taking the control values, bundling them into a cluster, converting the cluster into an array, and then autoindexing the array while at the same time setting the "N" terminal of the for-loop. Whew!
    Skip the bundling. Just make an array directly!
    Do not wire both the "N" terminal of a for-loop and autoindex. Do one or the other. Since you have the array, just auto-index.
    The Index Array function is resizable. You do not need to place 2 of them to get the 2 values.
    Your string constants are in 2 places. You should have them in one place only. If you decide to change the names then you need to remember to change them in 2 places.

  • Out of Memory Error - not a RAM issue

I'm using Flash 8 and am getting an out of memory error simply trying to copy and paste text in Flash. The error tells me to allocate more memory to Flash by getting info on the app in the Finder. This seems strange to me because I thought that was an OS 9 function (I'm running OS X 10.4.9). When I Get Info, there is actually a twirl-down section for Memory, but everything is grayed out so I can't change it (it's currently set to 512 KB). I don't have Flash open at the time.
    Does anyone know what might be going on?
    Thanks ahead of time.
    David

The application is downloaded from a Tomcat server and launched by Java Web Start.
    The settings of the JVM are :
    <j2se version="1.6.0.4+" java-vm-args="-Xms20m -Xmx256m -XX:MinHeapFreeRatio=30 -XX:MaxHeapFreeRatio=50" href="http://java.sun.com/products/autodl/j2se" />
So the heap can grow up to 256 MB, which I think is sufficient for the application.
Sometimes the test scenario, which sends the same messages periodically without ending, doesn't cause any problem for the application: the amount of heap it uses remains stable and the application seems able to run indefinitely (I let it run for a week without stopping).
Sometimes, with the same scenario, after a few hours of runtime the amount of heap used by the application increases suddenly (within 2 or 3 minutes). The GC throws an OOM error and the application freezes. The only solution is to re-launch it.
The behavior of the JVM and the GC is beyond my understanding.
Should I change the tuning parameters of the JVM?
Would it be a solution to set the Xms and Xmx parameters to the same value, for example 246 MB?
Are there other JVM parameters that could improve the running of the application?
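One way to confirm that the heap really does grow suddenly just before the OOM is to sample heap usage from inside the application (or simply run with -verbose:gc, or attach jstat). Below is a minimal sketch of that idea; it assumes nothing about the application itself, and the class name is hypothetical.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

/** Hypothetical heap sampler: prints heap usage once a minute. */
public class HeapLogger {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = mem.getHeapMemoryUsage();
            System.out.printf("heap used=%,d committed=%,d max=%,d bytes%n",
                    heap.getUsed(), heap.getCommitted(), heap.getMax());
            Thread.sleep(60000);   // one sample per minute
        }
    }
}

In the real application this loop would run on a daemon thread started at launch. Setting -Xms equal to -Xmx avoids heap resizing pauses but will not prevent the error if something genuinely retains 256 MB of objects; a heap dump taken on OOM (-XX:+HeapDumpOnOutOfMemoryError on recent VMs) is usually the quickest way to see what grew.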

  • Safari behaves strangely after upgrading to Lion

After upgrading to Lion, Safari is behaving strangely. When I re-open Safari after quitting, all the windows from the previous session re-open. This didn't use to happen and it is quite annoying. How can that be fixed? I only want my homepage to open. Even if I close each window individually with the red x-button and then quit Safari normally, it re-opens everything as before. Any ideas?
    thanks

    It is one of the over 250 new features of the World's Most Advanced Operating System called Resume.
If this is all that's bothering you, just close all windows before quitting Safari, or hold Shift when you launch Safari.
    If the whole resume thing is overbearing then...
    Also in System Preferences > General there is a hard-to-find checkbox under "Number of recent items" you can turn off.
    Also, you can hold the shift key to disable resume on a one time basis.
If you want to turn it off on a per-app basis (TextEdit is the example here; replace TextEdit with the name of the app):
    Launch Terminal and copy/paste this at the prompt...
    defaults write com.apple.TextEdit NSQuitAlwaysKeepsWindows -bool false
    Press return.
You can also accomplish this through the GUI by going to ~/Library/Saved Application State/TextEdit and deleting that file.
    To turn off Resume globally...
    chflags uchg ~/Library/"Saved Application State"
    Press return
    The reverse of the first one is to replace false with true.
    The reverse of the second one is
    chflags nouchg ~/Library/"Saved Application State"
    Again, you can accomplish this through Finder by going to ~/Library/Saved Application State and deleting the folder.

  • Ssh to Mac: periodic "out of memory" errors abort connection at login

    I have a frequent, but not constant problem...
    when I ssh to my Mac at home, from Linux boxes from work, and/or from my PC laptop (using putty), I often have to attempt to connect several times. The failing attempts ask for my password and seem to complete the login process.
    But at the 1st command I enter at the shell prompt, I see the message
    tcsh: out of memory
    very briefly, and then the window closes itself, and poof no more ssh connection...
    This seems to go in spurts, then one of the windows will be fine.
    For a while, it seemed that doing a quick ls as my first command worked around the problem, but I'm finding that not to be the case while on this trip (with my laptop and putty).
    I've done a pretty wide search and am coming up with very little on this.
    Any ideas?
Again, sometimes it works and ssh operates as expected; other times I can get several OOM errors in a row before finally getting a working session. This is not a new problem; it has happened for at least a year, but seems to be worse lately.
    Thanks,
    Mike
    Mini Duo   Mac OS X (10.4.7)  

    Thanks, I'll give this a shot when I back in town this weekend...
    in the meantime, I've played around some more.. commented out most of my .login/.cshrc etc, to see if there were any culprits..
Over the ssh connection, if I try to change shells I can sometimes see the out of memory error there too, and the shell launch fails, kicking me back into the parent shell. When there is no "parent" shell (the original ssh login), the failure has nowhere to fall back to and the session terminates.
    I do not recall ever seeing this from Terminal (at my mac itself).
    So I guess that begs the question.. what is different about getting to a shell prompt from a remote ssh session vs. being there in person with Terminal?
    Mike

  • Extractor 1_co_pa_* behaving strangely

    Hi,
I am facing a very strange problem.
Extractor 1_CO_PA_* has been enhanced in CMOD, and a query has been written which is now behaving strangely.
    CLEAR : w_tabix1.
    t_copa[] = c_t_data[].
    SELECT vbeln matnr pstyv vertn posnr kvgr2 kvgr1 fbuda abrbg netwr fkimg
                  INTO TABLE it_vbrp
                  FROM vbrp
                  FOR ALL ENTRIES IN t_copa
                  WHERE vbeln = t_copa-rbeln
                    AND matnr = t_copa-artnr.
    *                AND pstyv IN ('ZB2S','ZB23','ZB2M','ZB2Y','ZD2S','ZD23'
    *'ZD2M','ZD2Y','ZI2S','ZI23','ZI2M','ZI2Y').
We have not moved any changes to production, yet out of the blue, for only 48,000 records in the t_copa table it is fetching 17 million entries, thus giving a memory dump.
    thanks.

    Hi,
    Please give more details, so we can help.

  • A FIX for error message: When I try to open Snood (it's a game) I get this message.  Not enough memory {Error # :: 0, in sound.cp@line 101  Can you help?

    After years of playing Snood, w/o problems, I started getting this error message, on my iMac, OS 10.5.8,
    with 4 GB of memory when opening Snood:  Not enough memory {Error # :: 0, in sound.cp@line 101
    My MacBook Pro w. Mac OS 10.6.8 did not have this problem.
    Initially I thought that Snood raised its minimum requirement to Mac OS 10.6.
    I had several correspondences with Snood. Their tech support is great. Quick and thorough responses.
    They thought the issue was in Mac's system preferences/ Sound. It was.
    I didn't realize that my sound input and output devices were gone.
    The fix was resetting the PRAM. I found this advice on MacFixIt.com.
    MacFixIt help with volume:   http://reviews.cnet.com/8301-13727_7-10415659-263.html
    Resetting the PRAM is on Apple support:   http://support.apple.com/kb/HT1379
    My sound (music!) is back, along with Snood. So glad I reset the PRAM before reinstalling the OS software!
    Thank you to Snood, MacFixIt and Apple.
    Happy new year all!

    Good work, nice post/tip, thanks!

  • Acrobat XI Pro "Out of Memory" Error.

We just received a new Dell T7600 workstation (Win7, 64-bit, 64GB RAM, 8TB disk storage, and more processors than you can shake a stick at). We installed the Adobe CS6 Master Collection, which included Acrobat X Pro. Each time we open a PDF larger than roughly 4 MB, the program returns an "out of memory" error. After running updates and uninstalling and reinstalling (several times), I bought a copy of Acrobat XI Pro hoping this would solve the problem. The same problem still exists upon opening the larger PDFs. Our business depends on opening very large PDF files and we've paid for the Master Collection, so I'd rather not use a freeware PDF reader. Any help, thoughts, and/or suggestions are greatly appreciated.
    Regards,
    Chris

As mentioned, the TEMP folder is typically the problem. MS limits the size of this folder and you have 2 choices: 1. empty it or 2. increase the size limit. I am not positive this is the issue, but it does crop up at times. It does not matter how big your hard drive is; it is a matter of the amount of space that MS has allocated for virtual memory. I am surprised that there is an issue with 64 GB of RAM, but MS is real good at letting you know you can't have it all for use because you might want to open up something else. That is why a lot of big packages turn off some of the limits of Windows or use Linux.
