Replication with in-memory DB: client synchronization

Hi,
I'm using the replication framework (repmgr) with two completely in-memory databases. The first process is launched as master without any knowledge of its replica ("dbenv->repmgr_add_remote_site" and "dbenv->repmgr_set_nsites" are not called), some data is inserted into it, and then the replica process is launched as client (in this case "repmgr_add_remote_site" and "repmgr_set_nsites" are called with the master's coordinates). I expected the master to synchronize the client with the previously inserted records, but this doesn't seem to happen. Furthermore, although the client opens the database successfully, when db->get is called on the client the following error is returned:
"DB->get: method not permitted before handle's open method".
These are the first messages printed by the master when the client process is started:
MASTER: accepted a new connection
MASTER: got handshake 10.100.20.106:5066, pri 1
MASTER: handshake introduces unknown site
MASTER: EID 0 is assigned for site 10.100.20.106:5066
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 0 eid 0, type newclient, LSN [0][0] nogroup
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid -1, type newsite, LSN [0][0] nobuf
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid -1, type newmaster, LSN [1][134829] nobuf
MASTER: NEWSITE info from site 10.100.20.106:5066 was already known
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 0 eid 0, type master_req, LSN [0][0] nogroup
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid -1, type newmaster, LSN [1][134829] nobuf
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type update_req, LSN [0][0]
MASTER: Walk_dir: Getting info for dir: ./env
MASTER: Walk_dir: Dir ./env has 2 files
MASTER: Walk_dir: File 0 name: __db.rep.gen
MASTER: Walk_dir: File 1 name: __db.rep.egen
MASTER: Walk_dir: Getting info for in-memory named files
MASTER: Walk_dir: Dir INMEM has 1 files
MASTER: Walk_dir: File 0 name: RgeoDB
MASTER: Walk_dir: File 0 (of 1) RgeoDB at 0x41ee2018: pgsize 65536, max_pgno 1
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type update, LSN [1][134829] nobuf
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type page_req, LSN [0][0]
MASTER: page_req: file 0 page 0 to 1
MASTER: page_req: found 0 in dbreg
MASTER: sendpages: file 0 page 0 to 1
MASTER: sendpages: 0, page lsn [1][218]
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [1][134829] nobuf resend
MASTER: wrote only 13032 bytes to site 10.100.20.106:5066
MASTER: sendpages: 0, lsn [1][134829]
MASTER: sendpages: 1, page lsn [1][134585]
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [1][134829] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: sendpages: 1, lsn [1][134829]
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log_req, LSN [1][28]
MASTER: [1][28]: LOG_REQ max lsn: [1][134829]
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][28] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131549] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131633] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131797] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131877] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131961] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132125] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132205] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: queue limit exceeded
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132289] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: queue limit exceeded
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132453] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: queue limit exceeded
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132533] nobuf resend
And these are the corresponding messages printed by the client process after startup:
REP_UNDEF: rep_start: Found old version log 13
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type newclient, LSN [0][0] nogroup nobuf
Slave becomes slave
Replication service started
CLIENT: starting election thread
CLIENT: elect thread to do: 0
CLIENT: repmgr elect: opcode 0, finished 0, master -2
CLIENT: init connection to site 10.100.20.105:5066 with result 115
CLIENT: got handshake 10.100.20.105:5066, pri 1
CLIENT: handshake from connection to 10.100.20.105:5066
CLIENT: handshake with no known master to wake election thread
CLIENT: reusing existing elect thread
CLIENT: repmgr elect: opcode 3, finished 0, master -2
CLIENT: elect thread to do: 3
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type newclient, LSN [0][0] nogroup nobuf
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type newsite, LSN [0][0]
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type master_req, LSN [0][0] nogroup nobuf
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type newmaster, LSN [1][134829]
CLIENT: repmgr elect: opcode 0, finished 0, master -2
CLIENT: Election done; egen 6
CLIENT: Updating gen from 0 to 5 from master 0
CLIENT: Egen: 6. RepVersion 4
CLIENT: No commit or ckp found. Truncate log.
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type update_req, LSN [0][0] nobuf
New Master elected
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type newmaster, LSN [1][134829]
CLIENT: Election done; egen 6
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type update, LSN [1][134829]
CLIENT: Update setup for 1 files.
CLIENT: Update setup: First LSN [1][28].
CLIENT: Update setup: Last LSN [1][134829]
CLIENT: Walk_dir: Getting info for dir: ./env
CLIENT: Walk_dir: Dir ./env has 5 files
CLIENT: Walk_dir: File 0 name: __db.rep.gen
CLIENT: Walk_dir: File 1 name: __db.rep.egen
CLIENT: Walk_dir: File 2 name: __db.rep.init
CLIENT: Walk_dir: File 3 name: __db.rep.db
CLIENT: Walk_dir: File 4 name: __db.reppg.db
CLIENT: Walk_dir: Getting info for in-memory named files
CLIENT: Walk_dir: Dir INMEM has 0 files
CLIENT: Next file 0: pgsize 65536, maxpg 1
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type page_req, LSN [0][0] any nobuf
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [1][134829] resend
CLIENT: PAGE: Received page 0 from file 0
CLIENT: PAGE: Write page 0 into mpool
CLIENT: PAGE_GAP: pgno 0, max_pg 1 ready 0, waiting 0 max_wait 0
CLIENT: FILEDONE: have 1 pages. Need 2.
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [1][134829] resend
CLIENT: PAGE: Received page 1 from file 0
CLIENT: PAGE: Write page 1 into mpool
CLIENT: PAGE_GAP: pgno 1, max_pg 1 ready 1, waiting 0 max_wait 0
CLIENT: FILEDONE: have 2 pages. Need 2.
CLIENT: NEXTFILE: have 1 files. RECOVER_LOG now
CLIENT: NEXTFILE: LOG_REQ from LSN [1][28] to [1][134829]
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log_req, LSN [1][28] any nobuf
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][28] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][64] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][147] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][218] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][65802] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131386] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131469] resend
It seems like there are repeated messages from the master, but I'm not able to understand what's wrong.
Thanks for any kind of help,
Marco

The client requests copies of the database pages from the master by sending a PAGE_REQ message. The master responds by sending one message per page (i.e., many PAGE messages). The master sends PAGE messages as fast as it can, subject only to the throttling configured by rep_set_limit (default 10MB).
With a 64KB page size, the master's local TCP buffer fills up immediately, and repmgr stores a backlog of only 10 additional messages before it starts dropping messages. The replication protocol is designed to tolerate missing messages: if you were to let this run, and continue to commit new update transactions at the master at a modest rate, I would expect the synchronization to complete eventually.
However, repmgr could clearly do better at managing the traffic to avoid this situation, at least in cases where the client is accepting input at a reasonable rate. I am currently working on a fix/enhancement to repmgr which should accomplish this. (This same problem was reported by another user a few days ago.)
In the meantime, you may be able to work around the problem by setting a low throttling limit. With your 64KB page size, I would try something in the 320,000 to 640,000 byte range.
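The suggested workaround can be sketched with DB_ENV->rep_set_limit, called before opening the environment. The 320KB value below is just the low end of the suggested range (roughly 5 of the 64KB pages per burst) and is a starting point to tune, not a recommendation:

```c
#include <db.h>

/* Sketch: cap the outgoing replication data sent in a single burst to
 * ~320KB (about 5 x 64KB pages) so repmgr's message queue is less likely
 * to overflow. Call before DB_ENV->open(). */
void set_throttle(DB_ENV *dbenv)
{
    /* rep_set_limit(env, gbytes, bytes) */
    dbenv->rep_set_limit(dbenv, 0, 320 * 1024);
}
```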
Alan Bram
Oracle

Similar Messages

  • 3rd party distributed SW load balancing with In-Memory Replication

              Hi,
              Could someone please comment on the feasibility of the following setup?
              I've started testing replication with a software load-balancing product. This
              product lets all nodes receive all packets and uses a kernel-level filter
              to let only one node at a time receive each packet. Since there is a minimum of
              one heartbeat between the nodes, there are several NICs in each node.
              At the moment it seems like it doesn't work: - I use the SessionServlet - with
              a 2-node cluster I first have the 2 nodes up and I access it with a single client:
              . the LB is configured to be sticky wrt. source IP address, so the same node gets
              all the traffic - when I stop the node receiving the traffic, the other node takes
              over (I changed the colours of SessionServlet) . however, the counter restarts
              at zero
              From what I read of the in-memory replication documentation, I thought that it
              might also work with a distributed software load-balancing cluster. Any comments
              on the feasibility of this?
              Is there a way to debug replication (in WLS6SP1)? I don't see any replication
              messages in the logs, so I'm not even sure that it works at all. - I do get a
              message about "Clustering Services starting" when I start the examples server
              on each node - is there anything to look for in the console to make sure that
              things are working? - the evaluation license for WLS6SP1 on NT seems to support
              In-Memory Replication and Cluster. However, I've also seen a Cluster-II somewhere:
              is that needed?
              Thanks for your attention!
              Regards, Frank Olsen
              

    We are considering Resonate as one of the software load balancers. We haven't certified
              them yet. I have no idea how long it's going to take.
              As a base rule, if the SWLB can do the load balancing and maintain stickiness, that is fine
              with us, as long as it doesn't modify the cookie or the URL if URL rewriting is enabled.
              Having said that, if you run into problems we won't be able to support you, since it is not
              certified.
              -- Prasad
              Frank Olsen wrote:
              > Prasad Peddada <[email protected]> wrote:
              > >Frank Olsen wrote:
              > >
              > >> Hi,
              > >>
              > > We don't support any 3rd party software load balancers.
              >
              > Does that mean that there are technical reasons why it won't work, or just that
              > you haven't tested it?
              >
              > > As I said before, I am thinking your configuration is incorrect if in-memory
              > > replication is not working. I would strongly suggest you look at the webapp deployment
              > > descriptor and then the config.xml file.
              >
              > OK.
              >
              > > Also, doing sticky based on source IP address is not good. You should do it based
              > > on passive cookie persistence or active cookie persistence (with cookie insert,
              > > a new one).
              > >
              >
              > I agree that various source-based sticky options (IP, port; network) are not the
              > best solution. In our current implementation we can't do this because the SW load
              > balancer is based on filtering IP packets on the driver level.
              >
              > Currently I'm more interested in understanding whether our SW load balancer
              > can work with your replication at all?
              >
              > What makes me think that it could work is that in WLS6.0 a session failed over
              > to any cluster node can recover the replicated session.
              >
              > Can there be a problem with the cookies?
              > - are the P/S for replication put in the cookie by the node itself or by the proxy/HW
              > load balancer?
              >
              > >
              > >The options are -Dweblogic.debug.DebugReplication=true and
              > >-Dweblogic.debug.DebugReplicationDetails=true
              > >
              >
              > Great, thanks!
              >
              > Regards,
              > Frank Olsen
              

  • Setting up replication with client written in C++ and DB master in Java

    Hi
    I am trying to write a C++ client that will join a replicated environment created by a master written in Java. For the Java side, I am using the Berkeley DB Java 4.0.103 API.
    So far, I have written the master in Java, which creates a replicated environment. I also have a client written in Java that joins this replication environment and synchronizes all the data from the master. Now I want to convert this Java client to C++ (as our main application that will be using this is written in C++), and I am wondering if there will be any problems in serializing Java objects to C++ objects.
    Has anybody here done something like this? I wanted to make sure this is possible before I try it out. Any pointers in the right direction will be helpful.
    Thanks,
    -Chirag

    It is important to understand that BDB JE and BDB (C based edition) are two different products, not just two different APIs. In general you should normally choose one or the other. BDB JE is most appropriate for pure Java apps.
    BDB has APIs in many languages, including Java, C and C++. BDB JE, which is pure Java, has only a Java API.
    If you want to avoid a dependency on Java on the client (i.e., you want to use only C and C-based libraries), and you want to use a replication group that includes the client and server, then your only option is to use BDB (not BDB JE) on both client and server. On the server, you can use the Java API for BDB (this is not BDB JE), and use the C/C++ API for BDB on the client. With this option, you can make your data portable by using tuple bindings and writing C/C++ equivalents, as described in the other thread that you referenced.
    If you want to write your client app in C/C++, but use BDB JE on both client and server, it is possible to use the JNI invocation APIs on the client to make calls to BDB JE, as Linda mentioned. In this case, you don't need to worry about data portability, since you can use the same bindings (e.g., Java tuple bindings) on both client and server. However, your client app will depend on Java, and I think you'll find that using JNI invocation in this manner will be unwieldy.
    --mark

  • I have Windows 7 and my Firefox client will not open, but it shows in Task Manager with 108k memory. What's going on with my Firefox?

    As stated in the question, I have Windows 7 and meet all the requirements. But when I launch Firefox it won't start, although it shows in Task Manager with 108k memory.

    Got the exact same problem as well. I'm finally fed up with it now, as I just started up Firefox and 14 new windows opened because of this bug. Luckily my computer can handle them, but for someone with a slower processor it would have been a nightmare; this needs fixing ASAP.
    Reverting back to 3.6.3 until this issue is solved (link below, for anyone wanting to do the same):
    http://www.filehippo.com/download_firefox/7345/

  • Possible deadlocks with in-memory database using Java

    I've written a completely in-memory database using the Java API on BDB 4.6 and 4.7 for Windows and Linux (x86). The completely in-memory database means the database content and logs are entirely in-memory and the overflow pages will not be written to a disk file.
    The database environment and the database are configured to be transactional. All database access methods are specified to be auto-commit by setting the transaction argument to null. The environment is configured to be multi-threaded (which is the default when using the Java API).
    When run with a single-threaded client, the application works correctly on both Windows and Linux for BDB 4.6 and 4.7.
    When run with a multi-threaded client that uses two threads for database access, I run into a deadlock inside the call to the Database.delete method about half the time.
    I am assuming that in the "auto-commit" mode, a deadlock should not be possible.
    Any reported problems with using Java with in-memory database?
    Thanks.
    Hisur

    Hi Hisur,
    If you are using transactions and multiple threads, you will have to deal with deadlock. In this particular case, it's likely that a delete is causing two btree pages to be merged (called a "reverse split"). Auto-commit makes no difference in this case -- the application must retry the operation.
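    The retry described above can be sketched as follows. This is shown against the BDB C API for brevity (the Java API used in the question surfaces the same condition as a DeadlockException to catch); the function and parameter names are illustrative, not from the original thread:

```c
#include <db.h>

/* Sketch: retry an auto-commit delete when this thread is chosen as the
 * deadlock victim (DB_LOCK_DEADLOCK), e.g. during a btree reverse split. */
int delete_with_retry(DB *dbp, DBT *key, int max_tries)
{
    int ret = 0, tries;

    for (tries = 0; tries < max_tries; tries++) {
        /* NULL txn => auto-commit in a transactional environment. */
        ret = dbp->del(dbp, NULL, key, 0);
        if (ret != DB_LOCK_DEADLOCK)
            return (ret);   /* success, DB_NOTFOUND, or a hard error */
        /* Deadlock victim: the operation was rolled back; retry it. */
    }
    return (ret);
}
```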
    Regards,
    Michael Cahill, Oracle.

  • JDK errors in Multiprocessor with weak memory model

    When reading this article to understand some of the Double-check locking problems :
    I was surprised by point 4. Apart from the String example, it means that any immutable object built during a single-threaded phase has no guarantee of being read in a fully initialized state later, in a multi-threaded phase.
    So I checked the implementation of some JDK classes to see how some static fields are managed. The first I checked (in JDK 1.4.1) is Locale.getDefaultLocale().
    LOL in fact it's a case of double-check locking.
    The internal comment is:
    // do not synchronize this method - see 4071298
    // it's OK if more than one default locale happens to be created
    What is 4071298?
    Secondly, setting the default locale twice isn't the problem with double-checked locking.
    So is there any explanation why this is OK? Or is it just a JDK example of incorrect double-checked locking?

    When talking about languages that run in a virtual machine, concern yourself only with the virtual machine, because that is where your code will be executed.
    When talking about code that runs in a physical machine, concern yourself with the physical machine (CPU/RAM/etc.) because that is where your code will be executed.
    In summary, concern yourself with the memory model of the machine that will run your code. Which is the Java VM in this case.

  • Directory Server with huge memory - performance issue

    Hi.
    We're using Sun Java System Directory Server 5.2 Patch 6 on a Sun T5240
    (UltraSPARC T2) with 64GB memory. After setting DBCache + EntryCache > 20GB,
    some performance degradation occurred.
    (A) Sometimes the ping (ICMP echo/reply) response time was delayed
    (from the same network segment it occasionally exceeded 100ms,
    though the response time is usually under 1ms).
    (B) Small freezes (OS? SDS?) occurred. LDAP search response time (from other
    clients) was delayed (the access log's etime was <1, but in my experience
    it was delayed).
    I've read the following article:
    http://wikis.sun.com/display/SunJavaSystem/Directory+Server+Cache+Sizing
    This article suggests pagesize tuning. But using the pmap command, I can see
    ns-slapd was already using 4MB pages, and after "ppgsz -o heap=256M",
    ns-slapd is still using 4MB pages.
    So, I have 2 questions.
    (1) How can I change the pagesize ns-slapd uses? I want to use the 256MB pagesize
    (supported by Solaris 10 / T5240).
    (2) Are the above-mentioned problems due to pagesize?
    Regards,
    Nokamoto

    "Setting DBCache + EntryCache > 20G, some performance degradation occurred." Based on real-world observations, memory consumption of the directory server can be up to three times the configured entry cache size plus the db cache size, so we can expect your process size to be inching toward your physical memory limit. This would in turn most likely cause performance degradation, as other processes also need RAM and the OS would start paging to disk.
    Do you need to increase cache to 20GB?

  • Will OS X 10.4 Server work with a 10.7 client?

    My client has a four-Mac network and one of those is the server, running OS X Server 10.4.11. This machine is very solid and there's no reason at all to do anything other than add a bit of memory to it (it is an Aluminum MacPro G5). We need to upgrade two of the client machines to Lion for software reasons. So the question is: will 10.4 server have any problems with a 10.7 client, and vice versa?
    thanks!

    Hi
    If all you're talking about is simple file sharing, no. You might struggle if you want to use LDAP, although you won't really know until you try.
    Just to clarify, there's no such thing as an Aluminium MacPro G5. A Mac Pro is Intel-processor only and succeeded the PowerMac G5 models that were available before the Intel take-up. Superficially they look similar but are completely different once you look inside.
    Mactracker is a useful database on most things Apple which you can download for free:
    http://mactracker.ca/
    HTH?
    Tony

  • How to upload data from a flat file to ztables within the same client using IDocs

    Hi Experts,
    I have a requirement in IDocs: I need to create a custom IDoc. I am working on IDES 4.6c. The requirement is that there are ztables with header and item data, say for example Authors and Books. I need to upload data from a flat file, available on the presentation server of the same client, into the ztables by using IDocs. For this I also need to do the ALE settings. The client is 800; there is no other client available, so everything must happen within the same client.
    For this requirement, how do I proceed (step by step)?
    Thanks in Advance.
    Regards
    J.S.Varma

    Hi,
      This is the procedure:
    create segments using WE31 <b>don't forget to release them</b>
    create the IDoc type from the above segments using WE30 <b>don't forget to release it</b>
    create a message type using WE81
    create a function module to upload the data using SE37
    maintain a process code using WE42
    create partner profiles using WE20.
    In the function module itself, write the code for reading the data from the presentation server with GUI_UPLOAD.
    Then update the database tables directly by inserting through another internal table, in the same client itself.
    Thanks
    Manju

  • Memory issues - kernel panics / system freezes with 1Gb Memory Card

    I've recently updated my PowerBook OS to 10.4.7 and now more recently 10.4.8.
    Prior to this, i had been running a pair of pc2700 memory cards (1x1Gb + 1x 512Mb) bought from crucial technology on 10.3.9 with no problems.
    During the upgrade, I experienced complete system lock-ups and kernel panics. Pulling the 1GB card resolved the problem.
    So I thought it could be duff memory (it happens). I ran the hardware checker, both long and short tests, and ran memchecker from single-user mode, and there were no reported errors, although I was still getting crashes.
    I returned the memory to Crucial and they replaced it, and then I went to reinstall the memory in the PB.
    Once reinstalled, the problems re-emerged. Both System Profiler and iStat Pro show both memory sticks as OK; both are PC2700 DDR.
    I tried just the 1GB stick in alternate memory slots. With the memory in the lower slot I was getting system freezes; in the top slot I was getting kernel panics.
    I've pulled the 1GB and reverted to 768MB (512MB lower (Crucial) / 256MB (Apple) upper) and I'm back to normal.
    Now what?
    As my Mac is getting on, I'd like to get as much RAM in it as I can, ideally the 1.5GB/2GB which it can support. I'd be surprised if the memory stick were duff (2 duff in a row?), and it's not a faulty memory slot, as System Profiler shows that it's full.
    Any suggestions? Anything to do with how 10.4.8 handles memory addressing?
    Any help would be appreciated.
    PowerBook G4 1Ghz FW800   Mac OS X (10.4.8)  

    I've had that problem, too. I found out that it was one of my memory modules that was broken.
    I'd say pop your other memory module back in, then get out your PowerBook DVD and stick it in. Restart your PowerBook while holding down the Option key on your keyboard until some buttons start to show up on your screen.
    Select the Hardware Test one, then click the Arrow button. It'll boot into a utility and you can run a hardware test that includes checking your memory.
    If any of your memory is bad, it'll show up like "**ERROR ERROR**" and give you a code.
    After you've identified the hardware problem (hopefully just the RAM), you can go to Crucial's website and fill out an RMA request to have the memory replaced. I'm pretty sure Crucial has a lifetime warranty on their memory products.

  • iMac 27" (2013) won't work with Axiom Memory

    I purchased an iMac earlier this year from MacMall and although I intended to purchase my own memory, I fell victim to their "sweet deals" and opted for them to upgrade the memory. They replaced the stock 8gb memory with a 32gb kit.
    I had problems as soon as it arrived and unfortunately delayed contacting them about support for it until I was past the 30 day warranty.
    My original issue was that the computer recognized all 32GB, but every hour or so it would randomly crash and reboot with no warning. I ran memory tests against the RAM and it always came back saying it was working. I took the memory out and found they had used the correct memory, as far as the label shows, and it is low voltage. I looked for solutions online and was unable to solve this issue. I contacted MacMall and of course they said to go to Apple or the manufacturer. I did find some others with the same issue, but I figured my best bet was the manufacturer.
    I reached out to Axiom and surprisingly their support is great and respond quickly. They immediately said they would ship out replacement memory. Which is great though it was pretty easy to get to this point so I assume they know of the issue. Once I received it I connected the 32gb they shipped out into the 4 banks on my iMac. It would not start up now. This is an even worse issue. I tried to use different slots and then I tried just two memory sticks in the top two slots which worked for the stock memory. No matter what I did it would not start up. I put the original memory back in and it worked fine.
    I reached out to them and they sent another set. I tried this set and had the same issue: it would not start up. I put my original memory back in and it starts fine and everything is working great.
    Now my next steps are to wait for Axiom on another possible solution and to go to the Apple Store to see if they can test a different brand. When I originally spoke to someone at Apple they told me I should have purchased Crucial or Corsair. While I don't disagree I do not want to spend over $300 again for memory.
    Anyone else have any ideas? I was thinking that perhaps there is a way to reset hardware with a command when starting the computer, the kind used to troubleshoot some issues, but I'm not sure if that could be related here.

    Well, I went through the process and wasted a ton of time. I have determined this iMac is just garbage. I ended up working with Axiom and they sent me three sets of replacement memory. All of them had the same result: once the RAM was installed, the iMac would not turn on. When I switched back to the original memory, I noticed that it wouldn't start the first time, but after I swapped memory slots it would work.
    Axiom finally decided to refund me for the 32gb memory purchase after exhausting all of their support options. At this point, I went to Crucial since when I spoke with Apple support they confirmed that it could be the brand and I should opt for Crucial or a couple of other brands. I went ahead and ordered 16gb of memory from Crucial - I figured I would save some cash for now to test and confirm 16gb works.
    I installed the two 8gb sticks they sent and had the same results as with the Axiom memory. I tried swapping to different slots, but it refused to turn on with this memory. Worse, when I reverted to the original Apple memory, it would no longer start up with the original 8gb either. I eventually got the computer to start, but only with a single 4gb stick of the original memory (which is what I am using to type up this post now).
    I will try turning it off again and installing the other original stick so at least I am at a usable 8gb. I am going to reach out to Apple support again and see if I can schedule to bring it in. I am not sure what sort of fix there would be; I looked into available firmware updates and possible resets (like PRAM), but I don't believe anything will help. It is just very strange that there could be a hardware issue that lets the original memory work but nothing from any other vendor.

  • Msi Big Bang Xpower II can only run with 3 memory modules?

    Hello there! I've just got my new system up and running, but it can't boot with 4 memory modules when there is a module in slot 7. I have a 4x8gb kit from Corsair (CMP32GX3M4X1600C10) and an MSI Big Bang XPower II.
    When I start it, it just shows me the debug code "67", which is "Late CPU Initialization" according to the user guide. It stays there for about 15 seconds, then it shuts down and does the same thing, over and over.
    Atm I'm running with 3 modules, and there are no problems at all! I've tested ALL the memory modules in another computer, and they run just fine.
    What should i do?

    Quote from: xmad on 25-September-12, 21:09:52
    Also, what cpu, bios version , memory type etc
    Make sure the mem mods are in the proper slots for tri channel operation.
    >>Posting Guide<<
    If everything comes up clean, update to the most recent BIOS. Only do this if your computer is stable in the BIOS, i.e. you are only having crashes in Windows, etc.
    >>Use the MSI HQ Forum USB flasher<<
    http://www.msi.com/product/mb/Big-Bang-XPower-II.html#/?div=BIOS
    **Merged
    Core i7 3930K, V1.2 BIOS, 32GB Corsair Dominator quad-channel 1600MHz kit. The memory is installed as the manual says.

  • I want to ask something about Firefox: why does it use so much memory? Can you reduce its memory consumption? This problem is very disruptive on my PC with low memory.

    I want to ask something about Firefox.
    Why does Firefox use so much memory?
    Can you reduce its memory consumption?
    This problem is very disruptive on my PC, which has low memory.
    == This happened ==
    Every time Firefox opened

    How much memory is Firefox using right now?
    # Press '''CTRL+SHIFT+ESC''' to load the Task Manager window
    # Click the Processes tab at the top. (Click once near the top of the window if you don't see the tabs.)
    # Find firefox.exe, and see how many kilobytes of memory it's using.
    Showing around 80MB when Firefox first starts is normal. Right now, I have 75 tabs open and it's using 500MB - this varies a lot depending on what you have in the tabs.
    Other than high memory usage, what other problems are you experiencing? (Examples include slowness, high CPU usage, and failure to load certain sites)
    Many of these issues, including high memory usage, can be caused by misbehaving add-ons. To see if this is the case, try the steps at [[Troubleshooting extensions and themes]]. Outdated plugins are another cause of this issue - you can check for this at http://www.mozilla.com/plugincheck
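    The manual Task Manager check above can also be scripted. Here is a rough cross-platform sketch (my own, not an official Mozilla tool) that sums the resident memory of all processes matching a name; it assumes `tasklist` is available on Windows and `ps` everywhere else:

    ```python
    import subprocess
    import sys

    def process_memory_kb(name):
        """Approximate total resident memory (KB) of processes whose name
        contains `name`. Shells out to `tasklist` on Windows and `ps`
        elsewhere; returns 0 if no matching process is running."""
        total = 0
        if sys.platform.startswith("win"):
            out = subprocess.run(
                ["tasklist", "/FI", f"IMAGENAME eq {name}.exe"],
                capture_output=True, text=True).stdout
            for line in out.splitlines():
                if line.lower().startswith(name.lower()):
                    # Mem Usage is the second-to-last column, e.g. "81,234 K"
                    total += int(line.split()[-2].replace(",", ""))
        else:
            # rss=,comm= suppresses headers; RSS is reported in KB
            out = subprocess.run(["ps", "-eo", "rss=,comm="],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                parts = line.split(None, 1)
                if len(parts) == 2 and name in parts[1]:
                    total += int(parts[0])
        return total

    print(f"firefox is using about {process_memory_kb('firefox')} KB")
    ```

    Run it with Firefox open and again after disabling suspect add-ons to compare the totals.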

  • I have an Apple account, but it is not accepted for iCloud. It says the ID and password are correct, but it is not an iCloud account, so my phone cannot connect with my computer and cannot synchronize either.

    I have an Apple account, but it is not accepted for iCloud. It says the ID and password are correct, but it is not an iCloud account, so my phone cannot connect with my computer and cannot synchronize either. I have not used this phone for the last few months; I have just started using it again, so most probably I gave my old mail address as the ID or password. How can I clear this up? Regards

    ErolSinan wrote:
    ... there is no button for update between the About and Usage buttons in the General. ...
    Correct. That is only a feature of iOS 5 or later...
    ErolSinan wrote:
    ... yes my phone is 3G.
    then it can only go as far as iOS 4.2.1

  • If I buy an iPad 2 with more memory, can I transfer everything from my old iPad to the new one

    If I purchase an iPad 2 with more memory, can I transfer all the contents of the old iPad to the new one?

    You can back up your current iPad and then restore the new iPad from that backup (a list of what is included in a backup is in this article: http://support.apple.com/kb/HT4079 - it excludes music, videos, and synced photos). As the backup doesn't contain the actual apps, just their settings and content, for the restore to work completely you'll need all the relevant apps in your computer's iTunes library; otherwise the restore won't be able to install the apps and therefore their content. (If you don't have the apps on your computer, you can re-download them for free: http://support.apple.com/kb/HT2519)
    Restoring onto a different device won't restore passwords, so you'll need to enter your router and email passwords and any passwords stored on websites.
