Set up replica on OS X 10.8.5

Good afternoon. Today I tried to add a domain controller as a backup (replica), and as a result I get this error:
admin$ sudo /usr/sbin/slapconfig -preflightreplica master.local diradmin
master.local Password:
2013-10-11 11:43:58 +0000 NSMutableDictionary * _getRootDSE (const char *): rootDSE not found
2013-10-11 11:43:58 +0000 Error: Unable to determine the master's software version.
How can this be overcome?

Is the master OD server running the same version of OS X as your replica server? If it isn't, you will get an error saying it is incompatible with your version of OS X.
Since you're getting 'Error: Unable to determine the master's software version.', that may also be the case, just with another incompatible version of OS X or of the Server software in general.
Double-check which versions your master and soon-to-be replica are running.
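
If it helps, a quick way to compare the two machines is shown below (a minimal sketch; sw_vers is part of OS X, and serveradmin ships with OS X Server, so adjust if your setup differs):

# run on both the master and the intended replica
sw_vers -productVersion                 # OS X version, e.g. 10.8.5
sudo serveradmin fullstatus dirserv     # Open Directory role/status on OS X Server

If the product versions (and Server.app versions) don't match, update one side before running slapconfig -preflightreplica again.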

Similar Messages

  • When replicating customer from ERP, the address is not replicated to CRM.

    Hello.
    I'm facing a problem in which the address data of customers is not replicated to CRM.
    I've checked the business object BUPA_MAIN in transaction SMW01 and found that the task for addresses is "S" and it doesn't get a GUID.
    I'm not sure, but I suspect the error is caused by creating a new client in CRM.
    In other words, I suspect that the setting for replicating the customer address data is not active.
    Does anyone have knowledge of this problem?
    Best Regards.
    Miki

    Hi Hedy,
    Please go through the following note:
    Note 1511835 - Incomplete addr sent to CRM causing failed duplicate check
    Hope it answers your query.
    All the Best
    Regards,
    Srikanth.Naga

  • Read Only SQL 2012 Replica DB on a SQL 2014 box

    I have set up an availability group between two SQL 2012 boxes on my Windows cluster. They are failing over fine and synchronizing as expected.
    I want to add an additional read-only replica on my SQL 2014 box, which is also part of the Windows cluster (so I can use this replica for reporting with the 2014 In-Memory features); failover stays between the two 2012 boxes.
    The problem is that even though I have set this replica to read-only, the database is always in "Synchronized / In Recovery", so I can never connect to it.
    Is this scenario possible?

    Thanks, David.
    Yes, the whole point of this was to get access to real-time data from within 2014, but the more I think about it, a linked server still would not give me what I want, as I believe In-Memory is defined at table creation level.
    With the tables being created in 2012, that would be a problem.
    Thanks for the info mate.
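
    For reference, on a same-version availability group a readable secondary is configured per replica; below is a minimal sketch run against the primary, with hypothetical names (availability group AG1, replica SQL2014NODE, primary SQL2012NODE1, database ReportsDB). As observed above, a 2014 secondary of a 2012 primary stays in recovery and cannot be read regardless of this setting.
    rem mark the secondary replica as readable (hypothetical names)
    sqlcmd -S SQL2012NODE1 -Q "ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQL2014NODE' WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));"
    rem read-intent clients then connect with ApplicationIntent=ReadOnly, e.g.:
    sqlcmd -S SQL2014NODE -d ReportsDB -K ReadOnly -Q "SELECT COUNT(*) FROM sys.tables;"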

  • Multiple RO processes accessing replicated client?

    Hi. I have a question about using replication. (Under Linux if it makes any difference).
    Is it possible to have multiple processes sharing a set of replicated db files? I would only have one process updating the environment, and only to the extent of running the replication manager
    to get updates from the master; we would never have the clients write anything out. The other processes would only open the files read only. The docs are a bit unclear on the subject and as to whether this is a supported configuration.

    Hey Matthew,
    That configuration is supported. The only constraint on multi-process access relates to processing incoming messages (which the replication manager takes care of). As long as only one process calls DB_ENV->repmgr_start, other processes can open the environment for read-only access.
    Michael.
    P.S. For future reference, we've got a separate forum for Berkeley DB HA:
    Berkeley DB High Availability (Replication)

  • Storage Replica versus Robocopy: Fight!

    Storage Replica versus Robocopy: Fight! I've used Robocopy for so many years that this blog post really caught my eye. Surely Robocopy could not be beaten doing file copies? Oh dear, it looks as though we have a new Sheriff in town. These copy tests put both systems under various workloads. [Originally posted by Ned Pyle] Hi folks, Ned here again. While we designed Storage Replica in Windows Server 2016 for synchronous, zero data-loss protection, it also offers a tantalizing option: extreme data mover. Today I compare Storage Replica's performance with Robocopy and demonstrate using SR for more than just disaster planning peace of mind. Oh, and you may accidentally learn about Perfmon data collector sets. In this corner: Robocopy has been around for decades and is certainly the most advanced file copy utility shipped in Windows. Unlike the...
    This topic first appeared in the Spiceworks Community

    Hi,
    Since you deleted the existing replication group at step 5, step 6 will not affect the existing DFSR database.
    When you create a new replication group, it will do an initial sync between the UsersA-C shares on server1 and the new SAN-mounted drive on the replica site server.
    After step 9, I think two replication groups are needed between server1 and the replica site server, to replicate the UsersA-C shares and the UsersD-F shares. You could set the replica site server as the primary member in the replication group. It will be considered the authoritative member and will win out during the initial replication. This will overwrite the current replicated folder content on the non-primary member.
    You could try a command to set another server as primary:
    Dfsradmin Membership Set /RGName:<RG Name> /RFName:<RF Name> /MemName:<Member Name> /IsPrimary:True
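    For example, with hypothetical names for the replication group, replicated folder and member (note that the primary-member flag only matters for the initial replication pass):
    Dfsradmin Membership Set /RGName:"UsersA-C RG" /RFName:"UsersA-C" /MemName:REPLICASRV01 /IsPrimary:True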
    Best Regards,
    Mandy

  • Replicated http sessions : classcastexception

    has anyone else seen this with wls6.1 sp1 when trying to run in memory replicated http session?
    -peter
    <Oct 18, 2001 12:01:26 PM EDT> <Error> <HTTP> <[WebAppServletContext(7569280,v21,/v21)] Servlet failed with Exception
    java.lang.ClassCastException: weblogic.servlet.internal.session.MemorySessionContext
    Start server side stack trace:
    java.lang.ClassCastException: weblogic.servlet.internal.session.MemorySessionContext
    at weblogic.servlet.internal.session.SessionData.getContext(SessionData.java:270)
    at weblogic.servlet.internal.session.ReplicatedSessionData.becomeSecondary(ReplicatedSessionData.java:178)
    at weblogic.cluster.replication.WrappedRO.<init>(WrappedRO.java:34)
    at weblogic.cluster.replication.ReplicationManager$wroManager.create(ReplicationManager.java:352)
    at weblogic.cluster.replication.ReplicationManager.create(ReplicationManager.java:1073)
    at weblogic.cluster.replication.ReplicationManager_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:296)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:265)
    at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:22)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    End server side stack trace
    at weblogic.rmi.internal.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:85)
    at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:134)
    at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java:35)
    at $Proxy88.create(Unknown Source)
    at weblogic.cluster.replication.ReplicationManager.trySecondary(ReplicationManager.java:870)
    at weblogic.cluster.replication.ReplicationManager.createSecondary(ReplicationManager.java:825)
    at weblogic.cluster.replication.ReplicationManager.register(ReplicationManager.java:393)
    at weblogic.servlet.internal.session.ReplicatedSessionData.<init>(ReplicatedSessionData.java:119)
    at weblogic.servlet.internal.session.ReplicatedSessionContext.getNewSession(ReplicatedSessionContext.java:193)
    at weblogic.servlet.internal.ServletRequestImpl.getNewSession(ServletRequestImpl.java:1948)
    at weblogic.servlet.internal.ServletRequestImpl.getSession(ServletRequestImpl.java:1729)
    at jsp_servlet.__login._jspService(__login.java)
    at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
    at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:2456)
    at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2039)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
    >

    When do you see this? While shutting down one of the servers?
    Make sure your clustered servers are uniform, i.e. the webapp you have deployed is deployed on all servers and PersistentType is set to "replicated" on all servers.
    --Vinod.

  • Windows VPN clients can't use network servers after 10.5.1 upgrade

    We have two Xserves, both formerly running 10.4.11. One is the OD master, the other a replica. The replica is also the VPN server, and is a DHCP server for the small number of IP addresses reserved for VPN clients.
    The OD master upgrade went fine. I completely reinstalled the OD replica, set the replica up again, and set up the VPN server. It supports L2TP/IPsec connections only.
    After the upgrade, Mac users running Tiger or Leopard can connect to the VPN server and connect to network services without any problems. Windows users can connect, but cannot actually USE anything on my office network. For example, if you try to connect to a web server either by fully qualified domain name or by hostname, the connection from the browser simply times out.
    In the Windows command line I can verify that I have an active connection by pinging and using the tracert command (equivalent of traceroute on UNIX). Hostname resolution works, too. But nothing happens when you try to open a web browser, which is mostly what my users need to do.
    It doesn't matter whether you're logging in with an OD user account or a local account defined solely on the VPN server. Same behavior in Windows.
    I had to take an older XServe running 10.4.11 out of our data center, move it to the office, and set it up on the same external network connection. 10.4.11 server works, 10.5.1 doesn't, from the same Windows client, set up exactly the same way.
    I've been through the hoops with Apple Enterprise support, who now tell me that Engineering kicked it back to them and told them they'd charge me $695 to get it fixed, because it's ostensibly custom configuration work. If that's true, why is Windows XP listed under L2TP/IPSec support on page 127 of the Leopard Network Services Admin guide? I don't want a custom fix, I just want it to work the way it's supposed to work. Or I want Apple to retract the claim that OS X Server is the best workgroup server solution for Macs and Windows.
    Anyone else encounter this problem or know of a fix?

    Had the same problems; they started after I tried out the firewall in Leopard Server.
    It seems that not all settings are reset even after turning the firewall off.
    To reset the firewall to its default settings:
    1 Disconnect the server from the Internet.
    2 Restart the server in single-user mode by holding down the Command-S keys during startup.
    3 Remove or rename the address groups file found at /etc/ipfilter/ipaddressgroups.plist.
    4 Remove or rename the ipfw configuration file found at /etc/ipfilter/ipfw.conf.
    5 Force-flush the firewall rules by entering the following in Terminal:
    $ ipfw -f flush
    6 Edit the /etc/hostconfig file and set IPFILTER=-YES-.
    7 Complete the startup sequence to the login window by entering exit.
    The computer starts up with the default firewall rules and the firewall enabled. Use Server Admin to refine the firewall configuration.
    8 Log in to your server's local administrator account to confirm that the firewall is restored to its default configuration.
    9 Reconnect your host to the Internet.
    This solved the problem for me...
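    Steps 3 to 5 condensed into commands (a sketch using the file names quoted above; the files are renamed rather than deleted so they can be restored later):
    mount -uw /                            # make the root filesystem writable in single-user mode
    mv /etc/ipfilter/ipaddressgroups.plist /etc/ipfilter/ipaddressgroups.plist.bak
    mv /etc/ipfilter/ipfw.conf /etc/ipfilter/ipfw.conf.bak
    ipfw -f flush                          # flush all firewall rules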

  • Can't bind server to OD, replication broken, users at some sites can't auth

    Hi all,
    Having a doozy of a problem with our OD at the moment, hopefully someone can help
    The setup:
    1 OD master and 3 replicas at head office here, all running 10.5.5 (issue also occurred in 10.5.4)
    Around a dozen remote AFP & SMB file servers, all are setup as "Connected to a directory system"
    Most sites are OK, but we have issues at 2 sites.
    Setting each site to "Connected to a directory system" fails at the Directory Utility stage. We try to add the master (or even one of the replicas), enter the diradmin name and password, and attempt to bind, but it responds with an error after a while.
    The error states that there is already a computer with this name, and prompts to overwrite. Overwriting fails also.
    I did a search and found this:
    http://support.apple.com/kb/TS1245
    and this:
    http://forums.bombich.com/viewtopic.php?t=11834&highlight=lkdc
    But neither tips help
    Attempting to set up the 2 remote servers as replicas stops at the "Enabling Password Server Replication" stage. I can close out of the assistant at this stage and am left with a "broken" replica, which has 2 out of 3 things running:
    LDAP Server: Running
    Password Server: Stopped
    Kerberos: Running
    On the master it says "Password Service Not Found"
    So it seems that setting up replicas on a different subnet doesn't work.
    I've tried getting my ISP to set up a static route for the VPN tunnel, and this worked for a set of blank test servers with no extra users added to them.
    Interestingly, setting a user's password to "Crypt" in WGM allows them to authenticate to the "broken replica" and access their files. Setting their password type to "Open Directory" has no luck at all.
    If I jump on the server and try a password check in the terminal, I get:
    AFP:
    dirt -u username -p password
    Call to dsGetRecordList returned count = 1 with Status : eDSNoErr : (0)
    Call to checkpw(): Bad Password
    path: /LDAPv3/10.10.20.1
    Username: username
    Password: password
    Error : eDSAuthFailed : (-14090)
    SMB:
    dirt -a nt -u username -p password
    Call to dsGetRecordList returned count = 1 with Status : eDSNoErr : (0)
    path: /LDAPv3/10.10.20.1
    Username: username
    Password: password
    Good
    The master's IP is 10.10.20.1.
    Users can sometimes connect via SMB instead of AFP, which is a workaround for now, but I'd like to know why this is happening.
    I've tried setting the AFP server's authentication methods to Standard instead of Any or Kerberos, to no effect.
    Does anyone know why these servers won't bind and won't replicate, and only allow connections if people use Crypt passwords? Is my Kerberos stuffed?

    Well, I tried to demote, then promote my master, with no luck. During the upgrade from Tiger to Leopard, the password service cache (or whatever it's called) got corrupted. My OD archive failed to restore!! I had to rebuild my entire OD from scratch.
    Now I have slightly fewer errors. My replicas joined up fine. Kerberos passwords get propagated when a password changes, but Samba passwords do not. Samba working is essential as 95% of the client machines are Windows boxes. Here are a few quick tests I did:
    replica1 root# dirt -m /LDAPv3/127.0.0.1 -u fred
    User password:
    2009-01-15 17:13:16.919 dirt[4224:10b] password is : <password>
    Call to dsGetRecordList returned count = 1 with Status : eDSNoErr : (0)
    Username: fred
    Password: <password>
    Success
    replica1 root# dirt -m /LDAPv3/127.0.0.1 -a nt -u fred
    User password:
    2009-01-15 17:13:23.160 dirt[4233:10b] password is : <password>
    Call to dsGetRecordList returned count = 1 with Status : eDSNoErr : (0)
    Username: fred
    Password: <password>
    Error : eDSAuthFailed : (-14090)
    On the master this all works fine.
    The funny thing is that my diradmin account has no problem on the replica.

  • Mac Mini running at 100C CPU temperature after receiving it back from service

    Hello,
    I just received back my Mac-mini from Apple service.
    As other users reported, the fan was going mad, running at full speed all the time. By using: http://www.bresink.de/osx/TemperatureMonitor.html
    I could see that the temperature-sensor connected to the heat-sink did not show up in the list of sensors. Service replaced the main board, and now the heat-sink sensor shows reasonable readings, and the fan is running silent, at least with no work load.
    However, while performing a little web browsing, I found that the fan still goes wild.
    What I noticed: the "TemperatureMonitor" utility reports that the CPU core temperature goes up beyond 100C/212F, even under mild work load, while the CPU heat sink is still reported at moderate temperatures around 35C/95F!? Environment temperature is below 30C.
    Before I give away my Mac for another week or two, I would like to check whether this is usual behavior. Other users report a max CPU temperature of about 80-90C. Intel specifies a max core temperature of 100C. According to the built-in digital thermometer, my Mac sometimes exceeds this max spec from Intel.
    Is this usual behavior, or did Apple service not properly install the heat sink, such that the CPU is only running with limited cooling?
    Thank you for your suggestions
    Detlef
    Mac Mini 1.5GHz core solo   Mac OS X (10.4.9)  

    Only an internal investigation would show whether the heatsink was correctly reinstalled, but quite clearly, since the temperature being reported is at the operational limit specified for the processor, what you are experiencing is not in any way 'usual'. There are sites with content such as Flash animations which can cause high CPU load, and thus push system temperatures up and cause the fan to spin up, but not for prolonged periods of course.
    Since the fan is running up to full speed on your mini as a result of the temperature rising, it is clearly not an issue with Mac OS or the fan control cable. As such, I would urge you to return your system to the service provider with a description of the problem and a list of website addresses you have visited which cause these problems, so they can set about replicating the issue.

  • Segmentation fault when enabling replication with SQL API

    Hi,
    I've compiled BDB 5.3.21 on Ubuntu 11.04 (x86) with the following configure options:
    ../dist/configure --enable-sql --enable-sql_compat --enable-debug --enable-tcl --with-tcl=/usr/lib
    I was able to follow the Replication Usage Examples given in "Getting Started with the SQL APIs" to set up a set of replicated master/client databases.
    However, after I exited out of the dbsql session that started replication on the master database and re-opened the master database with dbsql, executing "pragma replication_initial_master=ON;" followed by "pragma replication=ON;" led to a segmentation fault. gdb showed that the segmentation fault occurred at:
    dbsql> pragma replication=ON;
    Program received signal SIGSEGV, Segmentation fault.
    0x0032d42b in __env_ref_get (dbenv=0x8056ad8, countp=0xbfffd498)
    at ../src/env/env_region.c:772
    772          renv = infop->primary;
    (gdb) list
    767          REGENV *renv;
    768          REGINFO *infop;
    769     
    770          env = dbenv->env;
    771          infop = env->reginfo;
    772          renv = infop->primary;
    773          *countp = renv->refcnt;
    774          return (0);
    775     }
    776     
    (gdb)
    Does anybody know of a solution to this or could this be a bug? Thanks in advance.
    P.S. here's a stack trace:
    (gdb) bt
    #0 0x0032d42b in __env_ref_get (dbenv=0x8056ad8, countp=0xbfffd498)
    at ../src/env/env_region.c:772
    #1 0x001786fc in hasDatabaseConnections (p=0x8056708)
    at ../lang/sql/generated/sqlite3.c:44420
    #2 0x00178a11 in bdbsqlPragmaStartReplication (pParse=0x80648e0,
    pDb=0x80561cc) at ../lang/sql/generated/sqlite3.c:44533
    #3 0x001797f5 in bdbsqlPragma (pParse=0x80648e0,
    zLeft=0x8062e20 "replication", zRight=0x8062dc0 "ON", iDb=0)
    at ../lang/sql/generated/sqlite3.c:44812
    #4 0x001c215d in sqlite3Pragma (pParse=0x80648e0, pId1=0x8064b60,
    pId2=0x8064b70, pValue=0x8064b90, minusFlag=0)
    at ../lang/sql/generated/sqlite3.c:78941
    #5 0x001e5c83 in yy_reduce (yypParser=0x8064b20, yyruleno=256)
    at ../lang/sql/generated/sqlite3.c:96668
    #6 0x001e6761 in sqlite3Parser (yyp=0x8064b20, yymajor=1, yyminor=...,
    pParse=0x80648e0) at ../lang/sql/generated/sqlite3.c:97051
    #7 0x001e7537 in sqlite3RunParser (pParse=0x80648e0,
    zSql=0x80648b8 "pragma replication=ON;", pzErrMsg=0xbfffdba0)
    at ../lang/sql/generated/sqlite3.c:97877
    #8 0x001c730e in sqlite3Prepare (db=0x8056010,
    zSql=0x80648b8 "pragma replication=ON;", nBytes=-1, saveSqlFlag=1,
    ---Type <return> to continue, or q <return> to quit---
    pReprepare=0x0, ppStmt=0xbfffdc8c, pzTail=0xbfffdc88)
    at ../lang/sql/generated/sqlite3.c:80736
    #9 0x001c7739 in sqlite3LockAndPrepare (db=0x8056010,
    zSql=0x80648b8 "pragma replication=ON;", nBytes=-1, saveSqlFlag=1,
    pOld=0x0, ppStmt=0xbfffdc8c, pzTail=0xbfffdc88)
    at ../lang/sql/generated/sqlite3.c:80828
    #10 0x001c7a5e in sqlite3_prepare_v2 (db=0x8056010,
    zSql=0x80648b8 "pragma replication=ON;", nBytes=-1, ppStmt=0xbfffdc8c,
    pzTail=0xbfffdc88) at ../lang/sql/generated/sqlite3.c:80903
    #11 0x0804baf6 in shell_exec (db=0x8056010,
    zSql=0x80648b8 "pragma replication=ON;",
    xCallback=0x804a3c4 <shell_callback>, pArg=0xbfffde14, pzErrMsg=0xbfffdcfc)
    at ../lang/sql/sqlite/src/shell.c:1092
    #12 0x0805030e in process_input (p=0xbfffde14, in=0x0)
    at ../lang/sql/sqlite/src/shell.c:2515
    #13 0x08051453 in main (argc=2, argv=0xbffff3f4)
    at ../lang/sql/sqlite/src/shell.c:2946
    -Irving

    Thank you for reporting this. I have been able to reproduce this crash in-house.
    You need to specify the replication_initial_master and replication pragmas as part of initially creating your SQL database. The replication_initial_master pragma is not used or needed after the initial database creation at the initial master site. The replication=on pragma is a persistent setting that we remember internally the first time you specify it. This means that after specifying replication=on for the first time on a site, we will automatically restart replication for you in future dbsql sessions when we open the underlying Berkeley DB environment.
    This gives you a very simple workaround: don't respecify replication=on when you reenter dbsql.
    Of course, we shouldn't be crashing on an unnecessary pragma. I have added this to our list of fixes to consider for the future.
    Thanks,
    Paula Bingham
    Oracle
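    In other words, the replication pragmas belong only in the very first dbsql session that creates the database; a rough sketch is shown below (the local-site address is a placeholder, and the replication_local_site pragma is assumed to be set up as in the Getting Started replication examples):
    # first (creating) session on the master -- these settings persist in the environment
    dbsql master.db
    dbsql> pragma replication_local_site="masterhost:7000";
    dbsql> pragma replication_initial_master=ON;
    dbsql> pragma replication=ON;
    dbsql> create table example(id int);
    dbsql> .exit
    # later sessions: just open the database; replication restarts automatically
    dbsql master.db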

  • TS3180 Line 7. There is no list to select from. You can create a New one.

    I'm having problems all over Lion Server. I can't manage Profile Manager on client machines... nothing authenticates.
    So I thought I would rebuild the Open Directory replica functions. However, in recreating an Open Directory replica I get an error:
    "Cannot replicate a directory with augment user records.
    Your server cannot become a replica of 'server.com' because its directory contains augment user records. Please refer to the Open Directory Administration Guide for more information about this issue."
    How do I get rid of the augment user records?
    By the way, I set this server up per the Lynda.com Lion Server essential training. Part of the server functions, with File Sharing and users accessing folders, but Profile Manager does not authenticate on client machines.

    Did you find a solution for this? I am able to set up a replica inside the local network, but from the outside I get the augment record message.
    I had this server as a replica before so the ports are open on my router.

  • Loss of Raid metadb after reboot

    Hi,
    I'm building a new Solaris 10 server and am attempting to create a simple RAID 1 array, with little success. I've read and re-read everything available and believe that I am doing everything correctly; however, I completely lose my metadevice state database, and hence my RAID configuration, when I reboot the system.
    I create a number of state databases on a spare slice and a simple RAID 1 array for a drive that I can unmount. The mirror works fine, everything works perfectly well and reports that it is in a good state. When I reboot the system, the mount fails and, on checking, there are no metadbs and no pseudo md devices on the system.
    Does anybody know any reason why my state databases are not persistent?
    Following are the commands that I run for this simple example:
    # /opt/orcreleases device d30 (disk 2)
    # ascii name = <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    # pcyl = 14089
    # ncyl = 14087
    # acyl = 2
    # nhead = 24
    # nsect = 424
    # Part Tag Flag Cylinders Size Blocks
    # 0 root wm 0 - 25 129.19MB (26/0/0) 264576
    # 1 swap wu 26 - 51 129.19MB (26/0/0) 264576
    # 2 backup wu 0 - 14086 68.35GB (14087/0/0) 143349312
    # 3 unassigned wm 52 - 71 99.38MB (20/0/0) 203520
    # 4 unassigned wm 72 - 91 99.38MB (20/0/0) 203520
    # 5 unassigned wu 0 0 (0/0/0) 0
    # 6 usr wm 92 - 14086 67.91GB (13995/0/0) 142413120
    # 7 unassigned wu 0 0 (0/0/0) 0
    echo "Ensure that mirror disks have same format as real disks"
    prtvtoc /dev/rdsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2
    echo "Adding state database replicas"
    metadb -a -c2 /dev/dsk/c1t2d0s3 /dev/dsk/c1t3d0s3
    metadb -a -c2 /dev/dsk/c1t2d0s4 /dev/dsk/c1t3d0s4
    echo "Creating volumes for /dev/dsk/c1t2d0s6 /opt/releases disk"
    metainit -f d36 1 1 c1t2d0s6
    metainit d46 1 1 c1t3d0s6
    metainit d8 -m d36
    echo "One-way mirror created"
    echo "change /etc/vfstab to mount /dev/md/dsk/d76"
    echo "mount /opt/releases"
    metattach d8 d46
    echo "Two-way mirror created"

    Hi,
    Thanks for the reply.
    I should have replied to my own thread really when I worked out what was happening. The code excerpt was slightly misleading in that I had already created another set of replicas on another disk. They were all created and the array was in a healthy state before the reboot. The problem was actually due to a known Solaris bug where the replicas are lost on a reboot if there are 8 or more state replicas defined.
    I simply reduced the number of replicas to 4 and everything works fine.
    Thanks,
    Pete
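    For reference, trimming the replica count comes down to deleting some of them and re-checking (a sketch against the slices used in the script above):
    metadb -d /dev/dsk/c1t2d0s4 /dev/dsk/c1t3d0s4   # drop the extra replicas, leaving 4 on slice 3
    metadb -i                                       # verify the remaining state database replicas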

  • WL 7.0 sp4 session replication problem

    This is the configuration:
    * BEA WebLogic Server 7.0 SP4
    * Domain: mydomain
    * Machines:
    machine1 (Windows 2003)
    machine2 (Windows 2000)
    * Admin Server:
    myserver (on "machine1")
    * Managed Servers:
    server1 (on "machine1"), replication group: groupA, preferred secondary replication group: groupB
    server2 (on "machine2"), replication group: groupB, preferred secondary replication group: groupA
    proxy (on "machine2")
    * Cluster:
    cluster-1 (contains "server1" and "server2")
    I've deployed an application (one EJB and some JSPs, just for testing purposes) to "cluster-1".
    I also deployed a "weblogic.servlet.proxy.HttpClusterServlet" application to "proxy", which is configured to connect to "server1" and "server2".
    Now I open a browser, browse to "proxy", and I can see the result from "server1" or "server2".
    But when I shut down the server my current session is on, I am redirected to another server but the session is not replicated.
              

    Did you set the replicated option for your webapp? Did you change the cookie name by any chance?
    sree
              "patrick" <[email protected]> wrote in message news:40e53734$1@mktnews1...
              >
              > These are the configuration:
              >
              > * BEA WebLogic Server 7.0 SP4
              > * Domain : mydomain
              > * Machine:
              > machine1 (Windows 2003)
              > machine2 (Windows 2000)
              > * Admin Server :
              > myserver (on "machine1")
              > * Managed Server :
              > server1 (on "machine1") , replication group: groupA , prefered
              secondary replication
              > group: groupB
              > server2 (on "machine2") , replication group: groupB , prefered
              secondary replication
              > group: groupA
              > proxy (on "machine2")
              > * Cluster :
              > cluster-1 (contains "server1" and "server2")
              >
              > I've deployed an application (one EJB and some JSP, just for testing
              purpose)
              > to "cluster-1".
              > And I also deployed a "weblogic.servlet.proxy.HttpClusterServlet"
              application
              > to "proxy",
              > which is configured to connect "server1" and "server2".
              >
              > Now I open a browser and browse to "proxy" and I can see th result from
              "server1"
              > or "server2".
              > But when I shutdown the server which my current session is on, I was
              redirected
              > to another server
              > but the session is not replicated.
              

  • Export still valid after approving updates

    Hi,
    I am currently in the process of setting up replica servers for each site. I do this with an export of the content and the database.
    Yesterday I approved 5 new updates on my WSUS master server.
    My question is: can I still use that content + DB, or do I have to make a new export from the master?
    Thanks in advance,
    Kr,
    Joeri

    Hi Joeri,
    >>Is the export made on the master before the approvals still valid if I import it on the new replica server?
    The metadata will be overridden at synchronization, so the export/import of the metadata is not really necessary. But copying the content folder will reduce the download time.
    >>Will the new replica server download the missing files then?
    Normally, when an update is approved, the downstream server will download it from the upstream server.
    Best Regards.
    Steven Lee
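    For reference, the export/import step Joeri describes is normally done with wsusutil on the master and on the new replica (a sketch; file names are placeholders):
    rem on the master, run from the WSUS Tools folder
    wsusutil.exe export export.xml.gz export.log
    rem copy export.xml.gz and the WsusContent folder to the new replica, then:
    wsusutil.exe import export.xml.gz import.log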

  • Berkeley DB JE and Sharding

    Hello.
    Does BDB JE support sharding? In an HA setup I would not want all replicas to hold the same data, but rather just part of the records, to improve read operations.
    Thanks.

    Hi,
    No, JE does not support sharding. Replication is intended for failover as well as load balancing, and of course failover requires that the complete data set is replicated. Oracle NoSQL Database, which is built on top of JE, does support sharding. See:
    http://www.oracle.com/technetwork/products/nosqldb/overview/index.html
    --mark

Maybe you are looking for

  • Post install checks failed for DBC file - Oracle App11.5.10.2 Linux  5.3

    Hi everybody, I tried Installing Oracle E-Business Suite 11.5.10.2 on Red Hat Enterprise Linux Server release 5.3 The Post install checks failed for DBC file, HTTP, JSP and PHP. Apparently the DBC hadn’t been created. I had verified the log file, whi

  • Netgear DG834G Problem connecting iMAC G5

    Hi, Wondered if anyone could assist me in this. Netgear Router DG834G / 3 Pcs all set up on Network / added a iMac G5 (Intel/10.4.4) at w/e but with problems - mainly it says it is connected to Netgear but will there is no connection to the internet

  • Numbers: sorting by results

    i have a list, part of which are figures put in manually, others that are the result of a calculation. To sort that column, what I normally did in Excel was to export as txt file, then reimport, and then do the recalculation. What's the way to do it

  • OPC datasocket write problem

    I have a main while loop (10sec timer)with heavy image analysis, and another while loop(0.1sec timer) that simply writes the resulting single number from the first loop into a OPC server via datasocket write function. While it does work, the resultin

  • U310 jumpy touchpad with pluggen ac\dc adapter

    u310 jumpy touchpad with pluggen ac\dc adapter - impossible to work with notebook I have this very strange problem. Ever since getting a replacement AC adapter for my laptop, the mouse has started going all crazy. If the adapter is unplugged, the mou