DS plugin password sync on read-only replica

I am running Directory Server 6.3 as a master with 2 read-only consumers (one is DS 6.3, one is DS 5.2). Replication to both consumers is working.
On the master server, I have ISW for Windows 6.0.
I am able to sync passwords from AD to the master DS; that works fine. When a user changes their password in AD and then authenticates against the master DS server, the new password is pulled from AD.
The trouble starts when a password is changed in AD and the user tries to authenticate against a read-only replica: they get an authentication error. I believe I have the DS plugin installed on both of my read-only servers, but I cannot figure out why they won't send the request to the master server to fetch the password from AD. pswvalidate does get set to True.
My master server has 2 interfaces (master-if1 and master-if2), and the 6.3 read-only replica can only reach master-if2.
# ./idsync printstat -D "cn=Directory Manager" -w secret -s dc=mycompany,dc=org -q anothersecret
Exploring status of connectors, please wait...
Connector ID: CNN100
Type: Sun Java(TM) System Directory
Manages: dc=mycompany,dc=org (ldap://master-if1.mycompany.org:389) (ldap://master-if2.mycompany.org:389)
State: SYNCING
Installed on: master-if.mycompany.org
Plugin SUBC100 is installed on ldap://master-if1.mycompany.org:389
Plugin SUBC101 is installed on ldap://consumer6.3.mycompany.org:389
Connector ID: CNN101
Type: Active Directory
Manages: mcnc.org (ldap://ad1.mycompany.org:389) (ldap://ad2.mycompany.org:389)
State: SYNCING
Installed on: master.mycompany.org
Sun Java(TM) System Message Queue Status: Started
Checking the System Manager status over the Sun Java(TM) System Message Queue.
System Manager Status: Started
SUCCESS
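To double-check the consumer side, this is the search I run on each read-only server; the base DN and filter are guesses on my part, since I am not sure which cn the ISW plugin registers under:

    ldapsearch -h consumer6.3.mycompany.org -p 389 \
      -D "cn=Directory Manager" -w secret \
      -b "cn=plugins,cn=config" -s sub "(objectclass=nsSlapdPlugin)" \
      cn nsslapd-pluginEnabled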
I see no errors in any log files I can find. I can provide more information if needed.
Thanks, Carole

Does anyone have experience with an environment that combines a master DS (6.3) server, read-only consumer servers, and Identity Synchronization for Windows?
I am not able to get authentication queries against my read-only consumers to be forwarded to the master DS, which in turn pulls the new password from AD.
Thanks, Carole

Similar Messages

  • Linked Server from SQL 2008 to Connect to 2012 read only replica never works

    I have two Production Database Servers
    1. SQLServer2008 (2 Nodes Cluster)
    2. SQLServer2012 with 2 read only replica (3 Nodes Cluster)
    To be clear up front: we have the routing table and routing URLs working perfectly.
    We have tested a linked server from the 2012 box to the production server by specifying ApplicationIntent=ReadOnly; it works perfectly and the routing is used.
    The problem appears when we create a linked server from the SQL Server 2008 box (note: we have installed the SQL Server 2012 client tools on this box and restarted the server) using the script below:
    USE [master]
    GO
    EXEC master.dbo.sp_dropserver @server = N'AGL1', @droplogins = 'droplogins'
    GO
    EXEC master.dbo.sp_addlinkedserver
        @server = N'AGL1',
        @datasrc = 'AGL1',
        @provider = 'SQLNCLI11',
        @provstr = 'ApplicationIntent=ReadOnly;Database=AdventureWorks2012'
    GO
    The linked server is created. Now, when I run the query
    exec ('select @@servername') at AGL1
    it always returns the primary READ/WRITE node name. After a lot of research I found that this linked server always uses SQL Native Client 10.0, even though it was created with SNC 11; that is why the routing table is never used and it always connects to the primary node.
    Here is how I found that out; on the 2012 production server I executed the query below:
    SELECT session_id, protocol_type,
           driver_version =
               CASE SUBSTRING(CAST(protocol_version AS BINARY(4)), 1, 1)
                   WHEN 0x70 THEN 'SQL Server 7.0'
                   WHEN 0x71 THEN 'SQL Server 2000'
                   WHEN 0x72 THEN 'SQL Server 2005'
                   WHEN 0x73 THEN 'SQL Server 2008'
                   ELSE 'SQL Server 2012'
               END,
           client_net_address, client_tcp_port, local_tcp_port, T.text
    FROM sys.dm_exec_connections
    CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle) AS T
    The query above is adapted from MSDN (link below):
    https://msdn.microsoft.com/en-us/library/dd339982.aspx
    With it, I found the linked server always uses the SQL Server 2008 SNC.
    My question is: is there a way to force SQL Server to use SQL Server Native Client 11?
    Has anyone tried this setup?
    Thank you in advance.

    Unfortunately, no; there is no way to force it without a restart. The SQL Server stack has no idea the new Native Client exists, because it booted before the client was installed, and you cannot force a "DLL reload" without a proper service restart.
    Ivan Donev MCT and MCSE Data Platform
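    A small follow-up check on the 2008 box that the linked server definition itself references SQLNCLI11 (a standard catalog view; which client DLL actually loads still depends on the restart, as noted above):

        -- Confirm the linked server definition references SQLNCLI11:
        SELECT name, provider, data_source, provider_string
        FROM sys.servers
        WHERE is_linked = 1 AND name = N'AGL1';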

  • How to unlock account on read only replica (DS 5.2 p4)

    We are planning to turn on a password policy that locks an account after a user fails to provide the correct password n times; the account then stays locked until an administrator resets the password retry count.
    We implemented the password policy with a role and CoS so that the policy applies only to end users, not administrators. The password policy works fine.
    We understand that for DS 5.2 p4, the password retry count is per instance, so the account lockout is per instance. The problem we encountered is with account unlocking. We developed a function that resets the password retry count in order to unlock the account, and it works fine in our test environment. However, in production we have 2 masters and 4 replicas; our replicas are all read-only, and all updates are referred to the 2 masters. When we set the password retry count to 0 on a master, the reset is not propagated to the replicas, and when we try to update a replica directly, the update gets referred to a master, so the attribute on the replica remains unchanged.
    Is there a way to unlock an account that got locked at a read-only replica?

    As I recall, the way to unlock an account is to reset the password using an admin account; I think Ludovic once mentioned that in one of his posts.
    If I am right on this point, you should reset the password; the reset will be replicated to your read-only replicas and set the retry counter back to 0.
    BTW, just found Ludovic's original post link:
    http://forum.java.sun.com/thread.jspa?forumID=761&threadID=5159009
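    For example, a hedged sketch of such an administrative reset against one of the masters (the host name and user DN below are made up; adjust them to your topology):

        # reset.ldif (hypothetical entry DN):
        dn: uid=lockeduser,ou=People,o=example
        changetype: modify
        replace: userPassword
        userPassword: NewTempPassw0rd

        ldapmodify -h master1.example.com -p 389 \
          -D "cn=Directory Manager" -w secret -f reset.ldif

    Because this is an administrative reset on a master, the new password should replicate to the read-only replicas and clear the retry count there as well.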

  • When users authenticate to a read-only replica [Identity Synchronization]

    Hello,
    I have 2 sites: F and L. Each site has an AD and an LDAP server. The ADs are replicated, and the LDAPs are replicated as well; each one is the slave of the other. Idsync is installed on each site too.
    All user-facing servers are located in F, so when a user authenticates for the first time, or after a password change, he is challenged by the LDAP server in F, which is read-only (a slave). The user gets an invalid-password error.
    Whereas if I do, for example, an ldapsearch with authentication against the LDAP server in L (this can't be done for regular users), the Windows password gets updated in the LDAP in L and then in the LDAP in F (since F is a slave).
    Do you have a solution for that?
    thx

    hi,
    replication is working between master and consumer:
    If I change an attribute in LDAP A for a user in site A, the attribute is replicated to the LDAP in site B.
    If I change an attribute in LDAP B for a user in site B, the attribute is replicated to the LDAP in site A.
    If I change an attribute in LDAP A for a user in site B, I get an error that this is a read-only replica. OK.
    If I change an attribute in LDAP B for a user in site A, I get an error that this is a read-only replica. OK.
    The password is updated on the consumer following a password change on the master.
    So where is the problem? When a user in site A wants to change his password, the password is updated only in AD. The LDAP server in site A (and IdSync) will not be aware of this change, because users in site A log in to servers (LDAP clients) in site B, and those servers are configured against the LDAP server in site B. The LDAP server in site B is a slave for the subtree of site-A users, so it stores whatever password is in the LDAP in site A, i.e. an invalid password.
    I imagine a solution where the servers (LDAP clients) are configured with both LDAP servers, so that when a user from site A logs in, the LDAP client challenges the LDAP server in site A (see the sketch below). Is this feasible?
    any other solution?
    thank you,
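    If it helps, here is a minimal sketch of what I mean, assuming an nss_ldap/pam_ldap style client (the file path and host names are hypothetical):

        # /etc/ldap.conf on an LDAP client in site A: list the site-A
        # directory first and the site-B replica second; the client tries
        # them in order, so site-A users reach the server that is
        # writable for their subtree.
        uri ldap://ldap-siteA.example.com ldap://ldap-siteB.example.com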

  • SQL Server 2012 - What Is The Best Solution For Creating a Read Only Replicated/AlwaysOn Database

    Hi there, I was wondering if someone has a recommendation for the following requirement regarding setting up a third database server for reporting.
    Current Setup
    SQL Server 2012 Enterprise setup at two sites (Site A & Site B).
    Configured to use AlwaysOn Availability groups for HA and DR.
    Installed on Windows 2012 Servers.
    This is all working; failover works fine with no issues. So…
    Requirement
    A third server needs to be added for the purpose of reporting, to be located on another site (Site C), possibly in another domain. This server needs to have a replicated read-only copy of the live database from Site A or Site B, whichever is in use. The Site C reporting database should be as up to date with the Site A or Site B database as possible, preferably within a few seconds.
    Solution - What I believe is available to me
    I believe I can use AlwaysOn and create a read-only replica for Site C. If so, do I assume Site C needs to have the Enterprise edition of SQL Server, i.e. to match Site A & Site B?
    Using log shipping, which, if I am correct, means Site C does not need an Enterprise edition.
    Any help on the best solution for this would be greatly appreciated.
    Thanks, Steve

    For AlwaysOn, all nodes should be part of one Windows cluster; if Site C is in a different domain, I do not think it works.
    Log shipping works as long as the SQL instance on Site C is the same or a higher version (SQL 2012 or above), and the copy can only be read-only.
    IMHO, if you can put Site C in the same domain, AlwaysOn is the better solution; otherwise, use log shipping.
    Also, if your database uses Enterprise-level features such as partitioning or data compression, you cannot restore it on lower editions, so you would need Enterprise edition in that case.
    Hope it helps!
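    If Site C can be joined to the same Windows cluster and domain, a minimal sketch of adding it as an asynchronous, readable secondary (the availability group, server, and endpoint names are placeholders):

        ALTER AVAILABILITY GROUP [AG_Prod]
            ADD REPLICA ON N'SITEC-SQL'
            WITH (
                ENDPOINT_URL = N'TCP://sitec-sql.corp.local:5022',
                AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                FAILOVER_MODE = MANUAL,
                SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)
            );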

  • Read Only SQL 2012 Replica DB on a SQL 2014 box

    I have set up an availability group between two SQL 2012 boxes in my Windows cluster; they fail over fine and synchronize as expected.
    I want to add an additional read-only replica on my SQL 2014 box, which is also part of the Windows cluster (so I can use this replica for reporting with the 2014 in-memory features); failover remains between the two 2012 boxes.
    The problem is that even though I have set this replica's readable-secondary option to Yes, the database is always in Synchronized / In Recovery, so I can never connect to it.
    Is this scenario possible?

    Thanks David
    Yes, the whole point of this was to get access to real-time data from within 2014, but the more I think about it, a linked server still would not give me what I want, as I believe in-memory is defined at table-creation level.
    With the tables being created in 2012, that would be a problem.
    Thanks for the info, mate.

  • Read-only agent synching to a Data Guard physical standby?

    Hi all,
    we are trying to use TimesTen 11.2.2.4.1 as a read-only in-memory cache for an Oracle 11.2.3.0.7 schema on Linux Red Hat 6.3, while using Oracle Data Guard to replicate the Oracle instance across geographically remote sites. On each site we would like to have two TT instances synchronizing with the local Oracle 11g instance. This works fine against the master DB, but will the TT agents be able to synchronize against physical standby instances?
    The problem, it seems, is that the TT agent uses dedicated structures in the Oracle master instance (related to the cache grid), which are replicated into the standby instances. Is the TT agent able to use the read-only, replicated structures to complete synchronization, or is this approach unworkable? What would be your advice on how to achieve this?
    Thanks for your help,
    Chris

    Hi again,
    so after testing a little, it appears that this approach does indeed work, at least for a limited number of manual DML operations. What I needed to do on the standby instance to get it working is the following:
    1 - Entirely exclude TTADMIN and TIMESTEN schemas from the Data Guard replication:
    ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    execute dbms_logstdby.skip(stmt => 'SCHEMA_DDL', schema_name => 'TTADMIN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'SCHEMA_DDL', schema_name => 'TIMESTEN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'TTADMIN', object_name => '%');
    execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'TIMESTEN', object_name => '%');
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    2 - Erase both schemas from the local instance:
    DROP USER TTADMIN CASCADE;
    DROP USER TIMESTEN CASCADE;
    CREATE USER TTADMIN etc
    3 - Temporarily disable the database guard while creating the local ttCache structures, as the scripts seem to need to set a table-level lock on the source table:
    ALTER DATABASE GUARD NONE;
    ttIsql> CREATE READONLY CACHE GROUP etc
    ALTER DATABASE GUARD STANDBY;
    4 - Unset the "Fire_Once_Only" property for the local TTADMIN triggers:
    execute dbms_ddl.set_trigger_firing_property(trig_owner=> 'TTADMIN', trig_name=> 'TT_06_70560_T', fire_once => FALSE);
    At that point the cache seems to replicate properly in the simplest cases. I will try to test with some substantial load and against DG failovers to see how this behaves.
    Regards,
    Chris

  • DFS-R hub server - which way to make read-only?

    Hi all,
    This is more of a technical clarification than a problem we're having...
    Firstly, we have a bunch of file servers in various EMEA regions, which are set up to replicate back to a central hub server at our UK datacentre in order to take backups. These spoke servers are the only location where the data is changed.
    We are in the middle of migrating the hub server over to Windows Server 2008R2, meaning we can use the supported one-way replication method instead of disabling one end of the member connections.
    My question is this:
    Which member do we need to make read-only? This would have seemed obvious to me, however there is some confusion amongst our systems engineers.
    I would have said that you make the spoke member read-only, as you don't want the hub replicating anything back to it. This makes perfect sense to me, but not everyone is so sure.
    Could someone please clarify?

    I think that is what I want to achieve - many remote spoke/writable downstream servers talking to one central hub server, so that only the hub has the content of them all - but what do you mean by "configure your DFS targets ... such that the HubServerB is disabled"? Isn't that done by making it read-only?
    I've just found this -
    Make a Replicated Folder Read-Only on a Particular Member (http://technet.microsoft.com/en-us/library/dd759239.aspx).
    Well. There is a clear explanation that users cannot add or change files in a read-only replicated folder, but all three examples show the remote servers as read-only while the hub server is always writable, don't they?
    The RODC example is clear - for safety reasons, no user-made changes are allowed on a remote DC, but it receives changes from the writable (central/hub) DC or DCs.
    Report files generated and changed only on the hub server - OK. But I can imagine the opposite situation, where branch offices are responsible for producing and changing reports which are read-only at company headquarters. Would the remote offices do all of that on the hub server, with the reports then replicated out to remote read-only member servers? What for?
    Keeping installation files on remote servers is also hard for me to understand. I would rather keep them on the central hub server, propagate them to the remote servers, and make the hub server read-only to prevent backward replication.
    I'm not a native English speaker, so maybe I am missing the exact sense of this article. Can somebody help me, please?
    Krzysztof

  • Stuck on ISW installation step - Config of other master and Read only

    I'm trying to figure out how to complete one of the last steps of my ISW install. I am stuck on this step:
    Configure the Sun Directory Server plugin on every other master and read-only replica that manages users under o=myorganization.


  • Read-only problem

    I'm completely new to JDeveloper (v 9.0.3.5) with the OA extension components. I'm having problems after copying the various XML and Java class files from the server to my local PC, replicating the structure on the server. When I then add these to my project, the BCs come in as read-only, as do all the files within each BC. If I create a new BC, it is fine and fully editable. I am obviously missing a trick here. Can anyone shed any light on this?
    Thanks Jeanette

    Hi,
    1. Select the project folder in the explorer and set the "Read-Only" property to false.
    2. Re-import the project once again.
    Regards, Anilkumar
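    Alternatively, a minimal command-line sketch (assuming the files were copied into a local Windows folder) to clear the read-only attribute before re-importing:

        REM Run from the root of the copied project; -R clears the
        REM read-only attribute and /S recurses into subfolders.
        attrib -R *.* /S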

  • Load balancing not happending but fail over is for Read only Entity beans

    The configuration is as follows:
    Two NT servers with WL5.1 sp9 hosting only EJBs (read-only entity beans).
    One client with WL5.1 sp9 hosting a servlet/Java application as the EJB client.
    I am making calls such as findByPrimaryKey on one of the entity beans. I can see that requests are always directed to only one of the servers. When I bring that server down, failover to the other server happens.
    Here are the settings I have in ejb-jar.xml:
    <entity>
        <ejb-name>device.StartHome</ejb-name>
        <home>com.wl.api.device.StartHome</home>
        <remote>com.wl.api.device.StartRemote</remote>
        <ejb-class>com.wl.server.device.StartImpl</ejb-class>
        <persistence-type>Bean</persistence-type>
        <prim-key-class>java.lang.Long</prim-key-class>
        <reentrant>False</reentrant>
        <resource-ref>
            <res-ref-name>jdbc/wlPool</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <res-auth>Container</res-auth>
        </resource-ref>
    </entity>
    Here are the settings I have in weblogic-ejb-jar.xml:
    <weblogic-enterprise-bean>
        <ejb-name>device.StartHome</ejb-name>
        <caching-descriptor>
            <max-beans-in-cache>50</max-beans-in-cache>
            <cache-strategy>Read-Only</cache-strategy>
            <read-timeout-seconds>900</read-timeout-seconds>
        </caching-descriptor>
        <reference-descriptor>
            <resource-description>
                <res-ref-name>jdbc/wlPool</res-ref-name>
                <jndi-name>weblogic.jdbc.pool.wlPool</jndi-name>
            </resource-description>
        </reference-descriptor>
        <enable-call-by-reference>False</enable-call-by-reference>
        <jndi-name>device.StartHome</jndi-name>
    </weblogic-enterprise-bean>
    Am I making any mistake in this?
    Anyone's help is appreciated.
    Thanks
    Suresh

    we are using 5.1
    "Gene Chuang" wrote:
    > Colocation optimization occurs if your client resides in the same container (and also in the same EAR for 6.0) as your ejbs.
    > Gene
    "Suresh" wrote:
    > Ok... the ejb-call-by-reference set to true is making the calls go to one server only; I am not sure why. I removed the property name and it works. Also, one question: in our production environment, when I cache the EJB home, load balancing does not happen. Can anyone help me with that?
    > Mike: from the sample program I sent, calls get load balanced even from a single client.
    "Gene Chuang" wrote:
    > In WL, load balancing will ONLY WORK if you reuse your EJBHome! Take your StartEndPointHome lookup out of your for loop and see if this fixes your problem.
    > I've seen this discussion in ejb-interest, and some other vendor (Borland, I believe it is) brings up an interesting point: clustering and load balancing are not in the J2EE specs, hence the implementation is totally up to the vendor. WebLogic load-balances from the remote interfaces (EJBObject, EJBHome, etc.), while Borland load-balances from the JNDI Context lookup.
    > Let me suggest a third implementation: load-balance from BOTH the Context lookup as well as stub method invocation! Or create a smart replica-aware list manager which persists on the client thread (ThreadLocal) and is aware of lookup/invocation history. Hence if I do the following in a client hitting a 3-node cluster, I'll still get perfect round-robining regardless of what I do on the client side:
    >
    >     InitialContext ctxt = new InitialContext();
    >     EJBHome myHome = (EJBHome) ctxt.lookup(MY_BEAN);
    >     myHome.findByPrimaryKey(pk);   // hits Node #1
    >     myHome = (EJBHome) ctxt.lookup(MY_BEAN);
    >     myHome.findByPrimaryKey(pk);   // hits Node #2
    >     myHome.findByPrimaryKey(pk);   // hits Node #3
    >     myHome = (EJBHome) ctxt.lookup(MY_BEAN);
    >     myHome.findByPrimaryKey(pk);   // hits Node #1
    >
    > Gene
    "Suresh" wrote:
    > Mike, do you have any reason for the total number of machines to be 10? I tried with 7 machines.
    > Here is my sample client Java application, run individually on each of the seven machines:
    >
    >     StartEndPointHome = (StartEndPointHome) ctx.lookup("dev.StartEndPointHome");
    >     for (;;)
    >     {
    >         // logMsg(" --in loop " + currentTime);
    >         if (currentTime > nextRefereshTime)
    >         {
    >             logMsg("****- going to call");
    >             currentTime = getSystemTime();
    >             nextRefereshTime = currentTime + timeInterval;
    >             StartEndPointHome = (StartEndPointHome) ctx.lookup("dev.StartEndPointHome");
    >             long rndno = (long) (Math.random() * 10) + range;
    >             logMsg(" going to call remotestub" + rndno);
    >             retVal = ((StartEndPointHome) getStartHome()).findByNumber("pe" + rndno + "_mportal_dsk36.mportal.com");
    >             logMsg("**++- called stub");
    >         }
    >     }
    >
    > The range value is different for each of the machines in the cluster.
    > If the first request starts at srv1, all requests keep hitting that same server. If the first request starts at srv2, all requests keep hitting that same server.
    > I have the following url, user and password values for the context:
    >
    >     public static String url = "t3://10.11.12.14,10.11.12.117:8000";
    >     public static String user = "guest";
    >     public static String password = "guest";
    >
    > It would be great if you could help me.
    > Thanks
    > suresh
    "Mike Reiche" wrote:
    > If you have only one client, don't be surprised if you only hit one server. Try running ten different clients and see if they hit the same server.

  • Internal card reader thinks SD card is in "read only" mode??

    I just took some pictures with my digital camera, which uses SD cards.
    I then took the card out of my camera and put it into the built-in SD reader on my 13" MacBook Pro (the same way I always do).
    However, the Finder thinks my card is in "read only mode", as I cannot delete or add pictures on the card. I took the card out and toggled the read-only switch, thinking maybe it got stuck or something, and tried again. Still no luck.
    Then I tried a USB card reader. That reader detected the card and let me read and write to it.
    This is the first time this has ever happened.
    Any ideas why the internal card reader won't do anything other than read the card, while my USB card reader can read and write with no problem?
    Thanks,
    Scott

    I reset the PRAM but it did not help.
    I think I found the solution, though.
    It depends on the way you insert the card.
    You can't just slide it in; rather, you have to push it in a particular way (which I can't describe or remember).
    So I guess the answer is: if the card comes up as read-only, take the card out and push it in a different way until it comes up correctly.
    I replicated this issue twice, which is why I believe the alignment is the problem.
    If it gets any worse, I'll probably have to get the slot serviced, but considering I only use the slot once a month or so, it's not a top priority. Too bad I couldn't just have an extra USB port or an ExpressCard slot - but apparently Apple thinks SD cards are more important...

  • DFSR Read-Only - Problems with Disaster Recovery?

    Hi guys,
    I have (2) 2008 R2 file servers. One is production and one is for DR. First, I have to make sure that the DR server never writes back to production in any situation. That means that if the (2) servers stop communicating with each other and data is deleted off the production server, the read-only DR server must not put the deleted files back onto the production server when the connection is restored, etc. It sounds like I am covered there.
    The question is: what happens if the production server crashes and I now want to promote the DR server to production? Does the data on the DR member still have the same NTFS permissions? It sounds like it does, but would I then just go into dfsmgmt.msc and mark the replicated folder as read-write, force AD replication, run a dfsrdiag pollad, and then redirect the users to the DR server? I know the content may not be 100%, but the backup plan has always been to disable strict name checking, change the host record for the production server to the IP address of the DR server, and redirect users there for 99% of the data. I think that files that were open on the production server when it crashed would not have had their updates replicated across.
    Let me know if that would work from the DFS side of things.
    Dan Heim

    Hi Dan,
    I think this article provided an answer to your question:
    Read-Only Replication in R2
    http://blogs.technet.com/b/askds/archive/2010/03/08/read-only-replication-in-r2.aspx
    An RW replicated folder can be converted to an RO replicated folder (and back) “on the fly”.
    Converting will cause a non-authoritative sync to occur on the replicated folder for the server being altered. 
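    A hypothetical sketch of the conversion steps Dan lists, after marking the replicated folder read-write in dfsmgmt.msc (the member name below is a placeholder):

        REM Force AD replication of the membership change, then have the
        REM DR member poll AD so the change takes effect immediately:
        repadmin /syncall /AdeP
        dfsrdiag pollad /member:CORP\DRSERVER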
    If you have any feedback on our support, please send to [email protected]

  • The options to replicate a secondary read-only copy of a big database with limited network connection?

    There is a big database on a remote server, and a read-only replica of it is required on a local server. The data can only be transferred via FTP, etc. It is OK to replicate it once a day.
    Log shipping is an option; however, it needs to kill all connections when restoring. What are the other options (pros/cons)? How about merge replication or the .NET Sync Framework?

    Hi ydbn,
    Do you need to update data on the local server and propagate those changes to the remote server? If not, you can use log shipping or transactional replication to achieve your requirement. Log shipping does not need to kill all the connections if you clear the "Disconnect users in the database when restoring backups" check box when configuring it.
    With transaction replication, the benefits are as follows.
    Synchronization. This method can be used to keep multiple subscribers synchronized in real time.
    Scale out. Transactional replication is excellent for scenarios in which read-only data can be scaled
    out for reporting purposes or to enable e-commerce scalability (such as providing multiple copies of product catalogs).
    There are a few disadvantages of utilizing transaction replication, including:
        • Schema changes/failover. Transactional subscribers require several schema changes that impact foreign keys and impose other constraints.
        • Performance. Large-scale operations or changes at the publisher might require a long time to reach subscribers.
    However, if you need to update data on the local server and propagate those changes to the remote server, merge replication is more appropriate, and it comes with the following advantages:
        • Multi-master architecture. Merge replication does allow multiple master databases. These databases can manage their own copies of data and marshal those changes as needed between other members of
    a replication topology.
        • Disconnected architecture. Merge replication is natively built to endure periods of no connectivity, meaning that it can send and receive changes after communication is restored.
        • Availability. With effort on the part of the developers, merge-replicated databases can be used to achieve excellent scale-out and redundancy options.
    Merge replication comes with some disadvantages, including:
        • Schema changes. Merge replication requires the existence of a specialized GUID column per replicated table.
        • Complexity. Merge replication needs to address the possibility for conflicts and manage operations between multiple subscribers, which makes it harder to manage. For more details, please review this
    article.
    For the option of the Sync Framework, I would like to recommend you post the question in the Sync Framework forums at
    https://social.msdn.microsoft.com/Forums/en-US/home?category=sync . That forum is more appropriate, and more experts will assist you there. You can also check this
    article about an introduction to Sync Framework database synchronization.
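    As a minimal sketch of the standby-mode log shipping restore described above (the database name and file paths are hypothetical):

        -- Restore a shipped log backup WITH STANDBY so the database stays
        -- readable between restores; each restore itself still needs
        -- exclusive access to the database.
        RESTORE LOG BigDatabase
            FROM DISK = N'C:\logship\BigDatabase_20150430.trn'
            WITH STANDBY = N'C:\logship\BigDatabase_undo.ldf';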
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Iomega external hard drive either 'not found' by Time Machine or is now in 'read only' format

    I don't know what's going on with my Iomega external hard drive. Sometimes the HD is recognised by the computer and other times it isn't; and even when I can restore to an earlier backup via the Time Machine app, it's as if the disk isn't writable.
    Ten days ago, I had trouble backing up my iMac using Time Machine. After turning it off and restarting my external hard drive, it backed up successfully.
    However, today I am trying to back up my files, and TM tells me it can't find the external hard drive. Nothing has changed -- I have been out of town this past week -- but somehow my HD now seems to be in 'read only' format. It does turn on, and I could restore my computer to an earlier backup.
    "Mac OS X can't repair the disk.  You can still open or copy files on the disk, but you can't save changes to files on the disk.  Back up the disk and reformat it as soon as you can."
    Annoyingly, despite my HD not showing in Finder or on my Desktop (as it usually does), when I simply turn it off I get the warning message "The disk was not ejected properly".
    I have tried verifying and repairing using Disk Utility, to no avail.
    I am prompted to reformat, but I don't have a good understanding of what this means and how to do it. I'm guessing it would wipe the disk clean and I'd have to create the initial backup image all over again -- I'm reluctant to do this since this HD is my only backup, and if it all goes tits up I'll be up that famous creek without a paddle.
    Thoughts, suggestions, input all welcome and deeply appreciated -- thank you!

    fzgy wrote:
    "Mac OS X can't repair the disk.  You can still open or copy files on the disk, but you can't save changes to files on the disk.  Back up the disk and reformat it as soon as you can."
    It's possible a heavy-duty 3rd-party disk repair app can fix it, but they're expensive (DiskWarrior is about $100), and there's no guarantee it can do it.
    Am prompted to reformat but I don't have a good understanding of what this means and how to do it.
    That will erase it.  See Time Machine - Frequently Asked Question #5.
    It sounds very much like the disk is beginning to fail, although it's possible there's a bad port, cable, connection or power supply (if it has its own).
    I'd suggest getting a new one, and using it for your Time Machine backups; once you have a good backup there, reformat the old one (and select Security Options to write zeros to the whole drive -- if that fails, you know the drive is toast).  Use it for secondary backups, per FAQ #27.   If it has failed, get a second new one for secondary backups. 
