iPlanet DS 5.1 SP1 Replication Problem
Hi:
I am running iPlanet Directory Server 5.1 SP1 on Solaris 8 boxes. I have a single master and a couple of consumers. Sometimes replication stops working: the master binds to the consumers and, according to the consumer logs, performs a search and a couple of extended operations, but no MOD operation to actually send any data across.
I am replicating two suffixes, and they seem to work or fail independently. I've tried rebooting the master and the consumer, and even reinitializing the consumer, which completes just fine, but it still won't send data updates. No errors are logged on either the master or the consumer, and the master's replication status indicates no errors, but it also shows that no changes are being sent.
I don't know if this is pertinent, but the master is configured as a legacy consumer and gets its data from a DS4.12 server.
The symptoms are very similar to what was reported by Chandra Mouli in June 2002. Unfortunately, the response then was to wait for the next version. If anyone has any thoughts on what I am doing wrong that is triggering this bug, I would greatly appreciate hearing from you. Thanks.
Russ
Hi Olivier,
Usually, this error occurs when you upgrade to SQL Server 2012 SP2; upgrading to SQL Server 2012 SP2 with CU1 normally resolves it. According to your description, however, the error still exists.
As a workaround, we suggest changing the column type for “last_local_recgen” in the table named “sysmergesubscriptions” in both the publisher and subscriber databases. Microsoft added three additional columns to the replication table 'dbo.sysmergesubscriptions' in SP2, and one of those columns has the wrong datatype. You can use the script below to fix the datatype.
if exists (select *
           from sys.columns sc
           inner join sys.types st on sc.system_type_id = st.system_type_id
           where sc.object_id = object_id('dbo.sysmergesubscriptions')
             and sc.name = 'last_local_recgen'
             and st.name = 'uniqueidentifier')
begin
    begin transaction
        alter table dbo.sysmergesubscriptions drop column last_local_recgen
        alter table dbo.sysmergesubscriptions add last_local_recgen bigint null
    commit transaction
end
Regards,
Sofiya Li
TechNet Community Support
Similar Messages
-
Re: [iPlanet-JATO] sp3 jsp compiler problem
Weiguo,
First, Matt is correct, the regular expression tool is perfect for general text
substitution situations, and as a completely independent tool its use is not
restricted to migration situations (or file types for that matter).
Second, I sympathize with the unfortunate trouble you are experiencing due to
Jasper's (perhaps more strict) compilation, but in what way did the iMT
automated translation contribute to these inconsistencies that you cited?
1. Changed the case of the tag attribute to be the same as what's
defined in tld.
example: changed OnClick to onClick
The iMT does not generate any OnClick or onClick clauses per se. In a
translation situation, the only way "OnClick" would have been introduced was if
it had been part of the pre-existing project's "extraHTML" (which was written
by the original customer and just passed through unchanged by the iMT) or if it
was added manually by the post-migration developer.
2. Removed attributes which are not defined in tld.
example: escape attribute only defined in three tags
but in some pages it's used even though it's not defined as an attribute
of certain tags. The jasper compiler doesn't like it.
Can you give some examples? Is there a definite pattern? Again, this might be
similar to the OnClick situation described above?
3. In an end tag, there can't be any space.
example: </content > doesn't work. </content> works.
Again, the content tag would never have been generated by the iMT. There was no
equivalent in the NetDynamics world, so any content tags in your code must have
been introduced by your developers manually. It's a shame that jasper is so
particular, but the iMT could not help you out here even if we wanted to. The
constants that are used by the iMT are defined in
com.iplanet.moko.jsp.convert.JspConversionConstants. From what I can see, the
only situation of a closing tag with any space in it is
public static final String CLOSE_EMPTY_ELEMENT = " />";
But that should not cause the type of problem you are referring to.
Mike
----- Original Message -----
From: Matthew Stevens
Sent: Thursday, September 06, 2001 10:16 AM
Subject: RE: [iPlanet-JATO] sp3 jsp compiler problem
Weiguo,
Others will chime in for sure...I would highly recommend the Regex Tool from
the iMT 1.1.1 for tackling this type of problem. Mike, Todd and myself have
posted to the group (even recently) on directions and advantages of creating
your own RULES (rules file) in XML for arbitrary batch processing of source.
matt
-----Original Message-----
From: weiguo.wang@b...
Sent: Thursday, September 06, 2001 12:25 PM
Subject: [iPlanet-JATO] sp3 jsp compiler problem
Matt/Mike/Todd,
We are trying to migrate to sp3 right now, but have had a lot of
issues with the new jasper compiler.
The following workaround has been employed to solve the issues:
1. Changed the case of the tag attribute to be the same as what's
defined in tld.
example: changed OnClick to onClick
2. Removed attributes which are not defined in tld.
example: escape attribute only defined in three tags
but in some pages it's used even though it's not defined as an attribute
of certain tags. The jasper compiler doesn't like it.
3. In an end tag, there can't be any space.
example: </content > doesn't work. </content> works.
As I see it, we have two options to go about solving this problem:
1. Write a script which will iterate through all the jsp files and
call jspc on them. Fix the errors manually when jspc fails. Jspc will
flag the line number where an error occurs.
2. Write a utility which scans the jsp files and fix the errors when
they are encountered. We should define what's an error and how to
correct it. It's best if we combine this with solution 1 since we
might miss an error condition.
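Option 2 above (a utility that scans the JSP files and rewrites them) could be sketched in Python. The patterns below cover only the three cases listed; the tag and attribute names are illustrative assumptions, not the real TLD contents:

```python
import re

def fix_jsp(source):
    """Rewrite JSP text so it passes the stricter jasper compiler.
    Covers only the three cases described in the post."""
    # 1. Normalize attribute case to match the tld, e.g. OnClick -> onClick
    source = re.sub(r'\bOnClick(?==)', 'onClick', source)
    # 2. Drop an attribute the tld does not define for a tag. NOTE: a real
    #    tool must check each tag's tld entry first, since 'escape' IS
    #    legal on three tags; stripping it globally here is a simplification.
    source = re.sub(r'\s+escape="[^"]*"', '', source)
    # 3. No whitespace allowed inside an end tag: </content > -> </content>
    source = re.sub(r'</\s*([\w:]+)\s+>', r'</\1>', source)
    return source

print(fix_jsp('<jato:href OnClick="go()" escape="true">x</jato:href >'))
# -> <jato:href onClick="go()">x</jato:href>
```

Combining this with option 1 (running jspc and fixing what it flags) would catch any error conditions the patterns miss.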
Actually, there might be another option, which is seeking help from
you guys since you have better understanding of JATO and iAS. Can you
do anything to help us?
We would be happy to hear your thoughts.
At last, I would like to suggest modifying the moko tool so that
these rules are enforced and the generated JSPs work with the new
compiler. This is for the benefit of any new migration projects.
Thanks a lot.
Weiguo
Thanks a lot Matt and Mike for your prompt replies.
I agree completely that iMT doesn't introduce the inconsistencies.
About the three cases I mentioned, the third one happens only in
manually created JSPs. So it has nothing to do with iMT. The first
two are mainly due to the existing HTML code, as you rightly pointed
out.
The reason I made the suggestion is since we know that case 1 and 2
won't pass the jasper compiler in sp3, we have to do something about
it. The best place to do this, in my mind, is iMT. Of course, there
might be some twists that make it impossible or difficult to do this
kind of case manipulation or attribute discard.
Weiguo
-
How can I solve this replication problem between TT and ORACLE
Hi
I have an application that uses an AWT cache group to implement replication between TimesTen (7.0.6.7) and Oracle (10g),
but I encounter this problem:
16:16:50.01 Err : REP: 2302682: ABM_BAL_WH:meta.c(4588): TT5259: Failed to store Awt runtime information for datastore /abm_wh/abm_bal_ttdata/abm_bal_wh on Oracle.
16:16:50.02 Err : REP: 2302682: ABM_BAL_WH:meta.c(4588): TT5107: TT5107: Oracle(OCI) error in OCIStmtExecute(): ORA-08177: can't serialize access for this transaction rc = -1 -- file "bdbStmt.c", lineno 3726, procedure "ttBDbStmtExecute()"
16:16:50.02 Err : REP: 2302682: ABM_BAL_WH:receiver.c(5612): TT16187: Transaction 1316077016/357692526; Error: transient 0, permanent 1
The isolation level of my data store is read-committed, and the sys.ODBC.INI file is also set to Isolation=1 (read-committed mode),
so I still wonder why I get the ORA-08177 error.
How can I solve this replication problem?
Thank you.
I suspect this is failing on an UPDATE to the tt_03_reppeers table on Oracle. I would guess the TT repagent has to temporarily use serializable isolation when updating this table. Do you have any other datastores with AWT cache groups propagating into the same Oracle database? Or can you identify whether some other process is preventing the repagent from using serializable isolation? If you google ORA-08177, there seem to be ways to narrow down what's causing the contention.
-
Session in-memory replication problem
Hi,
I am running into some cluster HttpSession replication problems. Here is
the scenario where replication fails (all servers mentioned here are a
part of a cluster).
1a - 2 Weblogic servers (A&B) are running - no users logged in,
2a - user logs in and a new session in server A is created.
3a - after several interactions, server A is killed.
4a - after user makes subsequent request, Weblogic correctly fails over
to server B
Problem: Not all of the session data is replicated. The authentication info
seems to be replicated correctly, but there are some collections in the session
on server A that did not make it to the session on server B.
The interesting part is this: if there is only one server A running to begin
with, and a user interacts with it for a while, and only then server B is
started, then when server A dies after server B starts up, the entire session
(which is exactly the same as in the failing scenario) is correctly replicated
on B, including the collections that were missing in the failing scenario.
How can this be possible ????
Thanks for any info on this one - it really puzzles me.
Andrew
Yes, you are on the right track. Every time you modify the object you should call
putValue. We will make it more clear in the docs.
- Prasad
Andrzej Porebski wrote:
> Everything is Serializable. I get no exceptions. I did however read some old
> posts regarding
> session replication and I hope I found an answer. It basically seems to boil
> down to what
> triggers session sync-up between servers. In my case , I store an object into
> session and
> later on manipulate that object directly without session involvement and the
> results of those manipulations
> are not replicated - no wonder if HttpSession's putValue method is the only
> trigger.
> Am i on the right track here?
>
> -Andrew
>
> Prasad Peddada wrote:
>
> > Do you have non serializable data by any chance?
> >
> > - Prasad
> >
> > --
> > Cheers
> >
> > - Prasad
>
> --
> -------------------------------------------------------------
> Andrzej Porebski
> Sailfish Systems, Ltd. Phone 1 + (212) 607-3061
> 44 Wall Street, 17th floor Fax: 1 + (212) 607-3075
> New York, NY 10005
> -------------------------------------------------------------
-
Hi Friends,
I have a Windows Server 2003 domain at two locations. A few months back replication was working fine, but now I can no longer see replicated data, so I think I have a replication problem.
I got some event ID errors on the server.
Event ID errors: 5002, 4202, 1925, 13568
Please help me.
Thanks,
Madhukar
The 4202 means the staging quota size is too small.
Run these two PowerShell commands to determine the correct staging quota size:
$big32 = Get-ChildItem DriveLetter:\FolderName -Recurse | Sort-Object Length -Descending | Select-Object -First 32 | Measure-Object -Property Length -Sum
$big32.Sum / 1GB
Take that resulting number, round it up to the nearest whole integer, multiply it by 1024, and enter that number on the Staging tab of the Properties of a replicated folder in DFS Management.
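The same arithmetic those two commands and the rounding step perform can be sketched in Python (folder scanning is omitted; the function just takes a list of file sizes in bytes):

```python
import math

def staging_quota_mb(file_sizes_bytes, top_n=32):
    """Recommended DFSR staging quota in MB: the combined size of the
    top_n largest files, rounded up to the nearest whole GB, in MB."""
    biggest = sorted(file_sizes_bytes, reverse=True)[:top_n]
    total_gb = sum(biggest) / 1024**3   # same as $big32.Sum / 1GB
    return math.ceil(total_gb) * 1024   # round up, then x 1024 for MB

# Example: 40 files of 1 GiB each -> top 32 sum to 32 GB -> 32768 MB
print(staging_quota_mb([1024**3] * 40))  # 32768
```

In practice you would feed it the sizes gathered from the replicated folder (e.g. via os.walk) rather than a synthetic list.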
More info here:
http://blogs.technet.com/b/askds/archive/2007/10/05/top-10-common-causes-of-slow-replication-with-dfsr.aspx
Run this command to tell you the status of Replication:
wmic /namespace:\\root\microsoftdfs path DfsrReplicatedFolderInfo get replicatedFolderName, State
0: Uninitialized
1: Initialized
2: Initial Sync
3: Auto Recovery
4: Normal
5: In Error
Let us know how that goes. -
Replication problem with iPlanet directory server 5.1 SP2 HF1
If I apply a change to either of the consumer servers for an entry that belongs to the large database, that change does get applied to the targeted consumer, but it cannot refer the change to the master. Consequently, neither the master nor the other consumers get updated. I did not have this problem with Directory Server 5.1 SP1; I only see this problem after applying Directory Server 5.1 SP2 HF1.
From the error log file, I see the following message:
NSMMReplicationPlugin - repl_set_mtn_referrals: could not set referrals for replica
I have a suggestion - try another means of administering your directory - use the console only for maintenance and tuning purposes. There are several products out there that are much better for day-to-day operations ...
Otherwise - I think with 5.1 the view is based on the rdn of the entries - and I am not sure it is customizable. Additionally I know 5.2 solved your second issue - maybe the latest SP of 5.1 has solved it as well - though I don't really know ...
-Chris Larivee -
Weblogic 7.0 sp1 cluster - session replication problem
Hi,
I have installed Weblogic 7.0 sp2 on Win NT. To test clustering
feature, I have installed one admin server and added two managed
servers. All are running on same box. I could deploy web application
to the cluster. Connection pools and every other resource is working
well with the cluster. However I couldn't get session replication to
work. I have modified web app descriptor, and set 'persistent store
type' to "replicated".
I accessed application from one managed server, in the middle of
session I modified the port number in the URL to point to other
managed server. It looks like second managed server has no idea of
that session, my app fails because of this.
Could you please help me out in this, Do I need to do any thing in
addition to the above. I couldn't find much in the BEA manual..
Thanks
Rao
For web applications like servlets/JSPs, it is better to put a web server with the proxy plug-in in front of your two managed servers and access your application through the proxy. (You need to set the session to in-memory replicated, either in weblogic.xml or via the console editor.) Otherwise, you would need to record the session cookie from the first server and send the cookie to the second server (not sure if that works). To access EJB/JMS, use a cluster URL like t3://server1:port1,server2:port2.
-
SCCM 2012 after SP1 replication link has failed
Hi guys,
after installing SP1 on SCCM 2012 I have a problem with the Global Data replication status and Site Data replication status on P02. I have a CAS (C01) and primary sites P01, P02, P03, P04; P02 is the only one that is badly broken. The rest of the primary sites have an OK Global Data replication status.
One week after the patch, site P02 is still in Read Only mode; rcmctrl.log says: The current site status: ReplicationMaintenance.
CAS C01 has this in rcmctrl.log:
ERROR: Replication group "General_Site_Data" has failed to initialize for subscribing site C01, setting link state to Error.
Retry for process replication is requested.
Cleaning the RCM inbox if there are any *.RCM files for further change notifications....
Rcm control is waiting for file change notification or timeout after 60 seconds.
No connector role installed
DRS change application started.
Site is NOT active, so not calling the DrsActivation procedures on DRSSite queue.
replmgr.log says nothing special to be mentioned.
and when I run the Replication Link Analyzer on C01 against the link to P02, ReplicationAnalysis.xml says:
Replication initialization is in progress for replication groups EndpointProtection_Site, Medium_Priority_Site, Operational_Data, General_Site_Data, MDM_SiteSpecial, MDM_Site. Found replication groups Replication Configuration that needs to be re-initialized.
Reason code: ReinitializeSenderDespoolerError. Replication initialization is in progress for replication groups Site Control Data, Configuration Data, Registration Data, Language Neutral, Asset Intelligence Knowledge Base, EndpointProtection_ThreatMetaData,
Alerts, Group Relationships. "
Target site P02 is in maintenance mode. Not checking transmission messages sent to queue ConfigMgrDRSQueue.
Target site C01 is in maintenance mode. Not checking transmission messages sent to queue ConfigMgrDRSQueue.
so I need to somehow quit this maintenance mode. Following this link did not help me:
http://blogs.msdn.com/b/minfangl/archive/2012/05/16/tips-for-troubleshooting-sc-2012-configuration-manager-data-replication-service-drs.aspx
I restarted all these 5 servers with no luck.
thank you for any hint/ help
Jaromir
In my case, this message is visible in rcmctrl.log on both the CAS and the primary site: ERROR: Replication group "General_Site_Data" has failed to initialize for subscribing site C01, setting link state to Error.
But today things moved on: my second primary site P02 fell out of maintenance mode and a lot of replication happened, but General_Site_Data still remains a problem.
The steps taken were: go to the Monitoring node > Database Replication > choose the link which has the problem, and at the bottom check Initialization Detail;
you can see which replication group has the problem.
Then in SQL Server Management Studio we ran the following command, with the relevant site names and the replication group that is affected in my case:
select * from RCM_DrsInitializationTracking
where ReplicationGroup = 'General_Site_Data' and SiteFulfilling = 'P02'
It will show you the status.
Then this update query will force that replication group to start replicating (parameter 7); we ran it many times against the same or different replication groups which had problems:
update RCM_DrsInitializationTracking set InitializationStatus = 7 where ReplicationGroup = 'General_Site_Data' and SiteRequesting = 'C01' and SiteFulfilling = 'P02'
It's still not a win, but we are moving forward. It doesn't have to be the same issue you have, but it may point you somewhere...
J.
Jaromir -
Tablespace Replication Problem - high disk I/O
Hi.
I'm doing some R&D on Oracle Streams. I have set up tablespace replication between two 10g R2 instances, and data seems to replicate between the two. These two instances have no applications running on them apart from OEM and the queries I run using SQL Developer and SQL*Plus.
The problem I'm seeing is that since setting up and switching on this replication config, disk I/O has been high. I'm using Windows Performance Monitor to look at:
- % Disk Time = 100%
- Avg Disk Writes/sec = 20
- Avg Disk Reads/sec = 30
- CPU % = 1
- % Committed Mem = 40%
To me this just looks/sounds wrong.
This has been like this for about 24hrs.
OEM ADDM report says "Investigate the cause for high "Streams capture: waiting for archive log" waits. Refer to Oracle's "Database Reference" for the description of this wait event. Use given SQL for further investigation." I haven't found any reference to this anywhere.
Anybody got any ideas on how to track this one down? Where in my db's can I look for more info?
Platform details:
(P4, 1GB RAM, IDE disk) x 2
Windows Server 2003 x64 SP1
Oracle 10.2.0.1 Enterprise x64
Script used to setup replication:
set echo on;
connect streamadmin/xxxx;
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => '"STM_QT"',
queue_name => '"STM_Q"',
queue_user => '"STREAMADMIN"');
END;
--connect streamadmin/xxxx@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=oratest2.xxxx.co.uk)(PORT=1521)))(CONNECT_DATA=(SID=oratest2.xxxx.co.uk)(server=DEDICATED)));
connect streamadmin/xxxx@oratest2;
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => '"STM_QT"',
queue_name => '"STM_Q"',
queue_user => '"STREAMADMIN"');
END;
connect streamadmin/xxxx;
create or replace directory "EMSTRMTBLESPCEDIR_0" AS 'D:\ORACLE\DATA\ORATEST1';
DECLARE
t_names DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
t_names(1) := '"ADFERO_TS"';
DBMS_STREAMS_ADM.MAINTAIN_TABLESPACES(
tablespace_names => t_names,
source_directory_object => '"DPUMP_DIR"',
destination_directory_object => '"DPUMP_DIR"',
destination_database => 'ORATEST2.xxxx.CO.UK' ,
setup_streams => true,
script_name => 'Strm_100407_1172767271909.sql',
script_directory_object => '"DPUMP_DIR"',
dump_file_name => 'Strm_100407_1172767271909.dmp',
capture_name => '"STM_CAP"',
propagation_name => '"STM_PROP"',
apply_name => '"STM_APLY"',
source_queue_name => '"STREAMADMIN"."STM_Q"',
destination_queue_name => '"STREAMADMIN"."STM_Q"',
log_file => 'Strm_100407_1172767271909.log',
bi_directional => true);
END;
/
OK, I don't know why this didn't work before, but here are the results.
select segment_name, bytes from dba_segments where owner='SYSTEM' and segment_name like 'LOGMNR%' ORDER BY bytes desc
SEGMENT_NAME BYTES
LOGMNR_RESTART_CKPT$ 14680064
LOGMNR_OBJ$ 5242880
LOGMNR_COL$ 4194304
LOGMNR_I2COL$ 3145728
LOGMNR_I1OBJ$ 2097152
LOGMNR_I1COL$ 2097152
LOGMNR_RESTART_CKPT$_PK 2097152
LOGMNR_ATTRIBUTE$ 655360
LOGMNRC_GTCS 262144
LOGMNR_I1CCOL$ 262144
LOGMNR_CCOL$ 262144
LOGMNR_CDEF$ 262144
LOGMNR_USER$ 65536
160 rows selected
select segment_name, bytes from dba_extents where segment_name=upper( 'logmnr_restart_ckpt$' );
SEGMENT_NAME BYTES
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 65536
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
LOGMNR_RESTART_CKPT$ 1048576
29 rows selected
In memory replication problems when I bring up a new server
I've got in memory replication set up for 6.1. It works fine if I have 2 servers
up and 1 goes down.
However, if I have 1 server up and a bring a second server up, the sessions blow
out.
E.g. I've got server A and server B.
Both are up, both have sessions. As new sessions come in, they are replicated over
to the other server.
now I bring server B down. All sessions on B fail over to A.
so far so good.
However when I bring server A back up some of the sessions fail as soon as the server
is back up.
Is this a configuration issue, is this a know problem?
This worked fine in weblogic 5.1. In 5.1 when I brought an instance back up, everything
worked fine.
It turns out the problem was caused by using an old version of the Apache Plugin.
This problem occurred while using the 5.1 apache plugin with WLS 6.1.
Once we realized we were using the wrong plugin and switched to the 6.1 plugin, the
problem went away.
-
JNDI replication problems in WebLogic cluster.
I need to implement a replicable property in the cluster: each server could
update it and new value should be available for all cluster. I tried to bind
this property to JNDI and got several problems:
1) On each rebinding I got error messages:
<Nov 12, 2001 8:30:08 PM PST> <Error> <Cluster> <Conflict start: You tried
to bind an object under the name example.TestName in the jndi tree. The
object you have bound java.util.Date from 10.1.8.114 is non clusterable and
you have tried to bind more than once from two or more servers. Such objects
can only deployed from one server.>
<Nov 12, 2001 8:30:18 PM PST> <Error> <Cluster> <Conflict Resolved:
example.TestName for the object java.util.Date from 10.1.9.250 under the
bind name example.TestName in the jndi tree.>
As I understand it, this is the designed behavior for non-RMI objects. Am I
correct?
2) Replication still happens, but I get random results: I bind an object on
server 1 and get it from server 2, and they are not always the same, even with
a delay of several seconds between operations (tested with 0-10 sec.), and while
lookup returns the old version after 10 sec., a second attempt without delay
can return the correct result.
Any ideas how to ensure correct replication? I need lookup to return the
object I bound on a different server.
3) Even when lookup returns correct result, Admin Console in
Server->Monitoring-> JNDI Tree shows an error for bound object:
Exception
javax.naming.NameNotFoundException: Unable to resolve example. Resolved: ''
Unresolved:'example' ; remaining name ''
My configuration: admin server + 3 managed servers in a cluster.
JNDI bind and lookup is done from stateless session bean. Session is
clusterable and deployed to all servers in the cluster. The client invokes session
methods through the t3 protocol directly on the servers.
Thank you for any help.
It is not a good idea to use JNDI to replicate application data. Did you consider
using JMS for this? Or JavaGroups (http://sourceforge.net/projects/javagroups/) -
there is an example of distibuted hashtable in examples.
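As a rough illustration of that suggestion, here is a minimal in-process sketch of the topic-style publish/subscribe pattern: each server subscribes to a shared bus and applies every published update to its own local copy, instead of all servers fighting over one JNDI binding. The class and method names here are purely illustrative; in a real cluster the bus would be a JMS Topic (or a JavaGroups channel), not this in-memory stand-in.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch only: an in-memory stand-in for a JMS Topic.
public class PropertyBusDemo {

    /** A per-server local copy of the replicated property. */
    static class ReplicatedProperty {
        private volatile String value;
        String get() { return value; }
        void apply(String v) { value = v; }
    }

    /** Stands in for a JMS Topic: publish fans the update out to every subscriber. */
    static class PropertyBus {
        private final List<ReplicatedProperty> subscribers = new CopyOnWriteArrayList<>();
        void subscribe(ReplicatedProperty p) { subscribers.add(p); }
        void publish(String v) {
            for (ReplicatedProperty p : subscribers) p.apply(v);
        }
    }

    public static void main(String[] args) {
        PropertyBus bus = new PropertyBus();
        ReplicatedProperty server1 = new ReplicatedProperty();
        ReplicatedProperty server2 = new ReplicatedProperty();
        bus.subscribe(server1);
        bus.subscribe(server2);
        bus.publish("new-value");          // any server publishes an update
        System.out.println(server2.get()); // prints "new-value" - every server sees it
    }
}
```

With a real JMS Topic the update survives the publisher, arrives asynchronously, and does not trigger the cluster-wide bind conflicts seen in the error messages above.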
--
Dimitri -
iPlanet Webserver 4.1 JSP problem
Hi, I am using iPlanet Webserver 4.1. Whenever I change my JSP file I have to restart my webserver instance, otherwise I get an error. Is there any way I can avoid restarting the webserver?
this is the script.
# rotate yesterday's access log, then gracefully restart the instance
# (`uname -n` expands to the hostname embedded in the instance directory name)
mv /logs/https-`uname -n`-443/access /logs/https-`uname -n`-443/access.yesterday
compress /logs/https-`uname -n`-443/access.yesterday
/opt/netscape/server4/https-`uname -n`-443/restart
The moving and compressing work properly, but the restart command doesn't actually kill and restart the httpd process, and doesn't create the access or error log.
The restart is a graceful restart that does not ask for a password. The same setup had been working fine for a long time; it suddenly started to give this problem.
The webserver is working fine (taking requests, etc.) except that the log files are not being written.
I did a truss, and in the data collected (above) I see that it is giving error #91 ERESTART many times.
There is another server with the same settings which is working fine; the only difference is that its patch level is lower. -
Apache + 2 Tomcats session replication problem.
Greetings everyone.
Before stating the problem, let me explain how my environment is set.
I have two machines. One (PC1) running Apache (HTTP server 2.0.58)
and one instance of Tomcat (5.0.28) and another machine (PC2) with
another instance of Tomcat(5.0.28).
The Apache server
It is configured to handle static content and to redirect dynamic content to a
Tomcat instance through the AJP 1.3 connector.
This is done through mod_jk and the workers.properties file.
The workers.properties file is configured with sticky_session = True,
so requests for a SESSION_ID go to the same Tomcat it was first assigned to.
It is also configured with sticky_session_force = True, so if the Tomcat the
SESSION_ID was assigned to is not available, the server answers with a 500 error.
The Tomcat servers
Both have only the AJP 1.3 connector enabled.
Both have the Cluster tag in server.xml uncommented
and the useDirtyFlag set to false, so as not to allow session
replication between the Tomcats.
The workers.properties file
workers.apache_log=C:/Apache2/logs
workers.tomcat_home=C:/Tomcat5
workers.java_home=C:/j2sdk1.4.2_13
ps=/
#Defining workers -----------------------------
worker.list=balancer,jkstatus
#Defining balancer ---------------------------
worker.balancer.type=lb
worker.balancer.balance_workers=tel1,tel2
worker.balancer.sticky_session=True
worker.balancer.sticky_session_force=True
worker.balancer.method=B
worker.balancer.lock=O
#Defining status -----------------------------
worker.jkstatus.type=status
worker.jkstatus.css=/jk_status/StatusCSS.css
#Workers properties ---------------------------
worker.tel1.type=ajp13
worker.tel1.port=8009
worker.tel1.host=127.0.0.1
worker.tel1.lbfactor=1
worker.tel1.socket_keepalive=False
worker.tel1.socket_timeout=30
worker.tel1.retries=20
worker.tel1.connection_pool_timeout = 20
#worker.tel1.redirect=tel2
worker.tel1.disabled=False
worker.tel2.type=ajp13
worker.tel2.port=8009
worker.tel2.host=199.147.52.181
worker.tel2.lbfactor=1
worker.tel2.socket_keepalive=False
worker.tel2.socket_timeout=30
worker.tel2.retries=20
worker.tel2.connection_pool_timeout = 20
#worker.tel2.redirect=tel1
worker.tel2.disabled=False
THE PROBLEM
I open a browser on the jk-status page to see how the Tomcat instances are
doing, and both are working fine: Stat -> OK. As the load-balancing factor is 1
on both Tomcats, sessions are distributed evenly, alternating between them.
While this browser stays open to keep an eye on the status, I open a new
browser (B1) to connect to my web application. Apache answers correctly and
gives me a SESSION_ID for Tomcat instance 1 [both instances are OK]. If I do a
simple refresh, my SESSION_ID is still the same, so I am assigned to Tomcat
instance 1, but this time I get an ERROR 503 - Service unavailable, while
according to the status page both Tomcat instances are still OK; neither is
down. And it keeps throwing this error for as many refreshes as I do.
Now I open a new browser (B2) and repeat the same process as before. As
expected, Apache now gives me a SESSION_ID for Tomcat instance 2. Repeating
the same refreshing process, the error is thrown again, but on the jk-status
page both instances are still fine.
Without closing these windows, I try another refresh in B1, and even though
jk-status says both Tomcat instances are OK, the error is still thrown. I open
a third browser (B3), and Apache again correctly gives me a new SESSION_ID for
Tomcat instance 1 and answers correctly on the first call. But once again, if
I repeat the refreshing process, the error is thrown.
Note: using a different screen resolution to always keep an eye on the
instances' status, with a refresh rate of 1 second for the status page, both
servers were always OK.
So the main problem is that somehow, when a request is routed back to the same
Tomcat its session is stuck to, Apache gets confused and thinks that Tomcat is
not available, even though jk-status reports it as OK.
I've been trying different configurations of both Apache and Tomcat, but there
must be something missing, since I can't get it to work correctly.
Thanks in advance for all your helpful comments.
- @alphazygma Whew... that was quite an answer... it's definitely going to help him a lot. Yeah, any n00b by now should know how to use Google, but that's not the point of these forums; here we are to help each other. And whether you like it or not, many of us deploy applications to Tomcat and stumble on this. So don't try to be cool posting that kind of answer, like "google this" or "google that"; if you don't have an answer, please don't comment, or you will appear to be more noobish than you apparently are.
Well enough talking.
I found the following useful: (it comes in the server.xml of the tomcat configuration)
<!-- You should set jvmRoute to support load-balancing via JK/JK2 ie :
<Engine name="Standalone" defaultHost="localhost" debug="0" jvmRoute="jvm1">
-->
Enabling that entry on both machines should be enough.
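To sketch why the missing jvmRoute breaks stickiness (under the assumption, per mod_jk's documented behavior, that each jvmRoute must match a worker name, so in the configuration above it would be tel1 on one Tomcat and tel2 on the other): Tomcat appends ".<jvmRoute>" to the session id, and mod_jk picks the target worker from that suffix. The helper below only mimics that suffix parsing; it is not mod_jk code.

```java
// Illustrative sketch of mod_jk's sticky-session route extraction:
// session ids look like "<id>.<jvmRoute>", and the text after the last
// dot selects the worker. With no jvmRoute there is no suffix, so the
// balancer cannot find a matching worker, and sticky_session_force=True
// makes it fail the request instead of retrying another worker.
public class SessionRoute {

    /** Returns the route suffix of a session id, or null if there is none. */
    static String routeOf(String sessionId) {
        int dot = sessionId.lastIndexOf('.');
        return (dot >= 0 && dot < sessionId.length() - 1)
                ? sessionId.substring(dot + 1)
                : null;
    }

    public static void main(String[] args) {
        System.out.println(routeOf("A1B2C3D4.tel1")); // prints "tel1" -> worker tel1
        System.out.println(routeOf("A1B2C3D4"));      // prints "null" -> no sticky route
    }
}
```

This is why both symptoms fit: the first request (no session yet) load-balances fine, and every subsequent request carries an un-routable session id and gets the 503.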
Apparently the problem is not with Apache; it is with Tomcat, since it can't retain the session Apache gives it.
More information is in the Tomcat help at:
http://tomcat.apache.org/tomcat-5.0-doc/balancer-howto.html#Using%20Apache%202%20with%20mod_proxy%20and%20mod_rewrite -
CRM-BW replication problem.
Hi friends,
I have edited an existing standard CRM extractor with all the required steps in the CRM Dev system.
Later I activated it in RSA6 in the CRM Dev system, pressed the Data Source Transport button, and saved it in the CRM Dev system.
Later, in the BW Dev system, I created a new InfoSource relevant to this extractor, but it shows the old CRM extractor without any of the changes.
Please let me know how we can replicate all the new changes to the BW Dev system.
I have also performed the replication in BW using RSA1 and also using a program, but to no avail.
Am I missing any step after RSA6 in CRM?
Please suggest.
Thanks.
Hi,
Symptom
Inconsistency in the assignment of the info-object for the Territory extraction in One order datasources
If the one order InfoSources are activated, the DataSource fields TERR_GUID and PATH_GUID are mapped to the same InfoObject 0CRM_TR. Therefore, activation of the InfoSource causes an error.
Other terms
Sales Analytics, Service Analytics, One order data Extraction, Territory extraction
Reason and Prerequisites
The PATH_GUID field of the DataSource is not marked as hidden in BW.
Solution
Please note that this correction is available from the following CRM releases:
CRM 4.0 SP08
The correction is relevant for the following datasources
0CRM_LEAD_H
0CRM_LEAD_I
0CRM_OPPT_ATTR
0CRM_OPPT_H
0CRM_OPPT_I
0CRM_QUOTATION_I
0CRM_QUOTA_ORDER_I
0CRM_SALES_CONTR_I
0CRM_SALES_ORDER_I
0CRM_SRV_CONFIRM_H
0CRM_SRV_CONFIRM_I
0CRM_SRV_CONTRACT_H
0CRM_SRV_PROCESS_H
0CRM_SRV_PROCESS_I
In order to have this functionality available, please re-activate the datasource in the OLTP (CRM system) and replicate the datasource in the BW systems.
The correction is the following:
The datasource had the fields Territory GUID and Path GUID. The Path GUID should be marked as 'Field hidden in BW'.
In other words, in the datasource maintenance in transaction BWA1, on the extract structure tab, the field PATH_GUID should have the value 'A' for Selection.
If you still have any problems, do let me know.
regards,
ravi -
CRM 5.0 & IS-U EHP4 Replication Problem
Hi all,
in the context of an upgrade of our IS-U (ECC 6.0) to EHP4, we experienced an issue during the replication of contracts (object SI_CONTRACT) from IS-U into CRM (CRM 5.0). As we didn't find any trace of a similar issue anywhere (Google, SAP Notes), I'll briefly describe the issue as well as our solution below.
After the upgrade, the contract BDocs sent from IS-U to CRM (e.g. after performing a move-out for a BP) would appear to be processed successfully (green icon in SMW01). However, when analysing the BDocs in detail, we noticed that they contained no data. Strangely enough, request loads for contracts from CRM still worked seamlessly. After some debugging, we identified that the issue was that the BDocs (more precisely, the BAPIMTCS structure) sent from IS-U contained structure names that were not expected by the mapping module in CRM. The underlying reason was that in table TBE31 the entry for the event IBSSICON had been changed from EECRM_CONTRACT_COLLECT_DATA to ECRM_CONTRACT_COLLECT_DATA.
This table is read in the function module EECRM_DELTA_DOWNLOAD_IBSSICONT. The entry for the event IBSSICON determines which function modules are used to collect the contract data in IS-U and also which function modules are used to perform the mapping to the BAPIMTCS structures.
Changing the entry back to its initial contents solved our problem. After the change, the BDocs were filled and processed correctly. This fix seems to be necessary for all CRM versions < 5.2.
Christian
Edited by: Christian Drumm on Sep 29, 2010 9:00 AM
Included information on CRM 5.2
Hi Gobi,
Thank you for advice. But:
I created the fields not by using the AET - I used the documentation that Nicolas suggested to me above.
I enhanced the tables and structures mentioned there with my z-fields on both sides (CRM and ERP).
I also looked at CRMC_BUT_CALL_FU for CRM Inbound BUAG_MAIN - the corresponding standard FMs are marked for call.
Any ideas?
Thanks in advance.
BR,
Evgenia