Problem in propagation
Hi everyone,
Thanks in advance for reading this. :)
We ran into this problem when trying to obtain the propagation inventory. We are working with WLP 10.3.0.0 and Oracle DB 10.2. The error is as follows:
The propagation servlet returned a failure response: The [Download] operation is halting due to the following failure: The security policy for resource [PortalSystemDelegator] with capability [delegate_further_manage] is missing in LDAP. If you have reset LDAP or configured a new WLP domain to use a pre-existing WLP RDBMS, then you must reset your RDBMS.
The propagation servlet returned the following log information found in [C:\Users\FRANCI~1\AppData\Local\Temp\onlineDownload__D4_H15_M51_S39.log]:
INFO (Nov 4, 2011 3:53:53 PM CLST): Verbose logging has been disabled on the server.
INFO (Nov 4, 2011 3:53:53 PM CLST): The propagation servlet is starting the [Download] operation.
INFO (Nov 4, 2011 3:53:53 PM CLST): The modifier [allowMaintenanceModeDisabled] with a value of [true] will be used for this operation.
INFO (Nov 4, 2011 3:53:53 PM CLST): Validating that current user is in the Admin role...SUCCESS
INFO (Nov 4, 2011 3:53:53 PM CLST): Validating that Maintenance Mode is enabled...SUCCESS
WARNING (Nov 4, 2011 3:53:53 PM CLST): The temporary directory on the server used by propagation is [/bea/user_projects/domains/domain_mov_9/servers/AdminServer/tmp/_WL_user/PortalMovistarEAR/1ovfo2/public] with a length of [104] bytes. It is recommended that you shorten this path to avoid path length related failures. See the propagation documentation on how to specify the inventoryWorkingFolder context-param for the propagation servlet.
ERROR (Nov 4, 2011 3:53:55 PM CLST): Validating that LDAP and RDBMS security resources are in sync...FAILURE
ERROR (Nov 4, 2011 3:53:55 PM CLST): The security policy for resource [PortalSystemDelegator] with capability [delegate_further_manage] is missing in LDAP. If you have reset LDAP or configured a new WLP domain to use a pre-existing WLP RDBMS, then you must reset your RDBMS. Otherwise, insure all available patches have been applied to your installation.
ERROR (Nov 4, 2011 3:53:55 PM CLST): The [Download] operation is halting due to the following failure: The security policy for resource [PortalSystemDelegator] with capability [delegate_further_manage] is missing in LDAP. If you have reset LDAP or configured a new WLP domain to use a pre-existing WLP RDBMS, then you must reset your RDBMS. Otherwise, insure all available patches have been applied to your installation.

The situation is that there is a Portal already working (a given). We created this new domain in our development environment and pointed all datasources to a pre-existing database (in fact, this database is a loaded dump from production).
Basically, our conclusion is that the role is in the DB (obviously, since it came within the dump from production's DB) but it is not in the LDAP.
ANY help will do. Really.
Thanks in advance,
Andres
Thank you very very much to both of you.
We followed the procedure you pointed out and it proved to be the solution to our troubles. We just didn't follow the last 2 steps since we didn't have any crucial data in the LDAP. We were really stuck on this and now we can move on.
Again, thank you very much.
Whenever you stop by Chile, we will invite you to a party / BBQ / whatever you like, to thank you for the help.
Best regards,
Andrés
Similar Messages
-
I had successfully configured Streams in my environment. Initially, whatever DML operations I performed, such as inserts, replicated to my target database. Then I deleted one row from my source database and it was not deleted from my target database, and since then none of my DML operations (insert, update, delete) replicate to the target database. Any advice would be greatly appreciated.
At my source
============
SQL> select * from dept;
DEPTNO  DNAME       LOC       EMPNO
10      ACCOUNTING  NEW YORK
20      RESEARCH    DALLAS
30      SALES       CHICAGO
40      OPERATIONS  BOSTON
50      xxx         lon
60      yyy         hyd
75      BT          LON
80      BT1         Ban
At my target
============
SQL> select * from dept;
DEPTNO  DNAME       LOC       EMPNO
10      ACCOUNTING  NEW YORK
20      RESEARCH    DALLAS
30      SALES       CHICAGO
40      OPERATIONS  BOSTON
50      xxx         lon
60      yyy         hyd
70      BT          LON
80      BT1         BAN
Mohan

Hi
Thanks for the reply. I want any DML or DDL operations performed on either side to propagate. Please let me know where I made a mistake.
Instance Setup
To begin, the following parameters should be set in the spfiles of the participating databases:
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=1;
ALTER SYSTEM SET AQ_TM_PROCESSES=1;
ALTER SYSTEM SET GLOBAL_NAMES=TRUE;
ALTER SYSTEM SET COMPATIBLE='9.2.0' SCOPE=SPFILE;
ALTER SYSTEM SET LOG_PARALLELISM=1 SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
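After the restart it is worth confirming that the parameters actually took effect; a minimal sketch (v$parameter stores these names in lowercase):

```sql
-- Verify the Streams-related instance parameters after restart
SELECT name, value
FROM   v$parameter
WHERE  name IN ('job_queue_processes', 'aq_tm_processes',
                'global_names', 'compatible');
```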
Stream Administrator Setup
=============================
Next we create a stream administrator, a stream queue table and a database link on the source database:
CONN sys/password@DBA1 AS SYSDBA
The below steps had done at source and target except db link
=============================================
CREATE USER strmadmin IDENTIFIED BY strmadminpw
DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO strmadmin;
GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_FLASHBACK TO strmadmin;
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee      => 'strmadmin',
    grant_option => FALSE);
END;
/
BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'strmadmin',
    grant_option => FALSE);
END;
/
CONNECT strmadmin/strmadminpw@DBA1
EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
CREATE DATABASE LINK dba2 CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'DBA2';
GRANT ALL ON scott.dept TO strmadmin;
LogMiner Tablespace Setup
Next we create a new tablespace to hold the LogMiner tables on the source database:
CONN sys/password@DBA1 AS SYSDBA
CREATE TABLESPACE logmnr_ts DATAFILE '/u01/app/oracle/oradata/DBA1/logmnr01.dbf'
SIZE 25 M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnr_ts');
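To confirm the LogMiner dictionary tables actually moved, something like this should work (a sketch; the LOGMNR% tables are owned by SYSTEM by default):

```sql
-- Check which tablespace now holds the LogMiner tables
SELECT table_name, tablespace_name
FROM   dba_tables
WHERE  owner = 'SYSTEM'
AND    table_name LIKE 'LOGMNR%';
```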
Supplemental Logging
CONN sys/password@DBA1 AS SYSDBA
ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP log_group_dept_pk (deptno) ALWAYS;
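The supplemental log group can be verified from the data dictionary; a minimal sketch:

```sql
-- Confirm the supplemental log group on scott.dept
SELECT log_group_name, table_name, log_group_type, always
FROM   dba_log_groups
WHERE  owner = 'SCOTT';
```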
Configure the propagation process on DBA1:
========================================
CONNECT strmadmin/strmadminpw@DBA1
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'scott.dept',
streams_name => 'dba1_to_dba2',
source_queue_name => 'strmadmin.streams_queue',
destination_queue_name => 'strmadmin.streams_queue@dba2',
include_dml => true,
include_ddl => true,
source_database => 'smtp');
END;
/
Configure the capture process on DBA1:
=====================================
CONNECT strmadmin/strmadminpw@DBA1
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'scott.dept',
streams_type => 'capture',
streams_name => 'capture_simp',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true);
END;
/
Configure Instantiation SCN
exp USERID=scott/tiger@dba1 TABLES=DEPT FILE=D:\tab1.dmp
GRANTS=Y ROWS=Y LOG=exportTables.log OBJECT_CONSISTENT=Y INDEXES=Y
imp USERID=scott/tiger@dba2 FULL=Y CONSTRAINTS=Y FILE=d:\tab1.dmp
IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importTables.log STREAMS_CONFIGURATION=Y STREAMS_INSTANTIATION=Y
Alternatively the instantiation SCN can be set using the DBMS_APPLY_ADM package:
CONNECT strmadmin/strmadminpw@dba1
DECLARE
v_scn NUMBER;
BEGIN
v_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@DBA2(
source_object_name => 'scott.dept',
source_database_name => 'dba1',
instantiation_scn => v_scn);
END;
/
Configure the apply process on the destination database (DBA2):
CONNECT strmadmin/strmadminpw@DBA2
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'scott.dept',
streams_type => 'apply',
streams_name => 'apply_simp',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
source_database => 'smtp');
END;
/
Start the apply process on destination database (DBA2) and prevent errors stopping the process:
CONNECT strmadmin/strmadminpw@DBA2
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'apply_simp',
parameter => 'disable_on_error',
value => 'n');
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_simp');
END;
/
Start the capture process on the source database (DBA1):
CONNECT strmadmin/strmadminpw@DBA1
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_simp');
END;
/
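When replication stops silently like this, a usual first step is to check the state of each Streams component and the apply error queue. A hedged sketch, run as strmadmin or a DBA (exact column names vary slightly between versions); note also that the source_database parameter in the rule calls is expected to match the source database's global name, which the last query shows:

```sql
-- Propagation state and last error, if any
SELECT propagation_name, status, error_message, error_date
FROM   dba_propagation;

-- Capture and apply process state
SELECT capture_name, status FROM dba_capture;
SELECT apply_name,   status FROM dba_apply;

-- Transactions stuck in the apply error queue on the destination
SELECT apply_name, source_database, local_transaction_id, error_message
FROM   dba_apply_error;

-- The value that source_database in the rule calls should match
SELECT * FROM global_name;
```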
Regards
Mohan -
Hi
i am trying to propagate messages from a queue in an 8i database to a queue in a 9i database. The problem is that I can see the messages being enqueued in the 8i database, but the 9i queue is always empty. Is there a problem with the propagation? How can I check whether there are errors during propagation?

I've done it with 8.1.7.4 and 9.2.0.6 in both directions with event scheduling (in ora8i I implemented it myself, since it is a 9i feature), and it has been working for half a year in production at a customer's site!
So, what have you tried in detail? -
Oracle Advanced Queuing - Propagation problem - 11g
Hi,
I have a problem when propagating messages between queues. When the message is propagated, it stays on the source queue in the READY state.
I have created two queues on 11g with a propagation rule that any message from queue A are sent to queue B. My problem is that the message from the source queue stays in the source queue even after propagation, which isn't what I was expecting. The problem doesn't occur if the queues are on a different database. This problem only happens if the queues are on the same database.
the script I use is this:
For USERB (which has the destination queue)
create type EVENT_MESSAGE as object (
  eventsource VARCHAR2(30),
  eventname   VARCHAR2(255),
  eventid     NUMBER(19,0),
  message     CLOB
);
/
DECLARE
  an_agent sys.aq$_agent;
BEGIN
  -- create the publish/subscribe queue table
  dbms_aqadm.create_queue_table(
    queue_table        => 'DESTINATION_QUEUE_TABLE',
    queue_payload_type => 'EVENT_MESSAGE',
    sort_list          => 'ENQ_TIME',
    message_grouping   => DBMS_AQADM.NONE,
    multiple_consumers => true);

  -- create the queue
  dbms_aqadm.create_queue(
    queue_name  => 'DESTINATION',
    queue_table => 'DESTINATION_QUEUE_TABLE',
    queue_type  => DBMS_AQADM.NORMAL_QUEUE,
    max_retries => 5);

  dbms_aqadm.create_aq_agent(agent_name => 'DEQUEUE_AGENT');
  an_agent := sys.aq$_agent('DEQUEUE_AGENT', null, null);
  dbms_aqadm.enable_db_access(
    agent_name  => 'DEQUEUE_AGENT',
    db_username => 'USERB');

  dbms_aqadm.add_subscriber(
    queue_name     => 'DESTINATION',
    subscriber     => an_agent,
    queue_to_queue => FALSE,
    delivery_mode  => DBMS_AQADM.PERSISTENT);

  -- start the queues
  dbms_aqadm.start_queue('DESTINATION');
END;
/
For USERA
create type EVENT_MESSAGE as object (
  eventsource VARCHAR2(30),
  eventname   VARCHAR2(255),
  eventid     NUMBER(19,0),
  message     CLOB
);
/
BEGIN
  -- create the publish/subscribe queue table
  dbms_aqadm.create_queue_table(
    queue_table        => 'SOURCE_QUEUE_TABLE',
    queue_payload_type => 'EVENT_MESSAGE',
    sort_list          => 'ENQ_TIME',
    message_grouping   => DBMS_AQADM.NONE,
    multiple_consumers => true);

  -- create the queue
  dbms_aqadm.create_queue(
    queue_name  => 'SOURCE',
    queue_table => 'SOURCE_QUEUE_TABLE',
    queue_type  => DBMS_AQADM.NORMAL_QUEUE,
    max_retries => 5);

  -- start the queues
  dbms_aqadm.start_queue('SOURCE');

  -- create the propagation
  dbms_aqadm.add_subscriber(
    queue_name     => 'SOURCE',
    subscriber     => sys.aq$_agent('DEQUEUE_AGENT', 'USERB.DESTINATION', null),
    queue_to_queue => true);

  dbms_aqadm.schedule_propagation(
    queue_name        => 'SOURCE',
    start_time        => sysdate,
    latency           => 25,
    destination_queue => 'USERB.DESTINATION');
END;
/
When I enqueue a message to the source on USERA with this:
declare
rc binary_integer;
nq_opt dbms_aq.enqueue_options_t;
nq_pro dbms_aq.message_properties_t;
datas EVENT_MESSAGE;
msgid raw(16);
begin
nq_pro.expiration := dbms_aq.never;
nq_pro.sender_id := sys.aq$_agent('ENQUEUE_AGENT', null, null);
datas := EVENT_MESSAGE('message','eventname',1,null);
dbms_aq.enqueue('SOURCE',nq_opt,nq_pro,datas,msgid);
end;
/
The message is propagated to the destination queue, no problem, but the message state on the source queue is kept as ready. I would have expected it to be marked as processed and disappear from the queue table.
When I look at AQ$_SOURCE_QUEUE_TABLE_S I see these records:

QUEUE_NAME  NAME           ADDRESS                          PROTOCOL  SUBSCRIBER_TYPE
SOURCE      (null)         "USERB"."DESTINATION"@AQ$_LOCAL  0         1736
SOURCE      DEQUEUE_AGENT  "USERB"."DESTINATION"            0         577
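For a multi-consumer queue table, the AQ$&lt;queue_table&gt; view shows one row per message per subscriber, including the per-subscriber state. A hedged sketch to see whether the propagated copy was marked PROCESSED (processed entries normally remain visible until the retention window and the AQ background cleanup catch up):

```sql
-- Per-subscriber message states in the source queue table
SELECT msg_state, consumer_name, COUNT(*) AS cnt
FROM   aq$source_queue_table
GROUP  BY msg_state, consumer_name;
```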
Can anyone help?

I was talking about the following Oracle documents:
Oracle Database 11g: Advanced Queuing (Technical Whitepaper)
Streams Advanced Queuing: Best Practices (Technical Whitepaper)
Oracle Streams Advanced Queuing and Real Application Clusters: Scalability and Performance Guidelines (Technical Whitepaper)
They are available at.. http://www.oracle.com/technetwork/database/features/data-integration/default-159085.html -
Propagation error when trying to download inventory from server
Hi there,
Has anyone seen the following error when trying to download an inventory from the server.:
Buildfile: C:\bea10.3\user_projects\workspaces\RST\RSTPropagation\21102009\propbuild.xml
import:
BUILD FAILED
C:\bea10.3\user_projects\workspaces\RST\RSTPropagation\21102009\propbuild.xml:39: The propagation servlet returned a failure response: The [Download] operation is halting due to the following failure: null
Additional Information:
The propagation servlet returned the following log information found in [C:\DOCUME~1\myuser\LOCALS~1\Temp\onlineDownload__D21_H11_M8_S11.log]:
INFO (Oct 21, 2009 11:08:11 AM SAST): Verbose logging has been disabled on the server.
INFO (Oct 21, 2009 11:08:11 AM SAST): The propagation servlet is starting the [Download] operation.
INFO (Oct 21, 2009 11:08:11 AM SAST): The modifier [allowMaintenanceModeDisabled] with a value of [true] will be used for this operation.
INFO (Oct 21, 2009 11:08:11 AM SAST): Validating that current user is in the Admin role...SUCCESS
ERROR (Oct 21, 2009 11:08:11 AM SAST): Validating that Maintenance Mode is enabled...FAILURE
ERROR (Oct 21, 2009 11:08:11 AM SAST): Maintenance Mode has not been enabled on the server. With Maintenance Mode disabled it is possible for users to modify the application. This may cause problems for propagation.
WARNING (Oct 21, 2009 11:08:11 AM SAST): Because the modifier [allowMaintenanceModeDisabled] was enabled this validation failure will be ignored and the operation will proceed. However, users will still be able to make modifications to the application, which could lead to missing data and unexpected propagation errors.
WARNING (Oct 21, 2009 11:08:11 AM SAST): The temporary directory on the server used by propagation is [portal/bea10.3/user_projects/domains/RSTDomain/servers/wl_nstf/tmp/_WL_user/RSTEar/7v9j6d/public] with a length of [99] bytes. It is recommended that you shorten this path to avoid path length related failures. See the propagation documentation on how to specify the inventoryWorkingFolder context-param for the propagation servlet.
INFO (Oct 21, 2009 11:08:19 AM SAST): Validating that LDAP and RDBMS security resources are in sync...SUCCESS
INFO (Oct 21, 2009 11:08:19 AM SAST): Writing the inventory file to the servers file system at [{0}].
ERROR (Oct 21, 2009 11:08:23 AM SAST): The [Download] operation is halting due to the following failure: null
Total time: 14 seconds
Please let me know if you have any ideas because "The [Download] operation is halting due to the following failure: null" means nothing to me.
Please note that changing the maintenance mode makes no difference.

Please enable verbose logging on the propagation servlet
http://download.oracle.com/docs/cd/E13155_01/wlp/docs103/prodOps/propToolAdvanced.html#wp1071690
and check the logs on the server; they might give a clue. -
Hi all, I have a problem getting propagation (with transformation) to work.
First things first:
- Oracle 10.2.0.3 EE
- HP-UX 11.11
- job_queue_processes=2
- aq_tm_processes NOT explicitly set
The goal of my setup:
- have a message of one type come in
- transform it to another type and
- propagate it to another queue
To make tests easier, everything runs within one Schema with aq_administrator_role granted. I tried that (code below), but the message will not be propagated to the second queue. The message remains in the first queue in status 0.
Am I missing something?
Best regards,
Uwe
Here's the setup:
-- Outgoing Queue with user defined MyMsgType
CREATE TYPE mymsgtype AS OBJECT(subject VARCHAR2(100));
/
BEGIN
  SYS.DBMS_AQADM.CREATE_QUEUE_TABLE(
    QUEUE_TABLE        => 'AQTEST.VCQ_tab',
    QUEUE_PAYLOAD_TYPE => 'mymsgtype',
    COMPATIBLE         => '10.0',
    STORAGE_CLAUSE     => 'TABLESPACE USERS',
    SORT_LIST          => '',
    MULTIPLE_CONSUMERS => FALSE,
    MESSAGE_GROUPING   => 0,
    SECURE             => FALSE);
END;
/
BEGIN
  SYS.DBMS_AQADM.CREATE_QUEUE(
    QUEUE_NAME     => 'AQTEST.VCQueue',
    QUEUE_TABLE    => 'AQTEST.VCQ_tab',
    QUEUE_TYPE     => SYS.DBMS_AQADM.NORMAL_QUEUE,
    MAX_RETRIES    => 0,
    RETRY_DELAY    => 0,
    RETENTION_TIME => -1);
END;
/
BEGIN
  SYS.DBMS_AQADM.START_QUEUE(
    QUEUE_NAME => 'AQTEST.VCQueue',
    ENQUEUE    => TRUE,
    DEQUEUE    => TRUE);
END;
/
-- Incoming Queue with user defined MsgType
CREATE TYPE msgtype AS OBJECT(
  subject VARCHAR2(100),
  text    BLOB);
/
BEGIN
  SYS.DBMS_AQADM.CREATE_QUEUE_TABLE(
    QUEUE_TABLE        => 'AQTEST.msgtypeQ_tab',
    QUEUE_PAYLOAD_TYPE => 'aqtest.msgtype',
    COMPATIBLE         => '10.0',
    STORAGE_CLAUSE     => 'TABLESPACE USERS',
    SORT_LIST          => '',
    MULTIPLE_CONSUMERS => TRUE,
    MESSAGE_GROUPING   => 0,
    SECURE             => FALSE);
END;
/
BEGIN
  SYS.DBMS_AQADM.CREATE_QUEUE(
    QUEUE_NAME     => 'AQTEST.msgtypeQueue',
    QUEUE_TABLE    => 'AQTEST.msgtypeQ_tab',
    QUEUE_TYPE     => SYS.DBMS_AQADM.NORMAL_QUEUE,
    MAX_RETRIES    => 0,
    RETRY_DELAY    => 0,
    RETENTION_TIME => 0);
END;
/
BEGIN
  SYS.DBMS_AQADM.START_QUEUE(
    QUEUE_NAME => 'AQTEST.msgtypeQueue',
    ENQUEUE    => TRUE,
    DEQUEUE    => TRUE);
END;
/
--Transformation
CREATE OR REPLACE FUNCTION my2msg_t(in_mymsg msgtype)
RETURN mymsgtype AS
new_msg mymsgtype;
BEGIN
new_msg := mymsgtype(in_mymsg.subject);
RETURN new_msg;
END my2msg_t;
/
BEGIN
DBMS_TRANSFORM.CREATE_TRANSFORMATION(
schema => 'AQTEST',
name => 'MY2MSG_XFM',
from_schema => 'AQTEST',
from_type => 'MSGTYPE',
to_schema => 'AQTEST',
to_type => 'MYMSGTYPE',
transformation => 'my2msg_t(source.user_data)');
END;
/
--Propagation
/* Add subscriber for propagation with transformation. */
-- requires a multiple-consumer queue.
DECLARE
subscriber sys.aq$_agent;
BEGIN
subscriber := sys.aq$_agent('msg2mymsg','AQTEST.VCQueue',null);
DBMS_AQADM.ADD_SUBSCRIBER(
queue_name => 'AQTEST.msgtypeQueue',
subscriber => subscriber,
transformation => 'AQTEST.MY2MSG_XFM');
END;
/
BEGIN
  dbms_aqadm.schedule_propagation(
    queue_name => 'AQTEST.msgtypeQueue',
    latency    => 0);
END;
/
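If the message never leaves the first queue, the propagation schedule's own error columns are worth a look before anything else; a hedged sketch against DBA_QUEUE_SCHEDULES:

```sql
-- Inspect the propagation schedule for errors
SELECT qname, destination, schedule_disabled, failures, last_error_msg
FROM   dba_queue_schedules
WHERE  qname = 'MSGTYPEQUEUE';
```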
Tests
-- Session A (exits nicely):
set serveroutput on size 9999
DECLARE
v_msg msgtype;
v_msgid RAW(16);
v_options DBMS_AQ.enqueue_options_t;
v_props DBMS_AQ.message_properties_t;
v_recipient DBMS_AQ.aq$_recipient_list_t;
BEGIN
v_msg := msgtype( 'Test_1', empty_blob );
v_recipient(1) := SYS.aq$_agent ('dummy', null, null);
v_props.recipient_list := v_recipient;
DBMS_AQ.ENQUEUE( 'msgtypeQueue', v_options, v_props, v_msg, v_msgid );
COMMIT;
dbms_output.put_line( v_msgid );
END;
/
-- Session B (waits forever):
set serveroutput on size 9999
DECLARE
v_msg mymsgtype;
v_msgid RAW(16);
v_options DBMS_AQ.dequeue_options_t;
v_props DBMS_AQ.message_properties_t;
v_recipient DBMS_AQ.aq$_recipient_list_t;
BEGIN
v_options.consumer_name := 'dummy';
DBMS_AQ.DEQUEUE('VCQueue', v_options, v_props, v_msg, v_msgid);
COMMIT;
dbms_output.put_line( v_msgid );
dbms_output.put_line( v_msg.subject );
END;
/

Thanks for your answer, Daniel,
Dequeue on the "msgtypequeue" works fine. By the way, the message didn't expire; the SQL output was just hardly readable - this should be more readable now:
SQL> select qs.name, v.*
2 from v$aq v, all_queues qs
3 where waiting+ready+expired > 0
4 and v.qid = qs.qid;
NAME          QID    WAITING  READY  EXPIRED  TOTAL_WAIT  AVERAGE_WAIT
MSGTYPEQUEUE  44946  0        1      0        975         975

I also added the schema name for the transformation function:
transformation => 'AQTEST.my2msg_t(source.user_data)');
...That didn't help either. :-(
Best regards,
Uwe -
Hi,
I have a web application that uses servlets, session beans and entity beans,
each belonging to a particular layer of the app. The web application handles
the validation. However I am experiencing problems in propagating the user
id through the layers, since every method of the servlets and the session
beans knows exactly the identity of the user, BUT when those resources call
a method on the entity bean, the identity is lost (and defaults to guest),
i.e. there is no identity propagation between the session and the entity
bean layers.
I am using weblogic server 5.1 SP8
Any help would be highly appreciated
Eduardo Correia

Hi Deepak, thanks for your response. I seem to have the same opinion.
1) Servlet on 'domain 1' calling EJB on 'domain 2'
2) EJB on 'domain 1' calling EJB on 'domain 2'
5) MDB (configure to run as a user) on 'domain 2' trying to access message on secured JMS on 'domain 1'
The identity propagation in the above scenarios is automatic within the same domain. However, if 2 domains are involved, trust needs to be enabled.
3) Servlet on 'domain 1' calling Web Service(WSRM) on 'domain 2'
4) EJB on 'domain 1' calling Web Service(WSRM) on 'domain 2'
The identity propagation in the above scenarios requires setting up a credential mapper on the source and an identity asserter on the destination. Can somebody confirm this?
What I am also trying to understand is: if my 2 applications (EARs) reside on the same domain, can an EJB belonging to one application call an EJB from the second application without any special settings? -
How to propagate JSP-Container-Login and EJB-Lookup?
I think I have a very common problem which must have been solved by multitudes of developers, but still I can't find sufficient info on how to solve it. Here is my problem:
- My App consists of 2 different EARs, one Web-EAR and one EJB-EAR
- The Webapp uses digest authentication through web.xml security-constraint
- Currently both EARs are using xml-based security-provider (jazn)
- Any user has to log in to the webapp (this works)
- the webapp delegates business-logic to EJB3 stateless SessionBeans
- as long as I hardcode principal and password on the creation of the InitialContext, the authentication on the EJB-container works also fine
- what I need is a propagation of the logged in webapp user to the EJB-container
- I switched on subject-propagation as described in OC4J security guide chapter 18
The problem: the propagation doesn't seem to work as expected. I still have to use (hardcoded) user/password credentials upon InitialContext creation.
- How can I make sure that subject propagation is switched on?
- How do I have to instantiate the InitialContext in order to use propagation?
This is what I do now:
Properties p = new Properties();
p.put(Context.INITIAL_CONTEXT_FACTORY, "oracle.j2ee.rmi.RMIInitialContextFactory");
p.put(Context.PROVIDER_URL, "ormi://localhost:23791/EJB-EAR");
p.put(Context.SECURITY_PRINCIPAL, "myuser");
p.put(Context.SECURITY_CREDENTIALS, "mypassword");
Context ctx = new InitialContext(p);
When logging into the web container, the password of the logged-in user is not accessible anymore. Because of that I thought automatic subject propagation should solve my problem. Did I misunderstand the concept of subject propagation (using ORMI)?

So far I have achieved the following, but my problem is not really solved:
As long as I use EJBs within the same EAR as my webapp, everything is fine.
No need to provide credentials when instantiating the InitialContext; subject propagation is not needed either.
The moment I split the EJB and webapp into separate EARs on the same OC4J instance, I have to use the RMIInitialContextFactory to get access to the EJBs at all. Subject propagation is evidently on, because the call Subject.getSubject(AccessController.getContext()) does not deliver null!
So the remaining question is: how do I initiate the subject propagation over RMI? Is there a special name under which I have to put my subject? Do I have to execute the actual EJB method call via Subject.doAs and thus provide a wrapper for my EJB as a ProtectedObject?
Anybody? -
Volsnap Event ID: 25 and Loss of Previous Backups
I have a Windows Small Business Server 2008 machine being backed up using the Windows Server Backup utility. The backups are being made to two external USB attached 1TB drives. Each drive is rotated out weekly, meaning that one drive is attached one week, then the other drive is swapped in the following week, ad infinitum for the best part of a year. Two backups were being made each day to the attached drive, one at 12pm and one at 9pm.
About two weeks ago it appears that one of the two drives reached capacity and began shedding the oldest backups, as confirmed by the VOLSNAP Event ID: 33 entries in the System Event log. On the second day, however, ALL shadow copies on the attached backup drive (volume) were deleted, as confirmed by a single VOLSNAP Event ID: 25. (See below for both Event ID message contents.) Unfortunately, neither event was noticed until the drive was swapped out with the other drive, and somehow the problem was propagated to the second drive, resulting in the deletion of all shadow copies on the second drive, too.
The end result... BOTH drives lost ALL previous backups leaving only the most recent full backup and no clue as to why it happened or how to prevent it from happening again. Virtually no older backup file versions could be recovered.
I was under the impression that, by design, the WSB utility would only start deleting the oldest backups (on a full drive) in order to make room for the newer backups. Granted, had I been paying attention, I would have swapped out the full drives before they actually got full, so that NO data was deleted.
If I'm to continue using the Windows Server Backup utility, I need to know what happened and how to absolutely prevent it from happening again. I never expected to lose BOTH backups at the same time, and due to budget constraints, two rotating backup drives seemed a logical solution.
Here is text from the two VOLSNAP Event ID messages found in the System Log:
Event ID: 25
Source: volsnap
Level: Error
Description: The shadow copies of volume \\?\Volume{93896cfc-d904-11dd-8cb5-001ec9ef572e} were deleted because the shadow copy storage could not grow in time. Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied.
Event ID: 33
Source: volsnap
Level: Information
Description: The oldest shadow copy of volume \\?\Volume{93896cfc-d904-11dd-8cb5-001ec9ef572e} was deleted to keep disk space usage for shadow copies of volume \\?\Volume{93896cfc-d904-11dd-8cb5-001ec9ef572e} below the user defined limit.
NOTE: Though I'm not sure if this has any bearing on the situation, the server is configured to make two snapshots a day of two volumes on the server, however, when I looked at the Shadow Copy settings I couldn't quite be sure if the VSS wasn't somehow also making snapshots on the two external drives, even though I haven't specified VSS to do so. None show up on the two replacement drives I'm now using, so I'd say not.
Information on how to prevent this from happening again would greatly be appreciated. (I'll be making sure no backup drive gets even close to being full from now on, that's for sure!)
Thanks,
Joseph.
This question was originally asked over 2 years ago. Is there any answer that Microsoft would be willing to provide for those of us that are struggling to find the answers? -
I am setting up streams between Enterprise 11.2.0.3 and Standard 11.2.0.3, using the document Oracle 2Day + Data Replication.
Oracle Enterprise Manager shows the Propagation has some errors, and below is the alert log
Sat Dec 15 00:05:18 2012
Propagation Schedule for (STREAMSADMIN.CAPTURE_QUEUE, "STREAMSADMIN"."APPLY_QUEUE"@ORCL2.JETCODELIVERY.COM) encountered following error:
ORA-22636: Message 22636 not found; product=RDBMS; facility=ORA
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_10312.trc:
ORA-22636: Message 22636 not found; product=RDBMS; facility=ORA
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_10312.trc:
ORA-12012: error on auto execute of job "SYS"."AQ_JOB$_1531"
ORA-22636: Message 22636 not found; product=RDBMS; facility=ORA
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 8710
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 9114
Sat Dec 15 00:06:18 2012
What is the ORA-22636 error?
Has anyone experienced this problem?
Thanks,
Vu
I set up using the synchronous capture, which is available for Standard Ed.
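For a synchronous-capture setup, the configuration can be cross-checked in the dictionary; a hedged sketch using the 11.2 views:

```sql
-- Confirm the synchronous capture and its queue
SELECT capture_name, queue_name
FROM   dba_sync_capture;

-- Propagation state and last error
SELECT propagation_name, status, error_message
FROM   dba_propagation;
```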
Edited by: vuatsc on Dec 15, 2012 4:55 AM

I found another thread that has a similar problem:
Streams Propagation Error
so I tried the same setup between two 11.2.0.1 servers (instead of 11.2.0.3), and Streams works fine; data is replicated as expected.
So perhaps this is particular to 11.2.0.3?
Vu -
Accessing Custom Security Realm and NotOwnerException.
I have installed the RDBMS example security realm, which appears to work fine. However, when I attempt to access this realm from a servlet via Realm.getRealm("name"), a NotOwnerException is thrown.
Ideas ?
regards,
Jeff.
We did something similar in a past project, and it turned out to be more of a mess than it was worth (not only the chicken-and-egg dilemma with the system, guest, and administrator users, but also various lookup and threading issues). We ended up ripping out the code and writing a replacement that does not use an EJB.
EJBs are supposed to be written in terms of container services (security being one of the services the container provides), but in this scenario you would be writing one of the container services in terms of EJBs, so it "breaks" the proper layering.
In our case we wanted to "encapsulate" our security code away from WebLogic's proprietary realm mechanism; in the end we still achieved that without having to create a session bean (sometimes regular Java classes work just fine) :-)
regards,
-Ade
"watscheck" <[email protected]> wrote in message news:[email protected]..
>
Hi,
I want to use a session EJB as the security store for a custom security realm in WebLogic Server 6.1.
Has anyone had experience with that?
First I have to pass all filerealm users through my custom realm (CSR), because it is not possible to authenticate the system and guest users before the session EJB itself is loaded.
OK, but my problem is the authentication of the CSR against the session EJB, which is itself secured by method-permission entries in its assembly descriptor. So I have to get an InitialContext with a user authorized for the session EJB and invoke all protected methods as this principal.
But BEA WLS has a problem propagating this user back to the actual application.
Is there a way for the application (web app and EJBs) to be unaffected by the authentication of the CSR against the session EJB (the security store)?
And is it correct that the new InitialContext in the CSR always overrides the BEA context, and with it the servlet request of the web app?
thanks in advance
watscheck -
In which database table, Content selectors will be stored?
The data in the datasync tables is stored as XML, so you may not be looking in the correct place. (I'm not sure whether the tables get loaded in development mode; I believe they do, but I haven't verified it. In dev mode I believe the portal uses the file system.)
If this is a new content selector, propagation will work; try it out. You can download the inventory from local and you should be able to see the content selector. If you are modifying an existing content selector, you may have problems with propagation, and if so you have no option other than cleaning the tables.
regards
deepak -
Hi all,
I have a client who is running OS X Server 10.4.9 and has an odd problem with ACLs not fully propagating. We enabled ACLs on the server, rebooted, and went about setting ACL permissions on the root of a share point. When propagating, it seems to have stopped about halfway through: the ACLs show up fine in the "Access" tab and work on half the folders, but not the other half. There doesn't seem to be anything particularly different about the folder where it stops, either.
Hope someone can shed some light on this, thanks!
Depending on what permissions were set before, we used to have problems propagating non-ACL (POSIX) permissions (Panther, Tiger) from WGM.
This was probably because the user/admin that was logged in in WGM didn't have the permissions to alter all files/folders from their current settings.
I can't say I have encountered the same problem with ACLs.
If the volume is set to use ACLs (inherited permissions dimmed), I guess you should perhaps first check the volume with a disk tool. You might then have to use the CLI tools chown, chgrp, and chmod to set the ACLs. I haven't used them myself for setting ACLs, since it seems a pain compared to using WGM; for setting POSIX permissions the CLI tools are relatively easy.
Did you try changing only ACLs, or ACLs and POSIX at the same time?
Changing only ACLs should suffice for user rights (I don't bother with POSIX if it isn't necessary), as they are evaluated before POSIX. -
Data propagation problems w/ NIS+ to LDAP migration..
Hello All,
I'm running in to an issue performing an NIS+ to LDAP migration with Solaris 9.
It all happens like this: NIS+ successfully populates the directory via the 'initialUpdateAction=to_ldap' option; afterwards, no updates made directly in LDAP are ever pushed back into NIS+.
I'm of the understanding (which might be incorrect) that after performing the initial update, NIS+ should simply act as a cache to the data stored in LDAP. Do I need to perform an 'initialUpdateAction=from_ldap' after populating LDAP to force the direction of the data propagation to change?
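For reference, in Solaris 9 both the one-time initial transfer and the ongoing refresh behaviour are driven by rpc.nisd attributes. The fragment below is a sketch only; it assumes the standard /etc/default/rpc.nisd location, and the TTL values shown are illustrative, not recommendations:

```
# /etc/default/rpc.nisd -- illustrative fragment (Solaris 9 NIS+ to LDAP)

# Direction of the one-time initial load:
#   to_ldap   = push NIS+ data into the directory (the step already run)
#   from_ldap = seed NIS+ from the directory
#   none      = no initial transfer (normal steady-state value)
nisplusLDAPinitialUpdateAction=none

# How long NIS+ entries are treated as fresh before being re-read
# from LDAP (initialTTLlow:initialTTLhigh:runningTTL, in seconds).
# Lowering the running TTL picks up LDAP-side edits sooner.
nisplusLDAPentryTtl=1800:3600:3600
```

In other words, with the mapping in steady state NIS+ acts as a TTL-governed cache in front of LDAP; changes made directly in the directory should appear in NIS+ once the affected entries expire, rather than requiring another initialUpdateAction run.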
I'm experienced with LDAP, so I'm confident everything is all right on that side; however, I'm not so sure about NIS+. Any assistance or advice from anyone who has gone through this migration would be greatly appreciated.
Many thanks in advance..
..Sean.
Well, you neglected to outline exactly how you accomplished your migration.
Starting with Tiger Server using NetInfo as a standalone server, we created an Open Directory Master, as described in Apple's Open Directory Guide. By the time we'd finished that, we had an OD admin. From there, we did as I previously described -- exported with WGM from NetInfo, imported with WGM into LDAP, deleted with WGM from NetInfo.
See http://support.apple.com/kb/TA23888?viewlocale=en_US
This seems to be an article on how to re-create a password that's been lost. That's not really what we need, though. The OD admin account we created works fine for other services, just not for WGM. And other admin users we created work fine for other services, but not for WGM. The problem is that although admin users can log into many services, they can't log into WGM -- only root can. -
Urgent--Message Propagation problem.
Hi all,
I am trying out the following stuff:-
1)Creating a QueueTable.
2) Setting its property with setMultiConsumer(true)
3)Create a Queue using AQ API.
then calling the following methods:
4) queue.startEnqueue();
5) queue.schedulePropagation(null, null, null, null, null);
NOTE: I already have a destination queue in the same database to which the message should propagate. That's why I have passed all null parameters to schedulePropagation.
In the client code I am using the JMS API to connect to the queues and put messages.
But when I try to create the queue I get a Resource Not Found exception.
When I use setMultiConsumer(false) instead, the code doesn't throw any exception, though message propagation does not happen. My intention, however, is to use message propagation through the schedulePropagation method, and to use that feature I have to call setMultiConsumer(true), which is how I end up with this problem.
Question 1) Can anyone tell me, step by step, how to do message propagation through queues (not topics)?
Question 2) Can anyone tell me how and what to put in the dblink if I want my queue to propagate to a queue on a remote DB server?
NOTE: I'm using the Oracle AQ APIs to create the queues and a JMS client to connect to them.
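It can be easier to verify the propagation setup in PL/SQL first and only then wire up the Java side. The block below is a minimal sketch using the DBMS_AQADM package; all queue, table, subscriber, and dblink names are made up for illustration:

```sql
-- Illustrative setup only; run as a user with AQ administrator privileges.
BEGIN
  -- 1) Multi-consumer queue table (propagation requires multiple_consumers)
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'my_qtab',
    queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE',
    multiple_consumers => TRUE);

  -- 2) Source queue, then enable it for enqueue/dequeue
  DBMS_AQADM.CREATE_QUEUE(queue_name => 'src_q', queue_table => 'my_qtab');
  DBMS_AQADM.START_QUEUE(queue_name => 'src_q');

  -- 3) Subscribe the destination queue; the agent address is
  --    'owner.dest_q' locally, or 'owner.dest_q@my_dblink' for a remote DB
  DBMS_AQADM.ADD_SUBSCRIBER(
    queue_name => 'src_q',
    subscriber => SYS.AQ$_AGENT('sub1', 'owner.dest_q', NULL));

  -- 4) Schedule propagation; destination is the dblink name for a remote
  --    database, or NULL for a destination queue in the same database
  DBMS_AQADM.SCHEDULE_PROPAGATION(
    queue_name  => 'src_q',
    destination => NULL,
    latency     => 0);
END;
/
```

For question 2, the dblink ('my_dblink' above) is a normal database link created in the source schema pointing at the remote server; it appears both in the subscriber's agent address and as the destination parameter of SCHEDULE_PROPAGATION.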
Best Regards,
Thanks in advance
Sessions require session cookies.
If the user has cookies off then the session won't be retained by the server automatically.
The solution to this is to use URL rewriting.
There is a method: HttpServletResponse.encodeURL(String url)
It automatically appends the JSESSIONID used to track the session to the URL if the browser is not accepting cookies.
You need to do this for every link in the website so that the session is maintained.
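The rewriting itself is simple string surgery. The helper below is a rough, self-contained model of what encodeURL produces when the session cookie is absent; it is illustrative only, not the servlet API (a real container also checks whether the URL points back into the same session's web app):

```java
// RewriteDemo: a simplified model of servlet-container URL rewriting.
// When cookies are off, the session id is spliced into the path as a
// ";jsessionid=..." path parameter, before any query string.
public class RewriteDemo {

    public static String encodeUrl(String url, String sessionId,
                                   boolean cookiesEnabled) {
        if (cookiesEnabled) {
            return url; // the container skips rewriting when the cookie works
        }
        int q = url.indexOf('?');
        String path  = (q < 0) ? url : url.substring(0, q);
        String query = (q < 0) ? ""  : url.substring(q);
        return path + ";jsessionid=" + sessionId + query;
    }

    public static void main(String[] args) {
        // prints /shop/cart.jsp;jsessionid=ABC123?item=42
        System.out.println(encodeUrl("/shop/cart.jsp?item=42", "ABC123", false));
        // prints /shop/cart.jsp (unchanged: cookie carries the id)
        System.out.println(encodeUrl("/shop/cart.jsp", "ABC123", true));
    }
}
```

In a servlet you would never build this string yourself; you pass every link through response.encodeURL so the container can decide, per client, whether rewriting is needed.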
Cheers,
evnafets