Data Guard... is it really so complex?
I'm not a DBA; I'm a system administrator working on a DBA project to implement Data Guard at our company.
I'd like to understand what Data Guard will do. The DBA asked for tons of equipment, labs, dev environments, and an exact replica of my production environment to implement the solution. He also told me he has to install OEM 11g and RMAN 11g. My production DB is running 10g.
My first question: is Data Guard very hard to manage without OEM? Do we have to create and maintain tons of custom scripts for that?
The replication will be very simple: async, only applying redo logs to a database at the DR site.
Your input and experience are much appreciated
Thanks
Is Data Guard very hard to manage without OEM?
No, I have 10g in production without any OEM. In some ways it's better, because it's always good to know how to do it from the command line.
Do we have to create and maintain tons of custom scripts for that?
No, but a few are nice. For example, I e-mail myself the spool from these daily:
PROMPT
PROMPT Checking last sequence in v$archived_log
PROMPT
clear screen
set linesize 100
column STANDBY format a20
column applied format a10
SELECT name as STANDBY, SEQUENCE#, applied, completion_time from v$archived_log WHERE DEST_ID = 2 AND NEXT_TIME > SYSDATE -1;
prompt
prompt----------------Last log on Primary--------------------------------------|
prompt
select max(sequence#) from v$archived_log;
PROMPT
PROMPT Checking free space In Flash Recovery Area
PROMPT
column FILE_TYPE format a20
select * from v$flash_recovery_area_usage;

I tested all my Oracle 10 and 11 Data Guard setups using VMware on 1 GB Linux servers, if that helps. The nice thing about these is you can shut them down and copy them, so if you trash them all you have to do is copy them back.
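A couple of other standby-side checks are handy in the same daily spool. This is a sketch using the standard 10g/11g dictionary views; verify the output on your own setup:

```sql
PROMPT
PROMPT Checking for archive log gaps (run on the standby)
PROMPT
SELECT thread#, low_sequence#, high_sequence#
  FROM v$archive_gap;

PROMPT
PROMPT Checking managed recovery (MRP) status
PROMPT
SELECT process, status, thread#, sequence#
  FROM v$managed_standby
 WHERE process LIKE 'MRP%';
```

Any row in v$archive_gap means the standby is missing archived logs and needs attention.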
I have a simple setup here:
http://www.visi.com/~mseberg/data_guard_on_oracle_11_step_by_step.html
Also a very good book on the subject:
http://tinyurl.com/DGHandbook
Worth every penny and Larry is a great guy.
Edited by: mseberg on Feb 4, 2011 2:18 PM
Similar Messages
-
Administrating Data Guard without complexity of Grid Control. Possible?
I wonder if someone can shed some wisdom about implementing and administrating Data Guard without the complexity of Grid Control. Don't get me wrong, I love the Data Guard features provided by Grid Control, but installing Grid Control just for the sake of administrating Data Guard sounds like overkill. Not to mention that I still have a hard time getting Grid Control properly installed on a Windows Server 2003 box (I keep getting 503 Service Unavailable and a servlet error).
I was told by a friend that Oracle 9 has something called EMCA (Enterprise Manager Configuration Assistant) that allows you to administrate Data Guard. Searching for any file containing the phrase "emca" under the Oracle directory ("c:\Oracle\product\10.2.0\db_1\BIN"), I found emca.bat and some related files. Does that mean EMCA actually exists in Oracle 10.2 (for Microsoft Windows Server)?
Any comment? Feeling clueless right now. :-I ....
Deecay
I have set up Data Guard 9iR2 on Linux SLES8 and use the Data Guard Broker to manage switchover and failover operations. It comes with the database and is command-line based.
The documentation walks you through the setup phases quite nicely.
http://www.oracle.com/pls/db92/db92.to_toc?pathname=server.920%2Fa96629%2Ftoc.htm&remark=docindex
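As a rough illustration, a switchover through the broker boils down to a short DGMGRL session. The database name below is made up, and the exact quoting of names varies between broker versions:

```
$ dgmgrl sys/password
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO 'standby_db';
```

SHOW CONFIGURATION tells you whether the configuration is healthy before you attempt the role change.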
I would suggest reading some of the documentation on Metalink covering Data Guard and the broker before attempting to use either ;) -
How to measure Availability Complexity Operation & Maintenance Costs etc
Dear All
We have multiple configurations for attaining high availability and maximum availability within and/or across data centers for databases. Some employ an active-active clustering solution while some use an active-passive solution. Some employ only OS clusters like Sun Cluster or Veritas Cluster. Often, as I have noticed, a combination of an OS cluster (say Veritas Cluster) and an Oracle cluster (Clusterware + RAC) is used in database solutions.
In such situations, how does one come up with a quantitative analysis of each methodology? What inputs would one need, and how are these calculations done? While searching the internet, I have seen active-active database clusters using VCFS and Oracle RAC claim availability of some 99.xx%, while other approaches are rated accordingly. How does one arrive at such quantitative figures? Could you please advise.
I was asked to come up with such numbers in terms of below mentioned items for a few approaches I suggested.
Availability of the Database
Complexity of Architecture and understanding.
Set up costs.
Operational Costs.
Cost of Maintenance
For your understanding, I've suggested the below approaches to attain high availability within the production data centre.
Approach - I : I will have Active - Passive Database running on node a and b respectively. DB1 will be active on node a and DB2 will be active on node b. In the event of failure they will fail over to the alternative.
Approach - II : I will have Active - Passive Databases running on nodes a and b respectively. DB1 and DB2 will be active on node a, and node b will always be running, waiting for a failover event. In the event of failure they will fail over to node b.
Approach - III : I will have DB1 and DB2 active on Node a. We could configure Dataguard (Logical or a physical standby) on node b in the production data centre.
Approach IV : I will have a two node cluster running RAC and will have ACTIVE- ACTIVE database DB1 and DB2 running on them.
Personally I like the Approach IV. However, for business reasons, we need to make a decision based on quantitative analysis done.
If any of you have already done such an exercise, could you please share your experiences here?
Many thanks for all your guidance in this direction.
Regards!
Sarat
Sarat Chandra C wrote:
Dear All
Approach - I : I will have Active - Passive Database running on node a and b respectively. DB1 will be active on node a and DB2 will be active on node b. In the event of failure they will fail over to the alternative.
Non-availability of the database: time taken to detect the failure + OS failover + starting of the database.
Complexity of architecture and understanding: from the Oracle side there is no complexity, as the hardware cluster takes care of this.
Set up costs: if management has already decided to have one server for each database, then no extra cost except the license for the hardware cluster. Otherwise, the cost of an extra server.
Operational Costs: (None) (Excluding the dba/sa task to verify the failover)
Cost of Maintenance: (None) (Excluding the dba/sa task to verify the failover)
Approach - II : I will have Active - Passive Databases running on nodes a and b respectively. DB1 and DB2 will be active on node a, and node b will always be running, waiting for a failover event. In the event of failure they will fail over to node b.
Non-availability of the database: time taken to detect the failure + failover + starting of the database, times 2 (since both databases need to fail over).
Complexity of Architecture and understanding: From oracle side there is no complexity as hardware cluster takes care about this
Set up costs: if management has already decided to have one server for each database, then no extra cost except the license for the hardware cluster. But you would be under-utilizing one server and over-utilizing the other.
Operational Costs: (None) (Excluding the dba/sa task to verify the failover)
Cost of Maintenance: (None) (Excluding the dba/sa task to verify the failover)
Approach - III : I will have DB1 and DB2 active on Node a. We could configure Dataguard (Logical or a physical standby) on node b in the production data centre.
Non-availability of the database: since there would be manual intervention here, downtime would be (time taken to detect the failure + time taken to activate the standby database) times 2 (since both databases need to fail over).
Complexity of architecture and understanding: complex, since Data Guard needs to be set up.
Set up costs: licenses for the two extra databases and for Data Guard.
Operational Costs: (None)
Cost of Maintenance: Verification that log files are continuously shipped and that there is no lag.
Also note that if the database is not in maximum protection mode, chances are you would lose data. If it is in maximum protection mode, there would be a performance impact on your production server.
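That shipping/lag verification can be a small scheduled query on the standby. A sketch using v$dataguard_stats, which is available from 10g onwards:

```sql
-- On the standby: transport and apply lag as reported by Data Guard
SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag');
```

A growing apply lag is usually the first visible symptom of a shipping or apply problem.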
Approach IV : I will have a two node cluster running RAC and will have ACTIVE- ACTIVE database DB1 and DB2 running on them.
Non-availability of the database: none, until both cluster nodes go down.
Complexity of architecture and understanding: complex, since you would be using RAC.
Set up costs: I am not sure about the licensing part of RAC.
Operational Costs: None
Cost of Maintenance: nearly double that of a single-node database.
Personally I like the Approach IV. However, for business reasons, we need to make a decision based on quantitative analysis done.
If any of you have already done such an exercise, could you please share your experiences here?
Many thanks for all your guidance in this direction.
Regards!
Sarat
I have not come up with exact numbers, since they would depend on the infrastructure and hardware.
Moreover, the non-availability figures above cover only hardware failure; they do not include non-availability due to an issue with the database itself. If there is an issue with the database, then even the passive node would not be able to handle it, except in the Data Guard case.
Regards
Anurag -
Hi,
I need some advice on the best practice for implementing DR using Data Guard. I noticed most DR setups use a primary and standby concept. I am looking into having one primary with two standbys, whereby standby A will come up if the primary is down and standby B only if standby A doesn't work. Any suggestions and pointers on whether this should be done, or just a single standby?
Hello Bala,
I am looking into having one primary with two standbys, whereby standby A will come up if the primary is down and standby B only if standby A doesn't work. Any suggestions and pointers on whether this should be done, or just a single standby?
Well you are asking the wrong question at the wrong time.
The first question should be: is there any business case that requires two standby databases, such as having 3 different data centers (one for the primary and two different ones for the standby databases)?
If the answer is yes, then you should think about the implementation concept, i.e. SYNC or ASYNC mirroring, which requires different bandwidth and implementations. SYNC mirroring will affect performance more and more with every additional standby database. Maybe it is sufficient to run only one in SYNC and one in ASYNC mode.
Having two standby databases just so that you have another one if the first fails is not reasonable, because if standby A fails it is mostly due to a configuration issue or a bug, and in those cases both standby databases will have the same problem.
If you want multiple standby databases, I would highly recommend using the Oracle Data Guard Broker to manage the environment, because a switchover/failover gets more and more complex with every additional standby database.
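For example, adding a second physical standby to an existing broker configuration is only a few DGMGRL commands. The name standby_b and its connect identifier below are made up:

```
DGMGRL> ADD DATABASE 'standby_b' AS CONNECT IDENTIFIER IS standby_b MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE DATABASE 'standby_b';
DGMGRL> SHOW CONFIGURATION;
```

Once both standbys are in one configuration, the broker coordinates role changes across all of them instead of you scripting each one by hand.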
If you want to get an idea of the performance impact - please check this "Best Practices" whitepaper:
http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-dataguardnetworkbestpr-134557.pdf
Regards
Stefan -
RAC & DataGuard (Physical Standby)
Hello all,
I'm trying to get a high level overview of how RAC & DataGuard would behave in the following configuration. I've written down my understanding of how things would work. Please correct me if I'm wrong.
1) 2 node RAC (Primary Database) with a single instance physical standby.
a) The same standby-related init.ora parameters would have to be configured on both primary RAC nodes.
b) The redo apply service at the standby would merge the redo from the 2 threads from the primary and apply it to the standby to keep it in sync.
c) During switch over only one primary RAC instance should be up besides the standby instance.
d) During switch back again only one primary RAC instance should be up besides the standby instance.
e) During failover, of course, both primary instances would be down, which warrants the failover.
2) 2 node RAC (Primary) with a 2 node physical standby
This is where it gets complex, and I'm not really sure how a, b, c, d, e would be in this scenario. I'd appreciate it if you could shed some light on this and list what a, b, c, d, e would look like here.
I'm assuming that only one instance in the standby RAC should be up when the standby is in RAC configuration for the redo apply to work.
Also, if there is a white paper that details step by step procedure for setting up the above 2 scenarios, please let me know. So far I was able to find only the MAA white paper but that was not very helpful. If you can prescribe a good book for RAC & DataGuard that would be great too..
Thanks for your help!
>
1) 2 node RAC (Primary Database) with a single instance physical standby.
a) The same standby-related init.ora parameters would have to be configured on both primary RAC nodes.
Usually RAC nodes share their spfile on a shared disk volume or through ASM, so they have identical Data Guard parameters anyway.
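So a standby-related parameter is typically set once with a wildcard SID, and both instances pick it up from the shared spfile. A sketch (the service and unique names are made up):

```sql
-- Applies to all instances of the RAC database via the shared spfile
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db'
  SCOPE=BOTH SID='*';
```

SID='*' is what keeps the two nodes from drifting apart in their Data Guard settings.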
b) The redo apply service at the standby would merge the redo from the 2 threads from the primary and apply it to the standby to keep it in sync.
Correct.
c) During switch over only one primary RAC instance should be up besides the standby instance.
Sounds logical to me
Edit: In fact, during normal operation both RAC nodes are up, so during a switchover they are both active too.
It is the configuration that matters. As soon as the standby becomes primary, it might talk to only one RAC node for its log-apply files.
d) During switch back, again only one primary RAC instance should be up besides the standby instance.
That single instance was the only one running in step c.
e) During failover, of course, both primary instances would be down, which warrants the failover.
Correct. Keep in mind that during a failover the RAC config would be deconfigured from the DG setup and has to be added back again as standby.
You might want to take a look at: http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10g_RACPrimarySingleInstancePhysicalStandby.pdf
>
2) 2 node RAC (Primary) with a 2 node physical standby
This is where it gets complex, and I'm not really sure how a, b, c, d, e would be in this scenario. I'd appreciate it if you could shed some light on this and list what a, b, c, d, e would look like here.
I'm assuming that only one instance in the standby RAC should be up when the standby is in RAC configuration for the redo apply to work.
No. RAC primary to RAC standby would mean both nodes are up on both sides. PrimNode1 sends its redo logs to StandbyNode1, and the same for PrimNode2 and StandbyNode2.
Have a look here: http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10g_RACPrimaryRACPhysicalStandby.pdf
HTH,
FJFranken
My Blog: http://managingoracle.blogspot.com
Edited by: fjfranken on 16-jul-2010 1:13 -
Error in Creation of Dataguard for RAC
My pfile for the RAC database looks like:
RACDB2.__large_pool_size=4194304
RACDB1.__large_pool_size=4194304
RACDB2.__shared_pool_size=92274688
RACDB1.__shared_pool_size=92274688
RACDB2.__streams_pool_size=0
RACDB1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/RACDB/adump'
*.background_dump_dest='/u01/app/oracle/admin/RACDB/bdump'
*.cluster_database_instances=2
*.cluster_database=true
*.compatible='10.2.0.1.0'
*.control_files='+DATA/racdb/controlfile/current.260.627905745','+FLASH/racdb/controlfile/current.256.627905753'
*.core_dump_dest='/u01/app/oracle/admin/RACDB/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.db_name='RACDB'
*.db_recovery_file_dest='+FLASH'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDBXDB)'
*.fal_client='RACDB'
*.fal_server='RACDG'
RACDB1.instance_number=1
RACDB2.instance_number=2
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(RACDB,RACDG)'
*.log_archive_dest_1='LOCATION=+FLASH/RACDB/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDB'
*.log_archive_dest_2='SERVICE=RACDG VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDG'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='DEFER'
*.log_archive_format='%t_%s_%r.arc'
*.log_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_listener='LISTENERS_RACDB'
*.remote_login_passwordfile='exclusive'
*.service_names='RACDB'
*.sga_target=167772160
*.standby_file_management='AUTO'
RACDB2.thread=2
RACDB1.thread=1
*.undo_management='AUTO'
RACDB2.undo_tablespace='UNDOTBS2'
RACDB1.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/RACDB/udump'
My pfile for the Data Guard instance in the nomount state looks like:
RACDG.__db_cache_size=58720256
RACDG.__java_pool_size=4194304
RACDG.__large_pool_size=4194304
RACDG.__shared_pool_size=96468992
RACDG.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/RACDG/adump'
*.background_dump_dest='/u01/app/oracle/admin/RACDG/bdump'
##*.cluster_database_instances=2
##*.cluster_database=true
*.compatible='10.2.0.1.0'
##*.control_files='+DATA/RACDG/controlfile/current.260.627905745','+FLASH/RACDG/controlfile/current.256.627905753'
*.core_dump_dest='/u01/app/oracle/admin/RACDG/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATADG'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
*.db_name='RACDB'
*.db_recovery_file_dest='+FLASHDG'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDGXDB)'
*.FAL_CLIENT='RACDG'
*.FAL_SERVER='RACDB'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(RACDB,RACDG)'
*.log_archive_dest_1='LOCATION=+FLASHDG/RACDG/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDG'
*.log_archive_dest_2='SERVICE=RACDB VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDB'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.log_archive_format='%t_%s_%r.arc'
*.log_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
##*.remote_listener='LISTENERS_RACDG'
*.remote_login_passwordfile='exclusive'
SERVICE_NAMES='RACDG'
sga_target=167772160
standby_file_management='auto'
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
user_dump_dest='/u01/app/oracle/admin/RACDG/udump'
DB_UNIQUE_NAME=RACDG
and here is what I am doing at the standby location:
[oracle@dg01 ~]$ echo $ORACLE_SID
RACDG
[oracle@dg01 ~]$ rman
Recovery Manager: Release 10.2.0.1.0 - Production on Tue Jul 17 21:19:21 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect auxiliary /
connected to auxiliary database: RACDG (not mounted)
RMAN> connect target sys/xxxxxxx@RACDB
connected to target database: RACDB (DBID=625522512)
RMAN> duplicate target database for standby;
Starting Duplicate Db at 2007-07-17 22:27:08
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
contents of Memory Script:
restore clone standby controlfile;
sql clone 'alter database mount standby database';
executing Memory Script
Starting restore at 2007-07-17 22:27:10
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl4.ctl
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/software/backup/ctl4.ctl tag=TAG20070717T201921
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:23
output filename=+DATADG/racdg/controlfile/current.275.628208075
output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
Finished restore at 2007-07-17 22:27:34
sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/17/2007 22:27:43
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database
RMAN>
Any help to clear this error will be appreciated.
Message was edited by:
Bal
Hi
Thanks everybody for helping me on this issue.
As suggested, I have taken the parameters log_file_name_convert and db_file_name_convert out of my RAC primary database, but I am still getting the same error.
Any help will be appreciated.
SQL> show parameter convert
NAME TYPE VALUE
db_file_name_convert string
log_file_name_convert string
SQL>
oracle@dg01<3>:/u01/app/oracle> rman
Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jul 18 17:07:49 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect auxiliary /
connected to auxiliary database: RACDB (not mounted)
RMAN> connect target sys/xxx@RACDB
connected to target database: RACDB (DBID=625522512)
RMAN> duplicate target database for standby;
Starting Duplicate Db at 2007-07-18 17:10:53
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
contents of Memory Script:
restore clone standby controlfile;
sql clone 'alter database mount standby database';
executing Memory Script
Starting restore at 2007-07-18 17:10:54
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl5.ctr
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/software/backup/ctl5.ctr tag=TAG20070718T170529
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:33
output filename=+DATADG/racdg/controlfile/current.275.628208075
output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
Finished restore at 2007-07-18 17:11:31
sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/18/2007 17:11:43
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database
-
Complex structures in Sender File adapter
Hi Experts
I am working on XI 3.0 SP 22. How do we handle complex structures in the sender file adapter with file content conversion?
Please help me out.
Regards
Hari
Hi,
FCC can support up to a maximum of 3 levels; see the link below for more help:
http://help.sap.com/saphelp_nw70/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm
Content Conversion ( The Key Field Problem ) -
Hi everybody!
I have a problem. Some weeks ago I opened a post related to this issue. We have two Data Guard setups with the Data Guard Broker. One of them is resynced (thanks to mseberg and this forum) and now I have problems with the other.
Once I had learned how to configure and start/stop the Data Guard Broker, I ran into a more basic problem, which is resyncing it. I follow a process where I back up the primary with RMAN, copy the RMAN files along with the controlfile to the other server, and then restore with RMAN.
The problem is that the database is big: backing it up takes 2 hours more or less, and when I restore it, the archived logs never show up as being synchronized.
I have followed the same process as for the other one and I can't resync it. I think there is something in my parameters, or something new in the 11g version...
SQL> show parameters
NAME TYPE VALUE
O7_DICTIONARY_ACCESSIBILITY boolean FALSE
active_instance_count integer
aq_tm_processes integer 0
archive_lag_target integer 0
asm_diskgroups string
asm_diskstring string
asm_power_limit integer 1
asm_preferred_read_failure_groups string
audit_file_dest string /opt/oracle/admin/MN122010P/ad
ump
audit_sys_operations boolean FALSE
audit_syslog_level string
audit_trail string DB
background_core_dump string partial
background_dump_dest string /opt/oracle/diag/rdbms/mn12201
0p/MN122010P/trace
backup_tape_io_slaves boolean FALSE
bitmap_merge_area_size integer 1048576
blank_trimming boolean FALSE
buffer_pool_keep string
buffer_pool_recycle string
cell_offload_compaction string ADAPTIVE
cell_offload_parameters string
cell_offload_plan_display string AUTO
cell_offload_processing boolean TRUE
cell_partition_large_extents string DEFAULT
circuits integer
client_result_cache_lag big integer 3000
client_result_cache_size big integer 0
cluster_database boolean FALSE
cluster_database_instances integer 1
cluster_interconnects string
commit_logging string
commit_point_strength integer 1
commit_wait string
commit_write string
compatible string 11.1.0.0.0
control_file_record_keep_time integer 7
control_files string /opt/oracle/oradata/MN122010P/
controlfile/control01.ctl, /op
t/oracle/oradata1/MN122010P/co
ntrolfile/control02.ctl
control_management_pack_access string DIAGNOSTIC+TUNING
core_dump_dest string /opt/oracle/diag/rdbms/mn12201
0p/MN122010P/cdump
cpu_count integer 4
create_bitmap_area_size integer 8388608
create_stored_outlines string
cursor_sharing string EXACT
cursor_space_for_time boolean FALSE
db_16k_cache_size big integer 0
db_2k_cache_size big integer 0
db_32k_cache_size big integer 0
db_4k_cache_size big integer 0
db_8k_cache_size big integer 0
db_block_buffers integer 0
db_block_checking string FALSE
db_block_checksum string TYPICAL
db_block_size integer 8192
db_cache_advice string ON
db_cache_size big integer 0
db_create_file_dest string /opt/oracle/oradata
db_create_online_log_dest_1 string /opt/oracle/oradata
db_create_online_log_dest_2 string /opt/oracle/oradata1
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string
db_domain string domain.es
db_file_multiblock_read_count integer 69
db_file_name_convert string
db_files integer 200
db_flashback_retention_target integer 1440
db_keep_cache_size big integer 0
db_lost_write_protect string NONE
db_name string MN122010
db_recovery_file_dest string /opt/oracle/oradata/flash_reco
very_area
db_recovery_file_dest_size big integer 100G
db_recycle_cache_size big integer 0
db_securefile string PERMITTED
db_ultra_safe string OFF
db_unique_name string MN122010P
db_writer_processes integer 1
dbwr_io_slaves integer 0
ddl_lock_timeout integer 0
dg_broker_config_file1 string /opt/oracle/product/db111/dbs/
dr1MN122010P.dat
dg_broker_config_file2 string /opt/oracle/product/db111/dbs/
dr2MN122010P.dat
dg_broker_start boolean FALSE
diagnostic_dest string /opt/oracle
disk_asynch_io boolean TRUE
dispatchers string (PROTOCOL=TCP) (SERVICE=MN1220
10PXDB)
distributed_lock_timeout integer 60
dml_locks integer 844
drs_start boolean FALSE
enable_ddl_logging boolean FALSE
event string
fal_client string
fal_server string
fast_start_io_target integer 0
fast_start_mttr_target integer 0
fast_start_parallel_rollback string LOW
file_mapping boolean FALSE
fileio_network_adapters string
filesystemio_options string none
fixed_date string
gc_files_to_locks string
gcs_server_processes integer 0
global_context_pool_size string
global_names boolean FALSE
global_txn_processes integer 1
hash_area_size integer 131072
hi_shared_memory_address integer 0
hs_autoregister boolean TRUE
ifile file
instance_groups string
instance_name string MN122010P
instance_number integer 0
instance_type string RDBMS
java_jit_enabled boolean TRUE
java_max_sessionspace_size integer 0
java_pool_size big integer 0
java_soft_sessionspace_limit integer 0
job_queue_processes integer 1000
large_pool_size big integer 0
ldap_directory_access string NONE
ldap_directory_sysauth string no
license_max_sessions integer 0
license_max_users integer 0
license_sessions_warning integer 0
local_listener string LISTENER_MN122010P
lock_name_space string
lock_sga boolean FALSE
log_archive_config string dg_config=(MN122010P,MN122010R
,MN12201R)
log_archive_dest string
log_archive_dest_1 string location="USE_DB_RECOVERY_FILE
_DEST", valid_for=(ALL_LOGFIL
ES,ALL_ROLES)
log_archive_dest_10 string
log_archive_dest_2 string service=MN12201R, LGWR SYNC AF
FIRM delay=0 OPTIONAL compress
ion=DISABLE max_failure=0 max_
connections=1 reopen=300 db_
unique_name=MN12201R net_timeo
ut=30 valid_for=(online_logfi
le,primary_role)
log_archive_dest_3 string
log_archive_dest_4 string
log_archive_dest_5 string
log_archive_dest_6 string
log_archive_dest_7 string
log_archive_dest_8 string
log_archive_dest_9 string
log_archive_dest_state_1 string ENABLE
log_archive_dest_state_10 string enable
log_archive_dest_state_2 string ENABLE
log_archive_dest_state_3 string ENABLE
log_archive_dest_state_4 string enable
log_archive_dest_state_5 string enable
log_archive_dest_state_6 string enable
log_archive_dest_state_7 string enable
log_archive_dest_state_8 string enable
log_archive_dest_state_9 string enable
log_archive_duplex_dest string
log_archive_format string %t_%s_%r.dbf
log_archive_local_first boolean TRUE
log_archive_max_processes integer 4
log_archive_min_succeed_dest integer 1
log_archive_start boolean FALSE
log_archive_trace integer 0
log_buffer integer 7668736
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
log_file_name_convert string
max_commit_propagation_delay integer 0
max_dispatchers integer
max_dump_file_size string unlimited
max_enabled_roles integer 150
max_shared_servers integer
memory_max_target big integer 512M
memory_target big integer 512M
nls_calendar string
nls_comp string BINARY
nls_currency string
nls_date_format string
nls_date_language string
nls_dual_currency string
nls_iso_currency string
nls_language string AMERICAN
nls_length_semantics string BYTE
nls_nchar_conv_excp string FALSE
nls_numeric_characters string
nls_sort string
nls_territory string AMERICA
nls_time_format string
nls_time_tz_format string
nls_timestamp_format string
nls_timestamp_tz_format string
object_cache_max_size_percent integer 10
object_cache_optimal_size integer 102400
olap_page_pool_size big integer 0
open_cursors integer 300
open_links integer 4
open_links_per_instance integer 4
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.1.0.7
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
os_authent_prefix string ops$
os_roles boolean FALSE
parallel_adaptive_multi_user boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2152
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 40
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_threads_per_cpu integer 2
pga_aggregate_target big integer 0
plscope_settings string IDENTIFIERS:NONE
plsql_ccflags string
plsql_code_type string INTERPRETED
plsql_debug boolean FALSE
plsql_native_library_dir string
plsql_native_library_subdir_count integer 0
plsql_optimize_level integer 2
plsql_v2_compatibility boolean FALSE
plsql_warnings string DISABLE:ALL
pre_page_sga boolean FALSE
processes integer 170
query_rewrite_enabled string TRUE
query_rewrite_integrity string enforced
rdbms_server_dn string
read_only_open_delayed boolean FALSE
recovery_parallelism integer 0
recyclebin string on
redo_transport_user string
remote_dependencies_mode string TIMESTAMP
remote_listener string
remote_login_passwordfile string EXCLUSIVE
remote_os_authent boolean FALSE
remote_os_roles boolean FALSE
replication_dependency_tracking boolean TRUE
resource_limit boolean FALSE
resource_manager_cpu_allocation integer 4
resource_manager_plan string
result_cache_max_result integer 5
result_cache_max_size big integer 1312K
result_cache_mode string MANUAL
result_cache_remote_expiration integer 0
resumable_timeout integer 0
rollback_segments string
sec_case_sensitive_logon boolean TRUE
sec_max_failed_login_attempts integer 10
sec_protocol_error_further_action string CONTINUE
sec_protocol_error_trace_action string TRACE
sec_return_server_release_banner boolean FALSE
serial_reuse string disable
service_names string MN122010P.domain.es
session_cached_cursors integer 50
session_max_open_files integer 10
sessions integer 192
sga_max_size big integer 512M
sga_target big integer 0
shadow_core_dump string partial
shared_memory_address integer 0
shared_pool_reserved_size big integer 10066329
shared_pool_size big integer 0
shared_server_sessions integer
shared_servers integer 1
skip_unusable_indexes boolean TRUE
smtp_out_server string
sort_area_retained_size integer 0
sort_area_size integer 65536
spfile string /opt/oracle/product/db111/dbs/
spfileMN122010P.ora
sql92_security boolean FALSE
sql_trace boolean FALSE
sql_version string NATIVE
sqltune_category string DEFAULT
standby_archive_dest string ?/dbs/arch
standby_file_management string AUTO
star_transformation_enabled string FALSE
statistics_level string TYPICAL
streams_pool_size big integer 0
tape_asynch_io boolean TRUE
thread integer 0
timed_os_statistics integer 0
timed_statistics boolean TRUE
trace_enabled boolean TRUE
tracefile_identifier string
transactions integer 211
transactions_per_rollback_segment integer 5
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
use_indirect_data_buffers boolean FALSE
user_dump_dest string /opt/oracle/diag/rdbms/mn12201
0p/MN122010P/trace
utl_file_dir string
workarea_size_policy string AUTO
xml_db_events string enable
I have tested the connectivity between them and it's OK; I also recreated the password file.
[oracle@servername01 MN122010P]$ sqlplus "sys/[email protected] as sysdba"
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
HOST_NAME
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
1 MN122010P
servername01
11.1.0.7.0 09-OCT-11 OPEN NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
[oracle@servername01 MN122010P]$ sqlplus "sys/[email protected] as sysdba"
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
HOST_NAME
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
1 MN12201R
servername02
11.1.0.7.0 28-NOV-11 MOUNTED NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
Recovery Manager: Release 11.1.0.7.0 - Production on Thu Dec 1 10:16:23 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
RMAN> connect target /
connected to target database: MN122010 (DBID=2440111267)
RMAN> run{
2> ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/opt/oracle/oradata/BACKUPS_01/MN122010P/backup_%d_t%t_s%s_p%p';
3> BACKUP DATABASE PLUS ARCHIVELOG;
4> }
using target database control file instead of recovery catalog
allocated channel: d1
channel d1: SID=140 device type=DISK
Starting backup at 01-DEC-11
current log archived
channel d1: starting archived log backup set
channel d1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4117 RECID=7260 STAMP=766935608
input archived log thread=1 sequence=4118 RECID=7261 STAMP=766935619
input archived log thread=1 sequence=4119 RECID=7262 STAMP=766935630
input archived log thread=1 sequence=4120 RECID=7263 STAMP=766935635
....List of archives....
Starting backup at 01-DEC-11
channel d1: starting full datafile backup set
channel d1: specifying datafile(s) in backup set
input datafile file number=00010 name=/opt/oracle/oradata/MN122010P/TBCESPANDM_01.DBF
input datafile file number=00009 name=/opt/oracle/oradata/MN122010P/CESPAROUTING_01.DBF
input datafile file number=00007 name=/opt/oracle/oradata/MN122010P/TBCESPACALLEJERO_01.DBF
input datafile file number=00008 name=/opt/oracle/oradata/MN122010P/CESPAGEOCODER_01.DBF
input datafile file number=00001 name=/opt/oracle/oradata/MN122010P/system01.dbf
input datafile file number=00002 name=/opt/oracle/oradata/MN122010P/sysaux01.dbf
input datafile file number=00003 name=/opt/oracle/oradata/MN122010P/undotbs01.dbf
input datafile file number=00006 name=/opt/oracle/oradata/MN122010P/TBCESPAFONDO_01.DBF
input datafile file number=00005 name=/opt/oracle/oradata/MN122010P/TBCESPAPOIS_01.DBF
input datafile file number=00004 name=/opt/oracle/oradata/MN122010P/users01.dbf
channel d1: starting piece 1 at 01-DEC-11
channel d1: finished piece 1 at 01-DEC-11
piece handle=/opt/oracle/oradata/BACKUPS_01/MN122010P/backup_MN122010_t768739341_s768_p1 tag=TAG20111201T104221 comment=NONE
channel d1: backup set complete, elapsed time: 00:39:26
Finished backup at 01-DEC-11
Starting backup at 01-DEC-11
current log archived
channel d1: starting archived log backup set
channel d1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4256 RECID=7399 STAMP=768741707
channel d1: starting piece 1 at 01-DEC-11
channel d1: finished piece 1 at 01-DEC-11
piece handle=/opt/oracle/oradata/BACKUPS_01/MN122010P/backup_MN122010_t768741708_s769_p1 tag=TAG20111201T112148 comment=NONE
channel d1: backup set complete, elapsed time: 00:00:01
Finished backup at 01-DEC-11
Starting Control File and SPFILE Autobackup at 01-DEC-11
piece handle=/opt/oracle/product/db111/dbs/c-2440111267-20111201-00 comment=NONE
Finished Control File and SPFILE Autobackup at 01-DEC-11
released channel: d1
I ran ALTER DATABASE CREATE STANDBY CONTROLFILE AS ... at the Primary, then at the Standby:
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 2937555928 bytes
Fixed Size 744408 bytes
Variable Size 1862270976 bytes
Database Buffers 1073741824 bytes
Redo Buffers 798720 bytes
Then I copied the controlfile to the standby controlfile locations and started the standby:
startup standby
ALTER DATABASE MOUNT STANDBY DATABASE;
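For reference, the standby-controlfile sequence in compact form (a sketch; the '/tmp/stby.ctl' path is an assumption, use your own staging location):

```sql
-- On the primary (the /tmp path below is hypothetical):
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stby.ctl';

-- Copy /tmp/stby.ctl over every location listed in the standby's
-- CONTROL_FILES parameter, then on the standby:
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
```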
Then I restored with RMAN:
List of Archived Logs in backup set 616
Thrd Seq Low SCN Low Time Next SCN Next Time
1 4256 27049296 01-DEC-11 27052551 01-DEC-11
RMAN> run{
2> allocate channel c1 type disk format '/opt/oracle/oradata/BACKUPS_01/MN122010P/backup_%d_t%t_s%s_p%p';
3> restore database;
4> recover database until sequence 4256 thread 1;
5> sql 'alter database recover managed standby database disconnect from session';
6> release channel c1;
7> }
allocated channel: c1
channel c1: SID=164 device type=DISK
Starting restore at 01-DEC-11
Starting implicit crosscheck backup at 01-DEC-11
Crosschecked 115 objects
Finished implicit crosscheck backup at 01-DEC-11
Starting implicit crosscheck copy at 01-DEC-11
Crosschecked 24 objects
Finished implicit crosscheck copy at 01-DEC-11
searching for all files in the recovery area
cataloging files...
no files cataloged
channel c1: starting datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
channel c1: restoring datafile 00001 to /opt/oracle/oradata/MN122010P/system01.dbf
channel c1: restoring datafile 00002 to /opt/oracle/oradata/MN122010P/sysaux01.dbf
channel c1: restoring datafile 00003 to /opt/oracle/oradata/MN122010P/undotbs01.dbf
channel c1: restoring datafile 00004 to /opt/oracle/oradata/MN122010P/users01.dbf
channel c1: restoring datafile 00005 to /opt/oracle/oradata/MN122010P/TBCESPAPOIS_01.DBF
channel c1: restoring datafile 00006 to /opt/oracle/oradata/MN122010P/TBCESPAFONDO_01.DBF
channel c1: restoring datafile 00007 to /opt/oracle/oradata/MN122010P/TBCESPACALLEJERO_01.DBF
channel c1: restoring datafile 00008 to /opt/oracle/oradata/MN122010P/CESPAGEOCODER_01.DBF
channel c1: restoring datafile 00009 to /opt/oracle/oradata/MN122010P/CESPAROUTING_01.DBF
channel c1: restoring datafile 00010 to /opt/oracle/oradata/MN122010P/TBCESPANDM_01.DBF
channel c1: reading from backup piece /opt/oracle/oradata/BACKUPS_01/MN122010P/backup_MN122010_t768739341_s768_p1
After the restore I found at the standby that no archived logs had been applied:
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME,APPLIED
FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#
/ 2 3
no rows selected
SQL> select * from v$Instance;
INSTANCE_NUMBER INSTANCE_NAME
HOST_NAME
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
1 MN12201R
server02
11.1.0.7.0 01-DEC-11 MOUNTED NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
SQL> select message from v$dataguard_status;
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC3: Archival started
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
ARC1: Becoming the heartbeat ARCH
7 rows selected.
On primary
MESSAGE
ARC3: Beginning to archive thread 1 sequence 4258 (27056314-27064244)
ARC3: Completed archiving thread 1 sequence 4258 (27056314-27064244)
ARC0: Beginning to archive thread 1 sequence 4259 (27064244-27064251)
ARC0: Completed archiving thread 1 sequence 4259 (27064244-27064251)
ARC2: Beginning to archive thread 1 sequence 4260 (27064251-27064328)
ARC2: Completed archiving thread 1 sequence 4260 (27064251-27064328)
ARC3: Beginning to archive thread 1 sequence 4261 (27064328-27064654)
ARC3: Completed archiving thread 1 sequence 4261 (27064328-27064654)
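To narrow down whether the standby is receiving redo at all or just not applying it, these queries are handy (a sketch, not tied to this exact setup; run the first two on the mounted standby, the last on the primary):

```sql
-- Is a managed recovery (MRP) process running, and what is each process doing?
SELECT process, status, thread#, sequence#
  FROM v$managed_standby;

-- What has arrived vs. what has been applied on the standby
SELECT sequence#, applied, completion_time
  FROM v$archived_log
 ORDER BY sequence#;

-- On the primary: is shipment to destination 2 failing, and with what error?
SELECT dest_id, status, error
  FROM v$archive_dest
 WHERE dest_id = 2;
```

No rows at all in V$ARCHIVED_LOG on the standby usually means no redo has been received yet, which points at LOG_ARCHIVE_DEST_2, the password file, or the listener rather than at the apply process.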
I'm seeing these errors at the primary:
LNSb started with pid=20, OS id=30141
LGWR: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (16086)
LGWR: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
trace file:
*** 2011-12-02 09:52:17.164
*** SESSION ID:(183.1) 2011-12-02 09:52:17.164
*** CLIENT ID:() 2011-12-02 09:52:17.164
*** SERVICE NAME:(SYS$BACKGROUND) 2011-12-02 09:52:17.164
*** MODULE NAME:() 2011-12-02 09:52:17.164
*** ACTION NAME:() 2011-12-02 09:52:17.164
*** TRACE FILE RECREATED AFTER BEING REMOVED ***
*** 2011-12-02 09:52:17.164 6465 krsu.c
Initializing NetServer[LNSb] for dest=MN12201R.domain.es mode SYNC
LNSb is not running anymore.
New SYNC LNSb needs to be started
Waiting for subscriber count on LGWR-LNSb channel to go to zero
Subscriber count went to zero - time now is <12/02/2011 09:52:17>
Starting LNSb ...
Waiting for LNSb [pid 30141] to initialize itself
*** TRACE FILE RECREATED AFTER BEING REMOVED ***
*** 2011-12-02 09:52:17.164 6465 krsu.c
Initializing NetServer[LNSb] for dest=MN12201R.domain.es mode SYNC
LNSb is not running anymore.
New SYNC LNSb needs to be started
Waiting for subscriber count on LGWR-LNSb channel to go to zero
Subscriber count went to zero - time now is <12/02/2011 09:52:17>
Starting LNSb ...
Waiting for LNSb [pid 30141] to initialize itself
*** 2011-12-02 09:52:20.185
*** 2011-12-02 09:52:20.185 6828 krsu.c
Netserver LNSb [pid 30141] for mode SYNC has been initialized
Performing a channel reset to ignore previous responses
Successfully started LNSb [pid 30141] for dest MN12201R.domain.es mode SYNC ocis=0x2ba2cb1fece8
*** 2011-12-02 09:52:20.185 2880 krsu.c
Making upiahm request to LNSb [pid 30141]: Begin Time is <12/02/2011 09:52:17>. NET_TIMEOUT = <30> seconds
Waiting for LNSb to respond to upiahm
*** 2011-12-02 09:52:20.262 3044 krsu.c
upiahm connect done status is 0
Receiving message from LNSb
Receiving message from LNSb
LGWR: Failed
rfsp: 0x2ba2ca55c328
rfsmod: 2
rfsver: 3
rfsflag: 0x24882
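Note that the trace shows the destination being initialized in SYNC mode with a 30-second NET_TIMEOUT, so LGWR waits on the network and then abandons the destination. Since the stated goal was asynchronous shipping, one option is to switch the destination to ASYNC (a sketch; the service name and DB_UNIQUE_NAME are taken from the parameter output above, verify them against your TNS setup):

```sql
-- Switch LOG_ARCHIVE_DEST_2 to asynchronous redo transport
ALTER SYSTEM SET log_archive_dest_2='SERVICE=MN12201R.domain.es ASYNC NOAFFIRM REOPEN=300 VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=MN12201R' SCOPE=BOTH;

ALTER SYSTEM SET log_archive_dest_state_2=ENABLE SCOPE=BOTH;
```

With ASYNC the LNS process reads redo from the log buffer or online redo logs and ships it without holding up LGWR commits.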
Problem in calling a WS with complex type
Hi all...
I have to invoke a WS that has as input type a complex type defined in the wsdl...
<complexType name="LoginInfo">
- <sequence>
<element name="appCode" nillable="true" type="string" />
<element name="login" nillable="true" type="string" />
<element name="passwd" nillable="true" type="string" />
</sequence>
</complexType>
The SoapUI request looks like this:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://com.susan/SusanWS/types">
<soapenv:Header/>
<soapenv:Body>
<typ:LoginWebService>
<LoginInfo_1>
<appCode>WEB_SERVICES</appCode>
<login>root</login>
<passwd>root</passwd>
</LoginInfo_1>
</typ:LoginWebService>
</soapenv:Body>
</soapenv:Envelope>
In my Java code I'm trying to call it with:
Service service = new Service();
Call call = (Call)service.createCall();
call.setTargetEndpointAddress( new URL( wsEndpoint ) );
// call.setOperationName( wsMethod );
call.setOperationName( new QName("http://com.susan/SusanWS/types",wsMethod));
call.addParameter( "LoginInfo_1", Constants.XSD_ANYTYPE, ParameterMode.IN );
// call.addParameter( "appCode", Constants.XSD_STRING, ParameterMode.IN );
// call.addParameter( "login", Constants.XSD_STRING, ParameterMode.IN );
// call.addParameter( "passwd", Constants.XSD_STRING, ParameterMode.IN );
String[] params={appCode, login, passwd};
// call.setReturnType( Constants.XSD_INT );
// Object retval = call.invoke( new String[] {appCode, login, passwd} );
Object retval = call.invoke( new Object[] { params } );
Doing so, it doesn't work. The first problem I can see is that I don't assign a parameter name to the three strings I pass in the params array.
Does anybody have a tip on how to solve this problem?
Solved: I imported the WSDL into IntelliJ IDEA, which created all the needed classes and interfaces, and used the service locator and endpoint binding stubs.
Cannot assign value to a Variable of Complex Type beyond index 1
Hello:
I have a variable defined as a complex type as follows. I tried to assign a value to each of the two elements, but it only lets me assign to element #1.
The statement that tries to assign a value into element #2 does not work; the same copy with '[1]' for the first element works:
<copy> <---- THIS WORKS
<from expression="'John'"/>
<to variable="My_Variable"
part="My_Collection"
query="/ns9:My_Collection/ns9:Collection/ns9:Collection_Item[1]/ns9:pname"/>
</copy>
<copy> <---- THIS DOES NOT WORK
<from expression="'John'"/>
<to variable="My_Variable"
part="My_Collection"
query="/ns9:My_Collection/ns9:Collection/ns9:Collection_Item[2]/ns9:pname"/>
</copy>
Is there something wrong with my definition below that allows only element #1 to be referenced but not element #2? Am I missing some kind of initialization that is needed for both elements?
Here are my message and Complex Type definitions:
<variable name="My_Variable" messageType="ns8:args_out_msg"/>
<message name="args_out_msg">
<part name="My_Collection" element="db:My_Collection"/>
</message>
<element name="My_Collection">
<complexType>
<sequence>
<element name="Collection" type="db:Collection_Type" db:index="2" db:type="Array" minOccurs="0" nillable="true"/>
<element name="Ret" type="string" db:index="3" db:type="VARCHAR2" minOccurs="0" nillable="true"/>
</sequence>
</complexType>
</element>
<complexType name="Collection_Type">
<sequence>
<element name="Collection_Item" type="db:Collection_Type_Struct" db:type="Struct" minOccurs="0" maxOccurs="unbounded" nillable="true"/>
</sequence>
</complexType>
<complexType name="Collection_Type_Struct">
<sequence>
<element name="pname" db:type="VARCHAR2" minOccurs="0" nillable="true">
<simpleType>
<restriction base="string">
<maxLength value="25"/>
</restriction>
</simpleType>
</element>
</sequence>
</complexType>
The error msg it gives me is as followed:
[2010/09/04 00:47:59] Error in <assign> expression: <to> value is empty at line "254". The XPath expression : "" returns zero node, when applied to document shown below:
oracle.xml.parser.v2.XMLElement@1fa7874
[2010/09/04 00:47:59] "{http://schemas.xmlsoap.org/ws/2003/03/business-process/}selectionFailure" has been thrown.
-<selectionFailure xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/">
-<part name="summary">
<summary>
XPath query string returns zero node.
According to BPEL4WS spec 1.1 section 14.3, The assign activity <to> part query should not return zero node.
Please check the BPEL source at line number "254" and verify the <to> part xpath query.
</summary>
</part>
</selectionFailure>
Thanks,
Newbie
Hello:
Based on the suggestion to use 'append' instead of 'copy', I tried to define a 'singleNode' variable of type 'Collection_Type_Struct' so I can append this individual struct into my array (i.e. as the 2nd element of "/ns9:My_Collection/ns9:Collection/ns9:Collection_Item"), but I am getting an error defining this variable as:
<variable name="singleNode" element="Collection_Type_Struct"/> <--- error
Can someone tell me how I should define "singleNode" so I can put a value in it and then append it into the array:
<variable name="singleNode" element=" how to define this????"/>
<assign>
<copy>
<from expression="'Element2Value'"/>
<to variable="singleNode"
part="My_Collection"
query="/ns9:My_Collection/ns9:Collection/ns9:Collection_Item/ns9:pname"/>
</copy>
</assign>
<bpelx:assign>
<bpelx:append>
<from variable="singleNode" query="/ns9:My_Collection/ns9:Collection/ns9:Collection_Item"/>
<to variable="My_Variable"
part="My_Collection"
query="/ns9:My_Collection/ns9:Collection"/>
</bpelx:append>
</bpelx:assign>
Again here is my definition in my .xsd file:
<element name="My_Collection">
<complexType>
<sequence>
<element name="Collection" type="db:Collection_Type" db:index="2" db:type="Array" minOccurs="0" nillable="true"/>
<element name="Ret" type="string" db:index="3" db:type="VARCHAR2" minOccurs="0" nillable="true"/>
</sequence>
</complexType>
</element>
<complexType name="Collection_Type">
<sequence>
<element name="Collection_Item" type="db:Collection_Type_Struct" db:type="Struct" minOccurs="0" maxOccurs="unbounded" nillable="true"/>
</sequence>
</complexType>
<complexType name="Collection_Type_Struct">
<sequence>
<element name="pname" db:type="VARCHAR2" minOccurs="0" nillable="true">
<simpleType>
<restriction base="string">
<maxLength value="25"/>
</restriction>
</simpleType>
</element>
</sequence>
</complexType>
Thanks for any help!
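Since Collection_Item is declared with a type (Collection_Type_Struct) rather than as a global element, the variable cannot use the element= form. Declaring it by type should get past the definition error (a sketch; it assumes the db namespace prefix from your .xsd is declared in the BPEL file):

```xml
<variable name="singleNode" type="db:Collection_Type_Struct"/>
```

The element= attribute only accepts globally declared elements. You may also need to initialize the variable (for example with a literal in a <from>) before copying into it, since an uninitialized complex variable raises the same selectionFailure.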
Hi gurus,
I had prepared two complex reports separately, with the same selection screen, internal tables and declarations. Now I have to combine both reports into one single report. Based on <b>one field (PROCESS_TYPE)</b> of the selection criteria (S_PR_TYP), I have to display two outputs: one for SHC and another for CONF. But the logic and the header display for the two outputs are different. Please let me know where I should write the logic and how it should be built.
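One common layout for this (a sketch only; the FORM names are hypothetical and the table/parameter names are taken from the listing below) is to keep the shared declarations and SELECTs, then branch once on the selection value and put each report's logic and header in its own FORM:

```abap
START-OF-SELECTION.
  PERFORM get_data.              " shared SELECTs filling t_str_sc1 / t_str_sc2

  READ TABLE s_pr_typ INDEX 1.   " NO-EXTENSION, so a single row holds the value
  CASE s_pr_typ-low.
    WHEN 'SHC'.
      PERFORM process_shc.       " item logic of the first report
      PERFORM header_shc.        " SHC-specific header
      PERFORM display_shc.
    WHEN 'CONF'.
      PERFORM process_conf.      " item logic of the second report
      PERFORM header_conf.       " CONF-specific header
      PERFORM display_conf.
    WHEN OTHERS.
      MESSAGE 'Unsupported process type' TYPE 'I'.
  ENDCASE.
```

This keeps the selection screen and data retrieval written once while the two output paths stay independent.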
the code is as follows :
<u><b>The code which is common for both the reports:</b></u>
$$********************************************************************
$$ TABLES DECLERATION
$$********************************************************************
TABLES: crmd_orderadm_h,
crmd_orderadm_i,
bbp_pdigp.
$$********************************************************************
$$ TYPE-POOLS
$$********************************************************************
TYPE-POOLS: slis, list.
$$********************************************************************
$$ GLOBAL TYPES
$$********************************************************************
TYPES: BEGIN OF y_str1,
CLIENT TYPE CRMD_ORDERADM_H-CLIENT,
guid TYPE crmd_orderadm_h-guid,
object_id TYPE crmd_orderadm_h-object_id,
object_type TYPE crmd_orderadm_h-object_type,
process_type TYPE crmd_orderadm_h-process_type,
created_at TYPE crmd_orderadm_h-created_at,
changed_at TYPE crmd_orderadm_h-changed_at,
archiving_flag TYPE crmd_orderadm_h-archiving_flag,
deliv_date TYPE bbp_pdigp-deliv_date,
final_entry TYPE bbp_pdigp-final_entry,
del_ind TYPE bbp_pdigp-del_ind,
END OF y_str1.
TYPES: BEGIN OF y_str2,
guid1 TYPE crmd_orderadm_h-guid,
object_id TYPE crmd_orderadm_h-object_id,
object_type TYPE crmd_orderadm_h-object_type,
process_type TYPE crmd_orderadm_h-process_type,
created_at TYPE crmd_orderadm_h-created_at,
changed_at TYPE crmd_orderadm_h-changed_at,
archiving_flag TYPE crmd_orderadm_h-archiving_flag,
guid2 TYPE crmd_orderadm_i-guid,
header TYPE crmd_orderadm_i-header,
guid3 TYPE bbp_pdigp-guid,
deliv_date TYPE bbp_pdigp-deliv_date,
final_entry TYPE bbp_pdigp-final_entry,
del_ind TYPE bbp_pdigp-del_ind,
END OF y_str2.
$$********************************************************************
$$ GLOBAL CONSTANTS
$$********************************************************************
CONSTANTS: C_BLANK_F(1) TYPE C VALUE 'X',
C_DEL_IND_F(1) TYPE C VALUE 'X',
C_ARCHIVING_FLAG(1) TYPE C VALUE 'X',
C_FINAL_ENTRY_F(1) TYPE C VALUE 'X',
C_FINAL_ENTRY_SPACE(1) TYPE C VALUE ' ',
C_CBA_SPACE(1) TYPE C VALUE ' ',
C_DEL_SPACE(1) TYPE C VALUE ' '.
$$********************************************************************
$$ Global Elementary Variables
$$********************************************************************
DATA: w_ld_lines TYPE i,
w_ld_linesc(10) TYPE c,
w_del_ind TYPE c,
w_final_entry TYPE c,
w_COUNT_cba TYPE I VALUE 0,
w_count_f TYPE I VALUE 0,
W_BLANK_F TYPE C,
W_FINAL_ENTRY_F TYPE C,
W_DEL_COUNT TYPE I VALUE 0,
W_PER_CBA1 TYPE P decimals 3,
W_PER_CBA TYPE P decimals 2,
W_PER_E_LINE TYPE I,
W_N TYPE I.
$$********************************************************************
$$ GLOBAL INTERNAL TABLES (custom structure)
$$********************************************************************
DATA: t_str_sc1 TYPE STANDARD TABLE OF y_str1 INITIAL SIZE 1.
DATA: t_str_sc2 TYPE STANDARD TABLE OF y_str2 INITIAL SIZE 1.
DATA: t_header TYPE slis_t_listheader,
w_header TYPE slis_listheader,
e_line LIKE w_header-info.
DATA: v_index LIKE SY-TABIX.
v_index = '1'.
$$********************************************************************
$$ GLOBAL FIELD-SYMBOLS
$$********************************************************************
FIELD-SYMBOLS: <FS_STR1> TYPE Y_STR1,
<FS_STR2> TYPE Y_STR2.
$$********************************************************************
$$ PARAMETERS & SELECT-OPTIONS
$$********************************************************************
SELECTION-SCREEN: BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
SELECT-OPTIONS: s_scno FOR crmd_orderadm_h-object_id,
s_pr_typ FOR crmd_orderadm_h-process_type NO INTERVALS NO DATABASE SELECTION NO-EXTENSION DEFAULT 'SHC',
s_change FOR crmd_orderadm_h-changed_at.
SELECTION-SCREEN END OF BLOCK b1.
$$********************************************************************
$$ START-OF-SELECTION
$$********************************************************************
START-OF-SELECTION.
REFRESH t_str_sc1.
SELECT client
guid
object_id
object_type
process_type
created_at
changed_at
archiving_flag
FROM crmd_orderadm_h INTO TABLE t_str_sc1
WHERE object_id IN s_scno AND changed_at IN s_change AND process_type IN s_pr_typ.
IF sy-subrc <> 0.
MESSAGE I002.
ENDIF.
LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
REFRESH t_str_sc2.
SELECT a~guid
a~object_id
a~object_type
a~process_type
a~created_at
a~changed_at
a~archiving_flag
b~guid
b~header
c~guid
c~deliv_date
c~final_entry
c~del_ind
INTO TABLE t_str_sc2
FROM crmd_orderadm_h AS a INNER JOIN crmd_orderadm_i AS b
ON a~guid EQ b~header INNER JOIN bbp_pdigp AS c
ON b~guid EQ c~guid
WHERE a~guid eq <FS_STR1>-guid.
<u><b>THE LOGIC FOR FIRST REPORT:</b></u>
*"logic for displaying Delivery date at Header level
SORT T_STR_SC2 BY DELIV_DATE.
DESCRIBE TABLE T_STR_SC2 LINES W_N.
READ TABLE T_STR_SC2 WITH KEY DELIV_DATE = T_STR_SC2-DELIV_DATE INTO <FS_STR2>-deliv_date.
READ TABLE T_STR_SC2 INDEX v_index ASSIGNING <FS_STR2>.
IF SY-SUBRC = 0.
<FS_STR1>-deliv_date = <FS_STR2>-deliv_date.
MODIFY T_STR_SC1 FROM <FS_STR1> TRANSPORTING DELIV_DATE.
ENDIF.
*"Setting up the flags for the entire items in CRMD_ORDERADM_H as per the scenario
LOOP AT T_STR_SC2 ASSIGNING <FS_STR2> WHERE HEADER EQ <FS_STR1>-GUID.
IF <FS_STR2>-DEL_IND NE 'X'.
IF <FS_STR2>-FINAL_ENTRY NE 'X'.
W_BLANK_f = C_BLANK_F.
ELSE.
W_FINAL_ENTRY_F = C_FINAL_ENTRY_F.
ENDIF.
ENDIF.
ENDLOOP.
*"Logic started at item level
LOOP AT T_STR_SC2 ASSIGNING <FS_STR2> WHERE HEADER EQ <FS_STR1>-GUID.
IF W_BLANK_F NE 'X'.
IF W_FINAL_ENTRY_F NE 'X'.
*" Displaying the status for Del 'X' , Final_entry ' ', Archive_flag 'X'.
<FS_STR1>-DEL_IND = C_DEL_IND_F.
W_DEL_COUNT = W_DEL_COUNT + 1.
<FS_STR1>-FINAL_ENTRY = C_FINAL_ENTRY_SPACE.
<FS_STR1>-ARCHIVING_FLAG = C_ARCHIVING_FLAG.
w_COUNT_cba = w_COUNT_cba + 1.
MODIFY T_STR_SC1 FROM <FS_STR1> TRANSPORTING DEL_IND FINAL_ENTRY ARCHIVING_FLAG.
ELSE.
*" Displaying the status for Del ' ' , Final_entry 'X', Archive_flag 'X'.
<FS_STR1>-FINAL_ENTRY = C_FINAL_ENTRY_F.
w_count_f = w_count_f + 1.
<FS_STR1>-DEL_IND = C_DEL_SPACE.
<FS_STR1>-ARCHIVING_FLAG = C_ARCHIVING_FLAG.
w_COUNT_cba = w_COUNT_cba + 1.
MODIFY T_STR_SC1 FROM <FS_STR1> TRANSPORTING FINAL_ENTRY DEL_IND ARCHIVING_FLAG.
ENDIF.
ELSE.
*" Displaying the status for Del ' ' , Final_entry ' ', Archive_flag ' '.
<FS_STR1>-DEL_IND = C_DEL_SPACE.
<FS_STR1>-FINAL_ENTRY = C_FINAL_ENTRY_SPACE.
<FS_STR1>-ARCHIVING_FLAG = C_CBA_SPACE.
MODIFY T_STR_SC1 FROM <FS_STR1> TRANSPORTING DEL_IND FINAL_ENTRY ARCHIVING_FLAG .
ENDIF.
ENDLOOP. "end of t_str_sc2
if <FS_STR1>-DEL_IND eq C_DEL_IND_F.
W_DEL_COUNT = W_DEL_COUNT + 1.
endif.
if <FS_STR1>-FINAL_ENTRY eq C_FINAL_ENTRY_F.
w_count_f = w_count_f + 1.
endif.
if <FS_STR1>-ARCHIVING_FLAG eq C_ARCHIVING_FLAG.
w_COUNT_cba = w_COUNT_cba + 1.
endif.
CLEAR: W_BLANK_F , W_FINAL_ENTRY_F.
*"Logic ended at item level
ENDLOOP. "end of t_str_sc1
*" when Transaction type is SHC
IF <FS_STR1>-process_type EQ 'SHC'.
DESCRIBE TABLE t_str_sc1 LINES w_ld_lines.
w_ld_linesc = w_ld_lines.
CONCATENATE ' TOTAL NO OF RECORDS SELECTED:' w_ld_linesc INTO e_line SEPARATED BY space.
*" Percentage of Archived SC's
W_PER_E_LINE = w_ld_lines.
W_PER_CBA1 = W_COUNT_CBA / W_PER_E_LINE.
W_PER_CBA = W_PER_CBA1 * 100.
*" Displaying the total no of records fetched for DB
FORMAT COLOR 7.
WRITE:/9 e_line .
WRITE:/10 'TOTAL NO OF FINAL ENTRIES SELECTED:', w_count_f.
WRITE:/10 'TOTAL NO OF DELETE ENTRIES SELECTED:', W_DEL_COUNT.
WRITE:/10 'TOTAL NO OF ENTRIES SELECTED FOR ARCHIVING:',w_COUNT_cba.
SKIP.
WRITE:/10 'PERCENTAGE OF CAN BE ARCHIVED:',W_PER_CBA,'%'.
FORMAT COLOR 3.
SKIP.
WRITE:/30 '#### SC HAVING FINAL ENTRY INDICATOR FOR ALL ITEM IN SRM #####'.
FORMAT COLOR OFF.
WRITE:/30(63) SY-ULINE.
ULINE.
*" Displaying Headings for the Report
NEW-LINE SCROLLING.
WRITE:/3 'Transaction No', 18 sy-vline,
19 'Transaction Type', 36 sy-vline,
37 'Business Trans.Cat', 56 sy-vline,
57 'Created On', 68 sy-vline,
69(10) 'Changed On', 84 sy-vline,
85 'Delivery date', 99 sy-vline,
100 'Final Entry Ind', 115 sy-vline,
116 'Deletion Ind', 129 sy-vline,
130 'Can be Archived', 146 sy-vline.
SET LEFT SCROLL-BOUNDARY COLUMN 19.
ULINE.
$$********************************************************************
$$ DISPLAY DATA AT HEADER LEVEL FOR SHC
$$********************************************************************
*" Sort the SC in Sequence
SORT t_str_sc1 BY object_id.
IF SY-SUBRC = 0.
*" Displaying the Report at Header level
LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
IF NOT <FS_STR1>-archiving_flag IS INITIAL.
FORMAT COLOR 7.
ELSE.
FORMAT COLOR 3.
ENDIF.
WRITE:/3 <FS_STR1>-object_id, 18 sy-vline,
19 <FS_STR1>-process_type, 36 sy-vline,
37 <FS_STR1>-object_type, 56 sy-vline,
57 <FS_STR1>-created_at, 68 sy-vline,
69(10) <FS_STR1>-changed_at, 84 sy-vline,
85 <FS_STR1>-deliv_date, 99 sy-vline,
100 <FS_STR1>-final_entry, 115 sy-vline,
116 <FS_STR1>-del_ind, 129 sy-vline,
130 <FS_STR1>-archiving_flag, 146 sy-vline.
ENDLOOP. "end of t_str_sc1 displaying at header level
ENDIF. "End of SY-SUBRC
*ENDCASE.
ENDIF. "End of displaying Transaction type as SHC
*" when Transaction type is CONF
IF <FS_STR1>-process_type EQ 'CONF'.
DESCRIBE TABLE t_str_sc1 LINES w_ld_lines.
w_ld_linesc = w_ld_lines.
CONCATENATE ' TOTAL NO OF RECORDS SELECTED:' w_ld_linesc INTO e_line SEPARATED BY space.
*" Percentage of Archived SC's
W_PER_E_LINE = w_ld_lines.
W_PER_CBA1 = W_COUNT_CBA / W_PER_E_LINE.
W_PER_CBA = W_PER_CBA1 * 100.
*" Displaying Headings for the Report
*" Displaying the total no of records fetched for DB
FORMAT COLOR 7.
WRITE:/9 e_line .
WRITE:/10 'TOTAL NO OF FINAL ENTRIES SELECTED:', w_count_f.
WRITE:/10 'TOTAL NO OF DELETE ENTRIES SELECTED:', W_DEL_COUNT.
WRITE:/10 'TOTAL NO OF ENTRIES SELECTED FOR ARCHIVING:',w_COUNT_cba.
SKIP.
WRITE:/10 'PERCENTAGE OF CAN BE ARCHIVED:',W_PER_CBA,'%'.
FORMAT COLOR 3.
SKIP.
WRITE:/30 '#### SC HAVING FINAL ENTRY INDICATOR FOR ALL ITEM IN SRM #####'.
FORMAT COLOR OFF.
WRITE:/30(63) SY-ULINE.
ULINE.
NEW-LINE SCROLLING.
WRITE:/3 'Transaction No', 18 sy-vline,
19 'Transaction Type', 36 sy-vline,
37 'Business Trans.Cat', 56 sy-vline,
57 'Created On', 68 sy-vline,
69(10) 'Changed On', 84 sy-vline,
85 'Delivery date', 99 sy-vline,
100 'Final Entry Ind', 115 sy-vline,
100 'Deletion Ind', 112 sy-vline,
113 'Can be Archived', 129 sy-vline.
SET LEFT SCROLL-BOUNDARY COLUMN 19.
ULINE.
*$$********************************************************************
*$$ DISPLAY DATA AT HEADER LEVEL
*$$********************************************************************
*" Sort the SC in Sequence
SORT t_str_sc1 BY object_id.
IF SY-SUBRC = 0.
*" Displaying the Report at Header level
LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
IF NOT <FS_STR1>-archiving_flag IS INITIAL.
FORMAT COLOR 7.
ELSE.
FORMAT COLOR 3.
ENDIF.
WRITE:/3 <FS_STR1>-object_id, 18 sy-vline,
19 <FS_STR1>-process_type, 36 sy-vline,
37 <FS_STR1>-object_type, 56 sy-vline,
57 <FS_STR1>-created_at, 68 sy-vline,
69(10) <FS_STR1>-changed_at, 84 sy-vline,
85 <FS_STR1>-deliv_date, 99 sy-vline,
100 <FS_STR1>-final_entry, 115 sy-vline,
100 <FS_STR1>-del_ind, 112 sy-vline,
113 <FS_STR1>-archiving_flag, 129 sy-vline.
ENDLOOP. "end of t_str_sc1 displaying
ENDIF. "End of SY-SUBRC
ENDIF. "End of displaying Transaction type as CONF
<b><u>
THE LOGIC FOR THE SECOND REPORT</u></b>
LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
REFRESH t_str_sc2.
SELECT a~guid
a~object_id
a~object_type
a~process_type
a~created_at
a~changed_at
a~archiving_flag
b~guid
b~header
c~guid
c~deliv_date
c~final_entry
c~del_ind
INTO TABLE t_str_sc2
FROM crmd_orderadm_h AS a INNER JOIN crmd_orderadm_i AS b
ON a~guid EQ b~header INNER JOIN bbp_pdigp AS c
ON b~guid EQ c~guid
WHERE a~guid eq <FS_STR1>-guid.
IF NOT t_str_sc2[] is INITIAL.
LOOP AT T_STR_SC2 ASSIGNING <FS_STR2>.
IF <FS_STR2>-DEL_IND NE C_DEL_SPACE. " if x
<FS_STR2>-DEL_IND = C_DEL_IND_F.
<FS_STR2>-ARCHIVING_FLAG = C_ARCHIVING_FLAG.
MODIFY T_STR_SC2 FROM <FS_STR2> .
ELSE. "if ' '
EXIT.
ENDIF.
ENDLOOP. "End loop of t_str_sc2
MOVE <FS_STR2>-DEL_IND TO <FS_STR1>-DEL_IND.
MOVE <FS_STR2>-ARCHIVING_FLAG TO <FS_STR1>-ARCHIVING_FLAG.
MODIFY T_STR_SC1 FROM <FS_STR1>.
ELSE. " For sy-subrc
<FS_STR1>-REMARKS = c_itnf.
MODIFY T_STR_SC1 FROM <FS_STR1>.
ENDIF. " End of sy-subrc
IF <FS_STR1>-DEL_IND eq C_DEL_IND_F.
W_DEL_COUNT = W_DEL_COUNT + 1.
ENDIF.
IF <FS_STR1>-ARCHIVING_FLAG eq C_ARCHIVING_FLAG.
w_COUNT_cba = w_COUNT_cba + 1.
ENDIF.
ENDLOOP. "End loop of t_str_sc1
********************************" when Transaction type is CONF
*******************************IF <FS_STR1>-process_type EQ 'CONF'.
DESCRIBE TABLE t_str_sc1 LINES w_ld_lines.
w_ld_linesc = w_ld_lines.
CONCATENATE ' TOTAL NO OF RECORDS SELECTED:' w_ld_linesc INTO e_line SEPARATED BY space.
*" Percentage of Archived SC's
W_PER_E_LINE = w_ld_lines.
W_PER_CBA1 = W_COUNT_CBA / W_PER_E_LINE.
W_PER_CBA = W_PER_CBA1 * 100.
*" Displaying Headings for the Report
*" Displaying the total no of records fetched for DB
FORMAT COLOR 7.
WRITE:/9 e_line .
WRITE:/10 'TOTAL NO OF FINAL ENTRIES SELECTED:', w_count_f.
WRITE:/10 'TOTAL NO OF DELETE ENTRIES SELECTED:', W_DEL_COUNT.
WRITE:/10 'TOTAL NO OF ENTRIES SELECTED FOR ARCHIVING:',w_COUNT_cba.
SKIP.
WRITE:/10 'PERCENTAGE OF CAN BE ARCHIVED:',W_PER_CBA,'%'.
FORMAT COLOR 3.
SKIP.
WRITE:/30 '#### SC HAVING FINAL ENTRY INDICATOR FOR ALL ITEM IN SRM #####'.
FORMAT COLOR OFF.
WRITE:/30(63) SY-ULINE.
ULINE.
NEW-LINE SCROLLING.
WRITE:/3 'Transaction No', 18 sy-vline,
19 'Transaction Type', 36 sy-vline,
37 'Business Trans.Cat', 56 sy-vline,
57 'Created On', 68 sy-vline,
69(10) 'Changed On', 84 sy-vline,
100 'Deletion Ind', 112 sy-vline,
113 'Can be Archived', 129 sy-vline,
130 'Remarks', 150 sy-vline.
SET LEFT SCROLL-BOUNDARY COLUMN 19.
ULINE.
*$$********************************************************************
*$$ DISPLAY DATA AT HEADER LEVEL
*$$********************************************************************
*" Sort the SC in Sequence
SORT t_str_sc1 BY object_id.
IF SY-SUBRC = 0.
*" Displaying the Report at Header level
LOOP AT t_str_sc1 ASSIGNING <FS_STR1>.
IF NOT <FS_STR1>-archiving_flag IS INITIAL.
FORMAT COLOR 7.
ELSE.
FORMAT COLOR 3.
ENDIF.
WRITE:/3 <FS_STR1>-object_id, 18 sy-vline,
19 <FS_STR1>-process_type, 36 sy-vline,
37 <FS_STR1>-object_type, 56 sy-vline,
57 <FS_STR1>-created_at, 68 sy-vline,
69(10) <FS_STR1>-changed_at, 84 sy-vline,
100 <FS_STR1>-del_ind, 112 sy-vline,
113 <FS_STR1>-archiving_flag, 129 sy-vline,
130 <FS_STR1>-REMARKS, 150 sy-vline.
ENDLOOP. "end of t_str_sc1 displaying
ENDIF. "End of SY-SUBRC
**********************ENDIF. "End of displaying Transaction type as CONF
Very difficult to give you a solution without access to the actual data and tables, and some basic relationship model explaining the entities of the tables.
But one thing I found that makes dealing with complex queries a lot easier (easier to code, to read, and to maintain) is the WITH clause. It allows the kind of modularisation of code that we use in other languages.
The basic syntax is:

WITH <alias1> AS (
  SELECT ...
),
<alias2> AS (
  SELECT ...
),
<aliasn> AS (
  SELECT ...
)
SELECT ...
FROM <alias1>, ..., <aliasn>

This allows you to create distinct query sets once, and then re-use these again in joins, selects, and even other sub-sets.
The resulting SQL is a lot less stressful on the eye and makes the whole "processing logic" of getting to the results much easier to analyse, follow and understand. -
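As a minimal illustration of the idea, here is a sketch using SQLite through Python's standard sqlite3 module (the table, column, and alias names below are invented for the example, not taken from the thread):

```python
import sqlite3

# In-memory database with a toy orders table (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("EMEA", 100), ("EMEA", 300), ("APAC", 200)])

# Two named query sets are defined once with WITH, then re-used and
# joined in the final SELECT -- the modularisation described above.
rows = conn.execute("""
    WITH region_totals AS (
        SELECT region, SUM(amount) AS total FROM orders GROUP BY region
    ),
    grand_total AS (
        SELECT SUM(amount) AS total FROM orders
    )
    SELECT r.region, r.total, g.total
    FROM region_totals r, grand_total g
    ORDER BY r.region
""").fetchall()

print(rows)  # each region's total alongside the grand total
```

Each named block can be read and tested in isolation, which is what makes the final SELECT so much easier to follow than one deeply nested query.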
Loading complex report data into a direct update DSO using APD
Dear All,
Recently, I had a requirement to download report data into a direct update DSO using an APD. I was able to do this easily when the report was simple, i.e. it had only a few rows and columns, but I faced problems when the report was a complex one. Summing up, I would like to know how to handle each of the following cases:
1. How should I decide the key fields and data fields of the direct update DSO ? Is it that the elements in ROWS will go to the
key fields of DSO and the remaining to the data fields? Correct me.
2. What if the report contains the Restricted KFs and Calculated KFs? Do I have to create separate infoobjects in the BI
system and then include these in the DSO data fields to accommodate the extracted data ?
3. How do I handle the Free Characteristics and Filters ?
4. Moreover, I observed that if the report contains selection screen variables, then I need to create variants in the report and
use that variant in the APD. So, if I have 10 sets of users executing the same report with different selection conditions, then
shall I need to create 10 different variants and pass those into 10 different APDs, all created for the same report ?
I would appreciate if someone can answer my questions clearly.
Regards,
D. Srinivas Rao
Hi,
PFB the answers.
1. How should I decide the key fields and data fields of the direct update DSO ? Is it that the elements in ROWS will go to the
key fields of DSO and the remaining to the data fields? Correct me.
--- Yes, you can use the elements in the ROWS as the key fields, but if you get two records with the same values in the ROWS elements, the data load will fail. So you basically need at least one key value that is different for each record.
2. What if the report contains the Restricted KFs and Calculated KFs? Do I have to create separate infoobjects in the BI
system and then include these in the DSO data fields to accommodate the extracted data ?
--- Yes, you would need to create new InfoObjects for the CKFs and RKFs in the report and include them in your DSO.
3. How do I handle the Free Characteristics and Filters ?
--- The default filters work in the same way as when you execute the report yourself. But you cannot use the free characteristics in the APD; only the ROWS and COLUMNS elements that are in the default layout can be used.
4. Moreover, I observed that if the report contains selection screen variables, then I need to create variants in the report and
use that variant in the APD. So, if I have 10 sets of users executing the same report with different selection conditions, then
shall I need to create 10 different variants and pass those into 10 different APDs, all created for the same report ?
--- Yes, you would need to create 10 different APDs. Creating them is very simple, since you can copy an APD, but it would certainly be a maintenance issue: you would have to maintain 10 APDs.
Please revert in case of any further queries. -
Please help! Illustrator CS6 started trying to open all of my complex.ai files with a "Text Import Options" box as if they were text files, and they are not opening! Help!
Hi Monika,
I have spent the last two or three days trying to do what you suggested. I uninstalled Adobe CS6 from Windows. Some files that CS6 placed on my system during installation remained, including fonts and .dll files.
I had to abandon the Cleaner Tool you suggested because one screen allowed me to specify removing CS6 only, but the following screen only gave an option to remove ALL Adobe programs. I could not do that because I didn't have the serial number handy for CS3 in case I want to reinstall it at some point.
I tried to get technical help with the Cleaner Tool problem but no definitive help was available, so I reinstalled CS6 again without having the benefit of the Cleaner Tool. I tried to get the serial number for CS3 so I could use the Cleaner Tool but spent 2 wasted hours in chat. Even though I had a customer number, order number, order date, place of purchase, the email address used AND 16 digits of the serial number, in two hours the agent couldn't give me the serial number. After two hours I had nothing but instructions to wait another 20 minutes for a case number.
Illustrator CS6 is still trying to open some backups as Text and other...
None of the problems have been fixed. I have tried to open/use the .ai files in CS6 installed on another system and am getting the same result, so I don't think the software was damaged by the cleaner. The hard drive cleaner is well-known and I've run it many times without any problem to previous versions of Illustrator or any other programs.
When I ordered, the sales rep promised good technical support and gave me an 800 number, but after I paid the $2000, I learned that the 800 number she gave me doesn't support CS6 and hangs up on me. Adobe doesn't call it a current product even though they just sold it to me about 3 weeks ago.
Would appreciate any help you experts can offer. If I can't solve this, the last backup I can use was from June and I will have lost HUNDREDS of hours of work and assets that I cannot replace.
Exhausted and still desperately in need of help... -
Problem with some characters in complex objects
Hi all,
I've built a webservice which returns a complex object with several fields inside. All fields are public and accessable via getter and setter methods.
The problem is that some of these fields contain numbers or underscores in their names.
For example:
public int field_a;
or
public String house3of4;
When I try to import this web service as a model in a Web Dynpro project, it doesn't work until I remove these characters.
Is this a known problem or is there any solution for it?
Thanks
Thomas
NLS_LANG in registry is "ARABIC_UNITED ARAB EMIRATES.AR8MSWIN1256"
I use Oracle Forms 10g for development
and Oracle 9i for the database.
When I build a form on the client side and enter text with Farsi characters, then run the form, all characters display correctly in Farsi except four (گ چ ژ پ). -
Trying to subtract a complex path from another, can't get the result I need
I'd really appreciate some help on this. I'm not familiar with illustrator and am stuck trying to accomplish what I hope is an easy task. I've uploaded my .ai here just in case anyone can help.
I have two layers, one is a collection of lines/paths which make up a line drawing of a tree, then the layer above that is a bunch of paths using a 'distressed' brush. I've read several tutorials on using the pathfinder tool to subtract one from the other but every time I'm left with way more of the tree deleted than I want. I tried a simple test case (subtracting a single distressed path from a rectangle shape) and it worked as expected. I don't know why this one isn't working other than maybe it's just too complex.
On the left (temporary red background) is what my layers look like; on the right is the effect I'm trying to achieve (the white parts removed from the black parts). I need this to end up being a vector so it can get cut out of a vinyl decal.
Thanks a ton for any help, I'm really stuck on this as I spent 99% of my time in Photoshop.
That worked perfectly, thank you. I selected everything on the tree layer and made a compound path, then did the same on the texture layer. Then the subtract worked great.
Just so I understand what's going on here: Make Compound Path means taking all the selected objects and combining them into one single shape/path, versus having a bunch of separate paths on the same layer?
Really appreciate your help, thanks again.