Hi, newbie here, planning the following x64 hardware .. is this sane?
Hi All,
I am a newbie, and from what I read in the Sun Cluster hardware guide, I have decided to get the following hardware (within what I can afford :) ):
2 AMD64 servers, each with the following configuration:
1 AMD64 X2 processor
2 GB RAM
3 SATA hard drives ( internal )
( NVIDIA Gigabit Ethernet, nge0 )
( Intel Gigabit Ethernet, e1000g0 )
( Intel quad Ethernet, 4 ports of 100 Mbps each )
And for storage:
one more old AMD64 machine with the following configuration:
it has 3 Ethernet cards and 4 SATA hard drives ( 160 GB each ).
What I want to do is as follows:
Disk 1 Disk 2
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
======== ========
Disk 3 Disk 4
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
| 28 Gig | + | 28 Gig | -----> RAID 0 ( 56 Gig )
======== ========
| RAID 0 ( 56 Gig ) | + | RAID 0 ( 56 Gig ) | ----------> RAID 1 56 Gig Mirror ( md1 )
| RAID 0 ( 56 Gig ) | + | RAID 0 ( 56 Gig ) | ----------> RAID 1 56 Gig Mirror ( md2 )
| RAID 0 ( 56 Gig ) | + | RAID 0 ( 56 Gig ) | ----------> RAID 1 56 Gig Mirror ( md3 )
| RAID 0 ( 56 Gig ) | + | RAID 0 ( 56 Gig ) | ----------> RAID 1 56 Gig Mirror ( md4 )
| RAID 0 ( 56 Gig ) | + | RAID 0 ( 56 Gig ) | ----------> RAID 1 56 Gig Mirror ( md5 )
================ ================
And to export all of md1, md2, md3, md4 and md5 as shared hard drives, through iSCSI.
So both servers can be iSCSI initiators and import the 5 disk drives. I know performance will take a hit, but for learning this should be enough.
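As a sanity check on the arithmetic in the diagram above, here is a small Python sketch (illustrative only; it just mirrors the capacity arithmetic of the planned layout, not actual SVM/mdadm behaviour):

```python
# Planned layout: 4 disks, each split into 5 slices of 28 GB.
# Each md device stripes one slice from disk 1 with one from disk 2 (RAID 0),
# does the same for disks 3 and 4, then mirrors the two stripes (RAID 1).

SLICE_GB = 28
SLICES_PER_DISK = 5

def raid0_capacity(slices):
    """A stripe's capacity is the sum of its member slices."""
    return sum(slices)

def raid1_capacity(mirrors):
    """A mirror's capacity is that of its smallest member."""
    return min(mirrors)

md_devices = []
for _ in range(SLICES_PER_DISK):
    stripe_a = raid0_capacity([SLICE_GB, SLICE_GB])  # disk 1 + disk 2
    stripe_b = raid0_capacity([SLICE_GB, SLICE_GB])  # disk 3 + disk 4
    md_devices.append(raid1_capacity([stripe_a, stripe_b]))

print(md_devices)       # capacities of md1..md5
print(sum(md_devices))  # total usable space exported over iSCSI
```

So the four 160 GB drives yield five 56 GB mirrored devices, 280 GB usable in total (the remainder of each disk is left over for the OS and metadata).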
Now, please tell me if this setup has any major pitfalls that would stop it from working; I don't want to buy and set all this up only to realize later that something in it is not supported.
Also, please advise whether two unmanaged Ethernet switches are enough (I am confused here), and tell me how to complete this configuration.
Please help me, cluster gurus; I await all valuable inputs and guidance on this. Also note that I am a sysadmin, but I have no knowledge of Sun clustering and very little knowledge of clustering in general.
-- Chandan
Hey Jakub, but is VMware a workable solution?
I mean, if I am able to do a two-node full-fledged cluster and try out some data service agents (for example with Apache, Oracle RAC, etc.), then that will do.
By the way, do you have any idea when iSCSI will be supported by Sun Cluster? I mean, have you seen any roadmap for this, or anyone saying it will be supported shortly, or anything like that?
Because if it is just a matter of days or a couple of months, I can wait, since then I can really have a performing cluster to test out all my fancies.
Nice speaking to you; thanks for all your info on this. I thought I would hardly get any valid inputs. Hope to hear from you soon.
-- Chandan
Similar Messages
-
CBO generating different plans for the same data in similar environments
Hi All
I have been trying to compare an SQL statement across 2 different but similar environments built to the same hardware specs. The issue I am facing is that in environment A the query executes in less than 2 minutes, with a plan mostly showing full table scans and hash joins, whereas in environment B (the problematic one) it times out after 2 hours with an "unable to extend tablespace" error. The statistics are up to date in both environments for both tables and indexes. System parameters are exactly the same (Oracle defaults, except for db_file_multiblock_read_count).
Both environments have the same db parameters: db_file_multiblock_read_count (16), optimizer (see below), hash_area_size (131072), pga_aggregate_target (1G), db_block_size (8192), etc. SREADTIM, MREADTIM, CPUSPEED and MBRC are all null in aux_stats$ in both environments because workload statistics were never collected, I believe.
Attached are details about the SQL with table stats, the SQL itself, and index stats. My main concern is the CBO generating different plans for similar data and statistics on the same hardware and software specs. Is there anything else I should consider? I generally see environment B being very slow, and its plans always tend toward nested loops and index scans, whereas what we really need is a sensible FTS in many cases. One very surprising thing is METER_CONFIG_HEADER below, which has just 80 blocks of data yet is being accessed via an index scan.
show parameter optimizer
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
**Environment**
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Note: There are slight differences in the number of records in the attached sheet. However, I want to mention that I have tested with exactly the same data and got similar results, but I couldn't retain the data until collecting the details in the attachment.
TEST ITEM 1 COMPARE TABLE LEVEL STATS used by CBO
ENVIRONMENT A
TABLE_NAME NUM_ROWS BLOCKS LAST_ANALYZED
ASSET 3607425 167760 5/02/2013 22:11
METER_CONFIG_HEADER 3658 80 5/01/2013 0:07
METER_CONFIG_ITEM 32310 496 5/01/2013 0:07
NMI 1899024 33557 18/02/2013 10:55
REGISTER 4830153 101504 18/02/2013 9:57
SDP_LOGICAL_ASSET 1607456 19137 18/02/2013 15:48
SDP_LOGICAL_REGISTER 5110781 78691 18/02/2013 9:56
SERVICE_DELIVERY_POINT 1425890 42468 18/02/2013 13:54
ENVIRONMENT B
TABLE_NAME NUM_ROWS BLOCKS LAST_ANALYZED
ASSET 4133939 198570 16/02/2013 10:02
METER_CONFIG_HEADER 3779 80 16/02/2013 10:55
METER_CONFIG_ITEM 33720 510 16/02/2013 10:55
NMI 1969000 33113 16/02/2013 10:58
REGISTER 5837874 120104 16/02/2013 11:05
SDP_LOGICAL_ASSET 1788152 22325 16/02/2013 11:06
SDP_LOGICAL_REGISTER 6101934 91088 16/02/2013 11:07
SERVICE_DELIVERY_POINT 1447589 43804 16/02/2013 11:11
TEST ITEM 2 COMPARE INDEX STATS used by CBO
ENVIRONMENT A
TABLE_NAME INDEX_NAME UNIQUENESS BLEVEL LEAF_BLOCKS DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR NUM_ROWS
ASSET IDX_AST_DEVICE_CATEGORY_SK NONUNIQUE 2 9878 67 147 12982 869801 3553095
ASSET IDX_A_SAPINTLOGDEV_SK NONUNIQUE 2 7291 2747 2 639 1755977 3597916
ASSET SYS_C00102592 UNIQUE 2 12488 3733831 1 1 3726639 3733831
METER_CONFIG_HEADER SYS_C0092052 UNIQUE 1 12 3670 1 1 3590 3670
METER_CONFIG_ITEM SYS_C0092074 UNIQUE 1 104 32310 1 1 32132 32310
NMI IDX_NMI_ID NONUNIQUE 2 6298 844853 1 2 1964769 1965029
NMI IDX_NMI_ID_NK NONUNIQUE 2 6701 1923072 1 1 1922831 1923084
NMI IDX_NMI_STATS NONUNIQUE 1 106 4 26 52 211 211
REGISTER REG_EFFECTIVE_DTM NONUNIQUE 2 12498 795 15 2899 2304831 4711808
REGISTER SYS_C00102653 UNIQUE 2 16942 5065660 1 1 5056855 5065660
SDP_LOGICAL_ASSET IDX_SLA_SAPINTLOGDEV_SK NONUNIQUE 2 3667 1607968 1 1 1607689 1607982
SDP_LOGICAL_ASSET IDX_SLA_SDP_SK NONUNIQUE 2 3811 668727 1 2 1606204 1607982
SDP_LOGICAL_ASSET SYS_C00102665 UNIQUE 2 5116 1529606 1 1 1528136 1529606
SDP_LOGICAL_REGISTER SYS_C00102677 UNIQUE 2 17370 5193638 1 1 5193623 5193638
SERVICE_DELIVERY_POINT IDX_SDP_NMI_SK NONUNIQUE 2 4406 676523 1 2 1423247 1425890
SERVICE_DELIVERY_POINT IDX_SDP_SAP_INT_NMI_SK NONUNIQUE 2 7374 676523 1 2 1458238 1461108
SERVICE_DELIVERY_POINT SYS_C00102687 UNIQUE 2 4737 1416207 1 1 1415022 1416207
ENVIRONMENT B
TABLE_NAME INDEX_NAME UNIQUENESS BLEVEL LEAF_BLOCKS DISTINCT_KEYS AVG_LEAF_BLOCKS_PER_KEY AVG_DATA_BLOCKS_PER_KEY CLUSTERING_FACTOR NUM_ROWS
ASSET IDX_AST_DEVICE_CATEGORY_SK NONUNIQUE 2 8606 121 71 16428 1987833 4162257
ASSET IDX_A_SAPINTLOGDEV_SK NONUNIQUE 2 8432 1780146 1 1 2048170 4162257
ASSET SYS_C00116157 UNIQUE 2 13597 4162263 1 1 4158759 4162263
METER_CONFIG_HEADER SYS_C00116570 UNIQUE 1 12 3779 1 1 3734 3779
METER_CONFIG_ITEM SYS_C00116592 UNIQUE 1 107 33720 1 1 33459 33720
NMI IDX_NMI_ID NONUNIQUE 2 6319 683370 1 2 1970460 1971313
NMI IDX_NMI_ID_NK NONUNIQUE 2 6597 1971293 1 1 1970771 1971313
NMI IDX_NMI_STATS NONUNIQUE 1 98 48 2 4 196 196
REGISTER REG_EFFECTIVE_DTM NONUNIQUE 2 15615 1273 12 2109 2685924 5886582
REGISTER SYS_C00116748 UNIQUE 2 19533 5886582 1 1 5845565 5886582
SDP_LOGICAL_ASSET IDX_SLA_SAPINTLOGDEV_SK NONUNIQUE 2 4111 1795084 1 1 1758441 1795130
SDP_LOGICAL_ASSET IDX_SLA_SDP_SK NONUNIQUE 2 4003 674249 1 2 1787987 1795130
SDP_LOGICAL_ASSET SYS_C004520 UNIQUE 2 5864 1795130 1 1 1782147 1795130
SDP_LOGICAL_REGISTER SYS_C004539 UNIQUE 2 20413 6152850 1 1 6073059 6152850
SERVICE_DELIVERY_POINT IDX_SDP_NMI_SK NONUNIQUE 2 3227 660649 1 2 1422572 1447803
SERVICE_DELIVERY_POINT IDX_SDP_SAP_INT_NMI_SK NONUNIQUE 2 6399 646257 1 2 1346948 1349993
SERVICE_DELIVERY_POINT SYS_C00128706 UNIQUE 2 4643 1447946 1 1 1442796 1447946
TEST ITEM 3 COMPARE PLANS
ENVIRONMENT A
Plan hash value: 4109575732
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 13 | 2067 | | 135K (2)| 00:27:05 |
| 1 | HASH UNIQUE | | 13 | 2067 | | 135K (2)| 00:27:05 |
|* 2 | HASH JOIN | | 13 | 2067 | | 135K (2)| 00:27:05 |
|* 3 | HASH JOIN | | 6 | 900 | | 135K (2)| 00:27:04 |
|* 4 | HASH JOIN ANTI | | 1 | 137 | | 135K (2)| 00:27:03 |
|* 5 | TABLE ACCESS BY INDEX ROWID| NMI | 1 | 22 | | 5 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 131 | | 95137 (2)| 00:19:02 |
|* 7 | HASH JOIN | | 1 | 109 | | 95132 (2)| 00:19:02 |
|* 8 | TABLE ACCESS FULL | ASSET | 36074 | 1021K| | 38553 (2)| 00:07:43 |
|* 9 | HASH JOIN | | 90361 | 7059K| 4040K| 56578 (2)| 00:11:19 |
|* 10 | HASH JOIN | | 52977 | 3414K| 2248K| 50654 (2)| 00:10:08 |
|* 11 | HASH JOIN | | 39674 | 1782K| | 40101 (2)| 00:08:02 |
|* 12 | TABLE ACCESS FULL | REGISTER | 39439 | 1232K| | 22584 (2)| 00:04:32 |
|* 13 | TABLE ACCESS FULL | SDP_LOGICAL_REGISTER | 4206K| 56M| | 17490 (2)| 00:03:30 |
|* 14 | TABLE ACCESS FULL | SERVICE_DELIVERY_POINT | 675K| 12M| | 9412 (2)| 00:01:53 |
|* 15 | TABLE ACCESS FULL | SDP_LOGICAL_ASSET | 1178K| 15M| | 4262 (2)| 00:00:52 |
|* 16 | INDEX RANGE SCAN | IDX_NMI_ID_NK | 2 | | | 2 (0)| 00:00:01 |
| 17 | VIEW | | 39674 | 232K| | 40101 (2)| 00:08:02 |
|* 18 | HASH JOIN | | 39674 | 1046K| | 40101 (2)| 00:08:02 |
|* 19 | TABLE ACCESS FULL | REGISTER | 39439 | 500K| | 22584 (2)| 00:04:32 |
|* 20 | TABLE ACCESS FULL | SDP_LOGICAL_REGISTER | 4206K| 56M| | 17490 (2)| 00:03:30 |
|* 21 | TABLE ACCESS FULL | METER_CONFIG_HEADER | 3658 | 47554 | | 19 (0)| 00:00:01 |
|* 22 | TABLE ACCESS FULL | METER_CONFIG_ITEM | 7590 | 68310 | | 112 (2)| 00:00:02 |
Predicate Information (identified by operation id):
2 - access("METER_CONFIG_HEADER_SK"="METER_CONFIG_HEADER_SK")
3 - access("NETWORK_TARIFF_CD"="NETWORK_TARIFF_CD")
4 - access("SERVICE_DELIVERY_POINT_SK"="TMP"."SERVICE_DELIVERY_POINT_SK")
5 - filter("ROW_CURRENT_IND"='Y' AND ("NMI_STATUS_CD"='A' OR "NMI_STATUS_CD"='D'))
7 - access("ASSET_CD"="EQUIP_CD" AND "SAP_INT_LOG_DEVICE_SK"="SAP_INT_LOG_DEVICE_SK")
8 - filter("ROW_CURRENT_IND"='Y')
9 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK")
10 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK")
11 - access("SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")
12 - filter("REGISTER_TYPE_CD"='C' AND (SUBSTR("REGISTER_ID_CD",1,1)='4' OR
SUBSTR("REGISTER_ID_CD",1,1)='5' OR SUBSTR("REGISTER_ID_CD",1,1)='6') AND "ROW_CURRENT_IND"='Y')
13 - filter("ROW_CURRENT_IND"='Y')
14 - filter("ROW_CURRENT_IND"='Y')
15 - filter("ROW_CURRENT_IND"='Y')
16 - access("NMI_SK"="NMI_SK")
18 - access("SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")
19 - filter("REGISTER_TYPE_CD"='C' AND (SUBSTR("REGISTER_ID_CD",1,1)='1' OR
SUBSTR("REGISTER_ID_CD",1,1)='2' OR SUBSTR("REGISTER_ID_CD",1,1)='3') AND "ROW_CURRENT_IND"='Y')
20 - filter("ROW_CURRENT_IND"='Y')
21 - filter("ROW_CURRENT_IND"='Y')
22 - filter("ROW_CURRENT_IND"='Y' AND "CONROL_REGISTER"='X')
ENVIRONMENT B
Plan hash value: 2826260434
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 181 | 103K (2)| 00:20:47 |
| 1 | HASH UNIQUE | | 1 | 181 | 103K (2)| 00:20:47 |
|* 2 | HASH JOIN ANTI | | 1 | 181 | 103K (2)| 00:20:47 |
|* 3 | HASH JOIN | | 1 | 176 | 56855 (2)| 00:11:23 |
|* 4 | HASH JOIN | | 1 | 163 | 36577 (2)| 00:07:19 |
|* 5 | TABLE ACCESS BY INDEX ROWID | ASSET | 1 | 44 | 4 (0)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 131 | 9834 (2)| 00:01:59 |
| 7 | NESTED LOOPS | | 1 | 87 | 9830 (2)| 00:01:58 |
| 8 | NESTED LOOPS | | 1 | 74 | 9825 (2)| 00:01:58 |
|* 9 | HASH JOIN | | 1 | 52 | 9820 (2)| 00:01:58 |
|* 10 | TABLE ACCESS BY INDEX ROWID| METER_CONFIG_HEADER | 1 | 14 | 1 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 33 | 116 (2)| 00:00:02 |
|* 12 | TABLE ACCESS FULL | METER_CONFIG_ITEM | 1 | 19 | 115 (2)| 00:00:02 |
|* 13 | INDEX RANGE SCAN | SYS_C00116570 | 1 | | 1 (0)| 00:00:01 |
|* 14 | TABLE ACCESS FULL | SERVICE_DELIVERY_POINT | 723K| 13M| 9699 (2)| 00:01:57 |
|* 15 | TABLE ACCESS BY INDEX ROWID | NMI | 1 | 22 | 5 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | IDX_NMI_ID_NK | 2 | | 2 (0)| 00:00:01 |
|* 17 | TABLE ACCESS BY INDEX ROWID | SDP_LOGICAL_ASSET | 1 | 13 | 5 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | IDX_SLA_SDP_SK | 2 | | 2 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | IDX_A_SAPINTLOGDEV_SK | 2 | | 2 (0)| 00:00:01 |
|* 20 | TABLE ACCESS FULL | REGISTER | 76113 | 2378K| 26743 (2)| 00:05:21 |
|* 21 | TABLE ACCESS FULL | SDP_LOGICAL_REGISTER | 5095K| 63M| 20245 (2)| 00:04:03 |
| 22 | VIEW | | 90889 | 443K| 47021 (2)| 00:09:25 |
|* 23 | HASH JOIN | | 90889 | 2307K| 47021 (2)| 00:09:25 |
|* 24 | TABLE ACCESS FULL | REGISTER | 76113 | 966K| 26743 (2)| 00:05:21 |
|* 25 | TABLE ACCESS FULL | SDP_LOGICAL_REGISTER | 5095K| 63M| 20245 (2)| 00:04:03 |
Predicate Information (identified by operation id):
2 - access("SERVICE_DELIVERY_POINT_SK"="TMP"."SERVICE_DELIVERY_POINT_SK")
3 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK" AND
"SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")
4 - access("ASSET_CD"="EQUIP_CD")
5 - filter("ROW_CURRENT_IND"='Y')
9 - access("NETWORK_TARIFF_CD"="NETWORK_TARIFF_CD")
10 - filter("ROW_CURRENT_IND"='Y')
12 - filter("ROW_CURRENT_IND"='Y' AND "CONROL_REGISTER"='X')
13 - access("METER_CONFIG_HEADER_SK"="METER_CONFIG_HEADER_SK")
14 - filter("ROW_CURRENT_IND"='Y')
15 - filter("ROW_CURRENT_IND"='Y' AND ("NMI_STATUS_CD"='A' OR "NMI_STATUS_CD"='D'))
16 - access("NMI_SK"="NMI_SK")
17 - filter("ROW_CURRENT_IND"='Y')
18 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK")
19 - access("SAP_INT_LOG_DEVICE_SK"="SAP_INT_LOG_DEVICE_SK")
20 - filter((SUBSTR("REGISTER_ID_CD",1,1)='4' OR SUBSTR("REGISTER_ID_CD",1,1)='5' OR
SUBSTR("REGISTER_ID_CD",1,1)='6') AND "REGISTER_TYPE_CD"='C' AND "ROW_CURRENT_IND"='Y')
21 - filter("ROW_CURRENT_IND"='Y')
23 - access("SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")
24 - filter((SUBSTR("REGISTER_ID_CD",1,1)='1' OR SUBSTR("REGISTER_ID_CD",1,1)='2' OR
SUBSTR("REGISTER_ID_CD",1,1)='3') AND "REGISTER_TYPE_CD"='C' AND "ROW_CURRENT_IND"='Y')
25 - filter("ROW_CURRENT_IND"='Y')
Hi Paul,
I misread your question initially. The system stats are outdated in both (same result as seen from aux_stats$). I am not a DBA and do not have access to gather fresh system stats.
select * from sys.aux_stats$
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS NULL COMPLETED
SYSSTATS_INFO DSTART NULL 02-16-2011 15:24
SYSSTATS_INFO DSTOP NULL 02-16-2011 15:24
SYSSTATS_INFO FLAGS 1 NULL
SYSSTATS_MAIN CPUSPEEDNW 1321.20523 NULL
SYSSTATS_MAIN IOSEEKTIM 10 NULL
SYSSTATS_MAIN IOTFRSPEED 4096 NULL
SYSSTATS_MAIN SREADTIM NULL NULL
SYSSTATS_MAIN MREADTIM NULL NULL
SYSSTATS_MAIN CPUSPEED NULL NULL
SYSSTATS_MAIN MBRC NULL NULL
SYSSTATS_MAIN MAXTHR NULL NULL
SYSSTATS_MAIN SLAVETHR NULL NULL -
Function modules for the following
Hi,
I want to know the function modules for the following purposes:
1) Check whether a date is valid or not.
2) Calculate the number of days between two dates.
Expecting an early response.
Thanks n Regards,
Amit
Hi,
PARAMETERS: p_list LIKE t009b-bumon AS LISTBOX
VISIBLE LENGTH 11 OBLIGATORY ,
p_list1 LIKE t009b-bdatj OBLIGATORY.
SELECTION-SCREEN POSITION POS_HIGH.
PARAMETERS: p_list2 LIKE t009b-bumon AS LISTBOX
VISIBLE LENGTH 11 OBLIGATORY,
p_list3 LIKE t009b-bdatj OBLIGATORY.
* Calling a function module to calculate the number of days in the
* selected period
CALL FUNCTION 'NUMBER_OF_DAYS_PER_MONTH_GET'
EXPORTING
par_month = p_list
par_year = p_list1
IMPORTING
par_days = ws_n_days.
CALL FUNCTION 'NUMBER_OF_DAYS_PER_MONTH_GET'
EXPORTING
par_month = p_list2
par_year = p_list3
IMPORTING
par_days = ws_n_days1.
* Concatenating the month and year into the date format
CONCATENATE p_list1 p_list ws_i INTO ws_c_date1.
CONCATENATE p_list3 p_list2 ws_n_days1 INTO ws_c_date2.
date = ws_c_date1 - ws_c_date2.
In the above sample code, the selection screen has month and year as input.
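For reference, the two requested checks (date validity and number of days between two dates) can also be sketched outside ABAP. A minimal Python illustration of the underlying logic (illustrative only, not the SAP function modules themselves):

```python
from datetime import date

def is_valid_date(year: int, month: int, day: int) -> bool:
    """Check whether year/month/day form a real calendar date."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

def days_between(d1: date, d2: date) -> int:
    """Number of days between two dates (always non-negative)."""
    return abs((d2 - d1).days)

print(is_valid_date(2009, 2, 29))                        # 2009 is not a leap year
print(days_between(date(2009, 1, 1), date(2009, 2, 1)))  # 31
```

The ABAP equivalents do the same work: the select-options date check rejects impossible dates, and SD_DATETIME_DIFFERENCE returns the day difference.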
Also check this link
http://www.sapdevelopment.co.uk/tips/date/datehome.htm
Check FM
<b>RP_CALC_DATE_IN_INTERVAL</b> Add/subtract years/months/days from a date
<b>SD_DATETIME_DIFFERENCE</b> Give the difference in Days and Time for 2 dates
<b>Also, for checking a valid date:</b>
You can specify the date field as
Select-options: s_date like likp-date (similar to your requirement).
This itself checks for a valid date; no separate validation is needed.
Thanks & Regards,
Judith. -
I'm stuck here trying to figure this error out.
2003 domain, three Server 2012 Hyper-V Core nodes. (I have two of these Hyper-V clusters; hvclust2012 is the problem cluster, hvclust2008 is okay.)
In Failover Cluster Manager I see these errors, "Cluster network name resource 'Cluster Name' failed registration of one or more associated DNS name(s) for the following reason: The handle is invalid."
I restarted the host node that was listed as having the error; then another node starts showing the errors.
I tried to follow this site: http://blog.subvertallmedia.com/2012/12/06/repairing-a-failover-cluster-in-windows-server-2012-live-migration-fails-dns-cluster-name-errors/
Then this error shows up when doing the repair: "There was an error repairing the Active Directory object for 'Cluster Name'."
I looked at our domain controller and noticed I don't have access to Local Users and Groups. I can access it for our other cluster, hvclust2008 (both clusters are the same 2012 version).
<image here>
I came upon this thread: http://social.technet.microsoft.com/Forums/en-US/85fc2ad5-b0c0-41f0-900e-df1db8625445/windows-2012-cluster-resource-name-fails-dns-registration-evt-1196?forum=winserverClustering
Now I'm stuck on adding a managed service account (MSA). I'm not sure if I'm way off track trying to fix this. Any advice? Thanks in advance!
<image here>
Thanks Elton,
I restarted the 3 hosts after applying the hotfix. Then I did the steps below and got stuck on step 5; that is when I get the error (image above): "There was an error repairing the Active Directory object for 'Cluster Name'. For more data, see 'Information Details'."
To reset the password on the affected name resource, perform the following steps:
1. From Failover Cluster Manager, locate the name resource.
2. Right-click the resource, and click Properties.
3. On the Policies tab, select "If resource fails, do not restart", and then click OK.
4. Right-click the resource, click More Actions, and then click Simulate Failure.
5. When the name resource shows "Failed", right-click the resource, click More Actions, and then click Repair.
6. After the name resource is online, right-click the resource, and then click Properties.
7. On the Policies tab, select "If resource fails, attempt restart on current node", and then click OK.
Thanks -
After I make minor changes to my collections, the following message appears: "Sorry, this page is currently being edited. Please try again later." It sometimes takes hours before I can edit the site again, even if the only thing I'm changing is a description.
Greetings;
Take a look at this related discussion; others have been experiencing the same issue. I hope this discussion will help you resolve yours. All the best...
https://discussions.apple.com/message/17781516#17781516
Syd Rodocker
iTunes U Administrator
Tennessee State Department of Education
Tennessee's Electronic Learning Center -
14th May 2013.
We placed an order on medIT (Medit Information Technology) for the following items:
6 units of Apple iMac all-in-one: Core i5 2.9 GHz, 8 GB RAM, 1 x 1 TB HDD, GeForce GTX 660M, Gigabit LAN, WLAN 802.11 a/b/g/n, Bluetooth 4.0, OS X 10.8 Mountain Lion, 27-inch wide LED monitor, English keyboard.
We paid the full invoice value to their account at Bank of Cyprus Public Company Limited.
As of today, 14th May 2013, we are still waiting for delivery of the goods, despite numerous advices that a partial order of 3 items has been shipped.
We have made numerous follow-up calls to medIT, directly and through other customers of medIT, but we have no clue as to when we will receive the goods. On 29th April 2013 they gave us a FEDEX air waybill number. We regret that we have not been able to trace this air waybill on the FEDEX system.
We are, therefore, pleading for assistance from any support group to link us with the right department so that we can have accurate information on the movement of this parcel.
These items were ordered for a government department of the Republic of Zambia, and this non-delivery is creating a bad name for Apple resellers covering the Zambian market. The project upgrade is now falling behind due to the non-delivery on our part, arising from the non-performance on the part of medIT.
We can make available for inspection copies of the order and remittance details to the appropriate office for review and ready reference.
We can be contacted directly on “James A. Ngoma”
<Personal Information Edited by Host>
If you ordered from MedIT in Cyprus, they do not appear to be an authorized Apple reseller. If they are indeed not authorized by Apple, there will be nothing Apple will be able to do to assist you in this matter. You will need to work the problem out with MedIT, obtaining legal assistance in Cyprus if necessary.
Regards. -
Variable substitution: need to define the payload for the following structure
Hi All,
Please help me define the payload for the following structure for variable substitution,
for generating the file name dynamically from the payload.
Target structure is like this
MT_RFQ_IND_IDOC_MYSPACE_TARGET............> my message type
<HEADER>
< FileName>
<INDI>
<RFQNO>
<DOCUTYPE>
< ITEM>
<FEILD1>
<FEILD2>
<FEILD2>
Please help me.
Thanking you,
Sridhar
Hi,
Should this var1 be given in any data type of my IR? No.
In variable substitution, in Value, give the complete expression I mentioned above in italics, i.e. payload:MT_RFQ_IND_IDOC_FILE_TARGET,1,hEADER,1,FileName,1
Which user did you use for the CPACache refresh? It can only be done by XIDIRUSER.
Regards,
Rajeev Gupta
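To make the addressing rule concrete: the payload:... value walks the document by element name and 1-based occurrence. A rough Python sketch of that lookup against a hypothetical sample payload (an illustration of the addressing scheme only, not the actual adapter code):

```python
import xml.etree.ElementTree as ET

# Sample payload shaped like the target structure from the question.
doc = """<MT_RFQ_IND_IDOC_MYSPACE_TARGET>
  <HEADER>
    <FileName>rfq_4711.xml</FileName>
  </HEADER>
</MT_RFQ_IND_IDOC_MYSPACE_TARGET>"""

def resolve(payload: str, spec: str) -> str:
    """Resolve e.g. 'payload:ROOT,1,HEADER,1,FileName,1' against an XML string."""
    parts = spec.split(":", 1)[1].split(",")
    names = parts[0::2]                       # element names along the path
    occurrences = [int(n) for n in parts[1::2]]
    node = ET.fromstring(payload)             # first name/occurrence is the root
    for name, occ in zip(names[1:], occurrences[1:]):
        node = node.findall(name)[occ - 1]    # occurrence numbers are 1-based
    return node.text

print(resolve(doc, "payload:MT_RFQ_IND_IDOC_MYSPACE_TARGET,1,HEADER,1,FileName,1"))
```

The element names and occurrence numbers must match the runtime payload exactly, which is why a mismatch between the spec string and the actual message type makes the substitution fail.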
-
Need a solution for the following sync error: "iTunes could not sync calendars to the iPad 'iPad name' because an error occurred while sending data from the iPad"
I want to add that I deleted all the old backups and created a new backup without any issues, except for the sync problem.
-
Multiple execution plans for the same SQL statement
Dear experts,
awrsqrpt.sql is showing multiple execution plans for a single SQL statement. How is it possible that one SQL statement has multiple execution plans within the same AWR report?
Below is the awrsqrpt output for your reference.
WORKLOAD REPOSITORY SQL Report
Snapshot Period Summary
DB Name DB Id Instance Inst Num Release RAC Host
TESTDB 2157605839 TESTDB1 1 10.2.0.3.0 YES testhost1
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 32541 11-Oct-08 21:00:13 248 141.1
End Snap: 32542 11-Oct-08 21:15:06 245 143.4
Elapsed: 14.88 (mins)
DB Time: 12.18 (mins)
SQL Summary DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
Elapsed
SQL Id Time (ms)
51szt7b736bmg 25,131
Module: SQL*Plus
UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL,0) + NVL(ACCT_DR_BAL,
0)) FROM ACCT WHERE ACCT_TRN_DT = (:B1 ) AND TEST_ACC_NB = ACCT_ACC_NB(+)) WHERE
TEST_BATCH_DT = (:B1 )
SQL ID: 51szt7b736bmg DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
-> 1st Capture and Last Capture Snap IDs
refer to Snapshot IDs within the snapshot range
-> UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL,0) + NVL(AC...
Plan Hash Total Elapsed 1st Capture Last Capture
# Value Time(ms) Executions Snap ID Snap ID
1 2960830398 25,131 1 32542 32542
2 3834848140 0 0 32542 32542
Plan 1(PHV: 2960830398)
Plan Statistics DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 25,131 25,130.7 3.4
CPU Time (ms) 23,270 23,270.2 3.9
Executions 1 N/A N/A
Buffer Gets 2,626,166 2,626,166.0 14.6
Disk Reads 305 305.0 0.3
Parse Calls 1 1.0 0.0
Rows 371,735 371,735.0 N/A
User I/O Wait Time (ms) 564 N/A N/A
Cluster Wait Time (ms) 0 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 0 N/A N/A
Invalidations 0 N/A N/A
Version Count 2 N/A N/A
Sharable Mem(KB) 26 N/A N/A
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | | | 1110 (100)| |
| 1 | UPDATE | TEST | | | | |
| 2 | TABLE ACCESS FULL | TEST | 116K| 2740K| 1110 (2)| 00:00:14 |
| 3 | TABLE ACCESS BY INDEX ROWID| ACCT | 1 | 26 | 5 (0)| 00:00:01 |
| 4 | INDEX RANGE SCAN | ACCT_DT_ACC_IDX | 1 | | 4 (0)| 00:00:01 |
Plan 2(PHV: 3834848140)
Plan Statistics DB/Inst: TESTDB/TESTDB1 Snaps: 32541-32542
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 0 N/A 0.0
CPU Time (ms) 0 N/A 0.0
Executions 0 N/A N/A
Buffer Gets 0 N/A 0.0
Disk Reads 0 N/A 0.0
Parse Calls 0 N/A 0.0
Rows 0 N/A N/A
User I/O Wait Time (ms) 0 N/A N/A
Cluster Wait Time (ms) 0 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 0 N/A N/A
Invalidations 0 N/A N/A
Version Count 2 N/A N/A
Sharable Mem(KB) 26 N/A N/A
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | | | 2 (100)| |
| 1 | UPDATE | TEST | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| TEST | 1 | 28 | 2 (0)| 00:00:01 |
| 3 | INDEX RANGE SCAN | TEST_DT_IND | 1 | | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| ACCT | 1 | 26 | 4 (0)| 00:00:01 |
| 5 | INDEX RANGE SCAN | INDX_ACCT_DT | 1 | | 3 (0)| 00:00:01 |
Full SQL Text
SQL ID SQL Text
51szt7b736bm UPDATE TEST SET TEST_TRN_DAY_CL = (SELECT (NVL(ACCT_CR_BAL, 0) +
NVL(ACCT_DR_BAL, 0)) FROM ACCT WHERE ACCT_TRN_DT = (:B1 ) AND PB
RN_ACC_NB = ACCT_ACC_NB(+)) WHERE TEST_BATCH_DT = (:B1 )
Your input is highly appreciated.
Thanks for taking your time in answering my question.
Regards
Oracle Lover3 wrote:
Dear experts,
awrsqrpt.sql is showing multiple execution plans for a single SQL statement. How is it possible that one SQL statement has multiple execution plans within the same AWR report?
If you're using bind variables and you have histograms on your columns (which can be created by default in 10g due to the "SIZE AUTO" default "method_opt" parameter of the DBMS_STATS gather procedures), it is quite normal to get different execution plans for the same SQL statement. Depending on the values passed when the statement is hard parsed (this feature is called "bind variable peeking" and has been enabled by default since 9i), an execution plan is determined and reused for all further executions of the same "shared" SQL statement.
If your statement now ages out of the shared pool, or is invalidated by some DDL or statistics-gathering activity, it will be re-parsed, and again the values passed at that particular moment will determine the execution plan. If you have a skewed data distribution and a histogram in place that reflects that skewness, you might get different execution plans depending on the actual values used.
Since this "flip-flop" behaviour can sometimes be counter-productive (if you're unlucky, the values used to hard parse the statement lead to a plan that is unsuitable for the majority of values used afterwards), 11g introduced "adaptive" cursor sharing, which attempts to detect such a situation and can automatically re-evaluate the execution plan of the statement.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Is it necessary to create a Billing Plan for the Service Contracts
Hi All,
Is it necessary to create a billing plan for service contracts?
My requirement.
In CRM 7.0, we created a service contract. For the item in that service contract there is a Billing Plan tab.
In that tab we have the dates for Period, Billing Date etc.
Now our requirement is to get the Billing Date quarterly. For this we created date rules and date profiles and assigned these date profiles to header and item level transactions.
And we are able to see the date rules under the drop down of the billing date.
I copied the standard date rule BILL004 to ZBILL004.
But when we changed to our date rule, the billing date did not change to quarterly.
Can you please let me know what configurations need to be done for this?
Does a billing plan have to be created for this kind of scenario?
Thanks in advance
Thanks and Regards,
Raghu
Hi,
On the OK button's action you can destroy the window and navigate to Page1.
Go through this link for more details and a step-by-step guide to creating pop-ups and dialog boxes:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/903fed0d-7be4-2a10-cd96-9136707374e1?quicklink=index&overridelayout=true
Hope it works
Regards
Suresh -
Can anyone tell me the field names and table names for the following scenarios?
Hi All,
Can anyone tell me the field name and the respective table name for each of the following scenarios?
1. A report to list all the materials for which invoicing is done and delivery is pending.
2. A report for order tracking.
3. A report which gives PO (purchase order) pricing details.
4. A report which calculates the material cost. It will select all the material issued from stock for the entered service order number.
Thanks & Regards,
P.Gowrishanker
I believe that OmniVision Technologies has the contract for the 3GS.
-
Could not save the document to the repository for the following reason
Hi,
Some of the scheduled reports are failing because of the following error:
Could not save the document to the repository for the following reason: [repo_proxy 30] InfoStore::ObjectExport::commit - (Helpers::InfoStore::ObjectExport::commit) File Repository Server error : Failed to put the file to file server: frs://Output/a_242/027/084/5512178/aduu8vtfkntmvyv8lsnntas.wid already exists.(hr=#0x80042a4a)(Error #-2147210678 (WIS 30567)
I have searched for this error but found no information on it.
So please let me know why this problem occurs and how to resolve it. I am new to this forum; if anything here is wrong, please let me know.
Thanks in advance.
Hi Shwetabh Suman,
Thanks a lot for the immediate response.
Please go through the following
From the logs, I see a difference in the execution steps, as below:
[2014-03-31 10:24:15,364] [TID:13] [INFO ] [child.RunStatusTimerTask.run():17]: calling checkRunStatus
[2014-03-31 10:24:31,887] [TID:3168] [INFO ] [child.ChildImpl.request():23]: request: [GetLoad]
[2014-03-31 10:24:31,888] [TID:3168] [INFO ] [child.ChildImpl.request():25]: response: [1]
[2014-03-31 10:24:37,879] [TID:3153] [ERROR] [webi.PublishingSubsystem.run():93]: Throwable exception caught:
com.businessobjects.rebean.wi.ServerException: Could not save the document to the repository for the following reason: [repo_proxy 30] InfoStore::ObjectExport::commit
- (Helpers::InfoStore::ObjectExport::commit) File Repository Server error : Failed to put the file to file server:
frs://Output/a_165/024/084/5511333/aqew_jeqdcjcrukoadjdtk0.wid already exists.(hr=#0x80042a4a)(Error #-2147210678 (WIS 30567)
Can you please help us understand why child.RunStatusTimerTask.run() is called in the failure case? Comparing the execution steps, I see that this method is called and the failure occurs after it.
Also, a few other findings:
1. The error occurs randomly from the Adaptive Job Servers on both nodes.
2. The IFRS and OFRS on one of the nodes are not being used at all for any file transfers (though the configurations are exactly the same).
Please advise and help us get rid of the said error.
Can you give us a permanent solution? -
Does anyone have a cure for the following error running on Windows: "The procedure entry point sqlite3_wal_checkpoint could not be located in the dynamic link library SQLite.dll."
I have searched my computer for the file SQLite3.dll and also for QTCF.dll, and I cannot find either one anywhere. I cannot fix this! I deleted iTunes, and every time I try to download it again, it goes through the whole download until the end, when it says the iTunes.exe entry point was not found and that procedure entry point message comes up. HELP! This is driving me crazy. How can I get iTunes to work again when I can't find the danged .dll file to remove, move or rename!?
-
Provide the java code for the following scenario.
Hi Experts,
I have tried all the combinations for this scenario. As I understand it, I need Java code for the following scenario so that it becomes easy.
I require a Message mapping for this Logic.
In the Source there are 4 fields, and on the Target side the fields should appear like this.
Source Structure- File
Record
|-> Header
Order_No
Date
|-> Item
Mat_No
Quantity
Target Structure-IDoc
IDoc
|-> Header
|-> Segment
Delivery_Order_No
Recv_Date
|-> Item
|-> Segment
Delivery_Order_No
Material_Num
Recv_Quantity.
The logic: for every order number an IDoc is generated, and if the material number matches, the quantities should be added. An important note is that the material numbers are different for every order number; that is, if a material number is 2 in order number A, then that material number can never be 2 in any other order number. Here is an example for the above scenario.
For example:-
we have
Source Structure- File
Order-no Date Mat_No Quantity
1 01/02/2011 A 10
1 01/02/2011 B 15
1 01/02/2011 A 10
2 01/02/2011 C 10
2 01/02/2011 C 10
3 01/02/2011 D 20
3 01/02/2011 D 10
3 01/02/2011 E 25
Target Structure-IDoc
Delivery_Order_No Recv_Date Material_Num Recv_Quantity
1 01/02/2011 A 20
1 01/02/2011 B 15
2 01/02/2011 C 20
3 01/02/2011 D 30
3 01/02/2011 E 25
So for this example a total of 5 IDocs are created. For Order_No 1 with Mat_No A, the quantities are added and one IDoc is generated with four fields: 2 in the Header (Delivery_Order_No, Recv_Date) and 2 in the Item (Material_Num, Recv_Quantity). Similarly, for Order_No 1 with Mat_No B a separate IDoc is generated with the same four fields. For Order_No 2 with Mat_No C, an IDoc is generated with the quantities added. Likewise the process continues up to Order_No 3. Kindly do the needful.
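The aggregation described above (sum Quantity per unique Order_No and Mat_No pair, one IDoc per pair) can be sketched in plain Java. This is a minimal standalone illustration, not PI mapping code; the class name and record layout are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class QuantityAggregator {
    // Sum quantities per (Order_No, Mat_No) key, preserving input order.
    // Each record is {Order_No, Date, Mat_No, Quantity}.
    public static Map<String, Integer> aggregate(String[][] records) {
        Map<String, Integer> totals = new LinkedHashMap<>();
        for (String[] rec : records) {
            String key = rec[0] + ";" + rec[2];      // Order_No;Mat_No
            int qty = Integer.parseInt(rec[3]);      // Quantity
            totals.merge(key, qty, Integer::sum);
        }
        return totals;
    }

    public static void main(String[] args) {
        String[][] source = {
            {"1", "01/02/2011", "A", "10"},
            {"1", "01/02/2011", "B", "15"},
            {"1", "01/02/2011", "A", "10"},
            {"2", "01/02/2011", "C", "10"},
            {"2", "01/02/2011", "C", "10"},
            {"3", "01/02/2011", "D", "20"},
            {"3", "01/02/2011", "D", "10"},
            {"3", "01/02/2011", "E", "25"},
        };
        // One map entry per unique Order_No;Mat_No -> one IDoc each (5 here)
        aggregate(source).forEach((k, v) -> System.out.println(k + " -> " + v));
    }
}
```

The semicolon-joined key mirrors the concat[;] trick used in the graphical mapping below: two fields are fused into one string so that sorting and duplicate detection can treat them as a single value.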
Kindly provide the java code.
Thank you very much in advance.
What I have understood from your example is that you want to generate an IDoc for each unique combination of Order_No and Mat_No.
If yes, then check the mapping below.
Change the context of Order_No, Date, Mat_No and Quantity to Record (right click -> Context).
1)
Order-no
----------------------concat[;]---sort----splitbyvalue(valuechanged)-----collapse context---IDoc
Mat_No
2)
Order-no
--------concat[;]---sort----splitbyvalue(value changed)---collapse context---UDF1--splitbyvalue(each value)--Delivery_Order_No
Mat_No
3)
Order-no
-----------concat[;]---sortbykey----------------------- \
Mat_No / \
Date--------------- / \
----------------------------------------------------------FormatByExample-----collapsecontext---splitbyvalue(each value)----Recv_Date
Order-no /
-----------concat[;]---sort----splitbyvalue(value changed)
Mat_No
4)
Order-no
--------concat[;]---sort----splitbyvalue(value changed)---collapse context-UDF2--splitbyvalue(each value)--Material_Num
Mat_No
5)
Order-no
-----------concat[;]---sortbykey
Mat_No /
Quantity --------------- /
----------------------------------------------------------FormatByExample-----SUM(under statistic)----Recv_Quantity
Order-no
-----------concat[;]---sort----splitbyvalue(value changed)
Mat_No
UDF1:
// Input "a" holds "Order_No;Mat_No"; return the Order_No part
String[] temp = a.split(";");
return temp[0];
UDF2:
// Input "a" holds "Order_No;Mat_No"; return the Mat_No part
String[] temp = a.split(";");
return temp[1]; -
Hi everybody,
I have a strange problem with Mount-DiskImage command.
Environment: Windows server 2012 without any updates.
All scripts are signed as described in Hanselman's blog post:
http://www.hanselman.com/blog/SigningPowerShellScripts.aspx
The first script (script1) executes on one machine (server1), then copies another script (script2) to the remote server (server2) and runs script2 in a PS session. Both scripts are signed. Certificates are installed on both servers.
In the script I try to run:
Import-Module Storage
$mountVolume = Mount-DiskImage -ImagePath $ImageSourcePath -PassThru
where $ImageSourcePath is a network path to an ISO image.
But I get an exception.
Exception Text:
Cannot process Cmdlet Definition XML for the following file:
C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Storage\Disk.cdxml. At line:138 char:17
+ $__cmdletization_objectModelWrapper = Microsoft.PowerShell.Utili ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Executable script code found in signature block.
At line:139 char:17
+ $__cmdletization_objectModelWrapper.Initialize($PSCmdlet, $scrip ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Executable script code found in signature block.
At line:143 char:21
When I look into C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Storage\Disk.cdxml, I don't understand what happened, because line 138 contained an XML comment.
Any ideas?
Hi,
I suggest referring to the following links:
http://blogs.msdn.com/b/san/archive/2012/09/21/iso-mounting-scenarios.aspx
http://blogs.technet.com/b/heyscriptingguy/archive/2012/10/15/oct-15-blog.aspx
Best Regards,
Vincent Wu
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.