Datatype best practice and plan cardinality

Hi,
I have a scenario where I need to store data in the format YYYYMM (e.g. 201001, meaning January 2010).
I am trying to evaluate the most appropriate datatype for storing this kind of data, comparing two options: NUMBER and DATE.
Since the data is essentially a component of the Oracle DATE datatype, and experts like Tom Kyte have shown (with examples) that using the right
datatype is better for the optimizer, I was expecting the DATE datatype to yield cardinality estimates at least as good as those
from the NUMBER datatype. However, my tests show that with DATE the cardinality estimates are way off from the actuals, whereas
with NUMBER the cardinality estimates are much closer to the actuals.
My questions are:
1) What is the most appropriate datatype for storing YYYYMM data?
2) Why does the DATE datatype yield estimates that are further from the actuals than the NUMBER datatype?
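For reference when reading the plans below: the textbook formula usually quoted for a closed-range predicate on a column with no histogram is

   estimated rows ~= num_rows * ( (high - low) / (max_value - min_value) + 2 / num_distinct )

I use this approximation for the sanity checks further down; it may not match the 10.2 code path exactly.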
SQL> select * from V$VERSION ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE     10.2.0.1.0     Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL>  create table a nologging as select to_number(to_char(add_months(to_date('200101','YYYYMM'),level - 1), 'YYYYMM')) id from dual connect by level <= 289 ;
Table created.
SQL> create table b (id number) ;
Table created.
SQL> begin
  2  for i in 1..8192
  3  loop
  4     insert into b select * from a ;
  5  end loop;
  6  commit;
  7  end;
  8  /
PL/SQL procedure successfully completed.
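As a sanity check on the setup: A holds 289 rows, one per month from 200101 through 202501, so B should end up with 289 * 8192 = 2,367,488 rows, which matches the update count below.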
SQL> alter table a add dt date ;
Table altered.
SQL> alter table b add dt date ;
Table altered.
SQL> select to_date(200101, 'YYYYMM') from dual ;
TO_DATE(2
01-JAN-01
SQL> update a set dt = to_date(id, 'YYYYMM') ;
289 rows updated.
SQL> update b set dt = to_date(id, 'YYYYMM') ;
2367488 rows updated.
SQL> commit ;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user, 'A', estimate_percent=>NULL) ;
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats(user, 'B', estimate_percent=>NULL) ;
PL/SQL procedure successfully completed.
SQL> explain plan for select count(*) from b where id between 200810 and 200903 ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     5 |   824   (4)| 00:00:10 |
|   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
|*  2 |   TABLE ACCESS FULL| B    | 46604 |   227K|   824   (4)| 00:00:10 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
   2 - filter("ID"<=200903 AND "ID">=200810)
14 rows selected.
SQL> explain plan for select count(*) from b where dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     5 |   825   (4)| 00:00:10 |
|   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
|*  2 |   TABLE ACCESS FULL| B    |  5919 | 29595 |   825   (4)| 00:00:10 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
   2 - filter("DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
           hh24:mi:ss') AND "DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
           hh24:mi:ss'))
16 rows selected.
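Note that the actual row count for either predicate is 6 months * 8192 = 49,152 (the range covers 200810-200812 and 200901-200903). So here the NUMBER estimate of 46,604 is within about 6% of the actual, while the DATE estimate of 5,919 is roughly 8x too low.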

Charles,
Thanks for your response.
I did not think of the possibility of histograms. When I ran the tests on 10.2.0.4, I got the results you have shown.
So I thought it might be due to some bug in 10.2.0.1. But interestingly, when I reran the test after collecting statistics with the 'FOR ALL COLUMNS SIZE 1'
option, I got cardinalities that match my 10.2.0.1 results (where METHOD_OPT was the default, i.e. 'FOR ALL COLUMNS SIZE AUTO').
So I carried out the tests again on 10.2.0.1, but the results did not look consistent to me. When there were no histograms on the DATE column, the cardinality
was quite close to the actuals; but when I collected stats using 'FOR ALL COLUMNS SIZE SKEWONLY', it generated a histogram on the DATE column and
the cardinality was not close to the actuals.
So I am a bit confused about whether this is due to a bug, or to the combined effect of the optimizer's "intelligence" when collecting statistics with default option
values and the way the table is queried (COL_USAGE$ data).
Here is my test:
SQL> select * from v$version ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE     10.2.0.1.0     Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> exec dbms_stats.delete_table_stats(user, 'B') ;
PL/SQL procedure successfully completed.
SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
no rows selected
SQL> exec dbms_stats.gather_table_stats(user, 'B') ;
PL/SQL procedure successfully completed.
SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
ID                                      289         254 HEIGHT BALANCED
DT                                      289         254 HEIGHT BALANCED
SQL> explain plan for select count(*) from b where b.id between 200810 and 200903 ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     5 |  3691   (1)| 00:00:45 |
|   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
|*  2 |   TABLE ACCESS FULL| B    | 38218 |   186K|  3691   (1)| 00:00:45 |
Predicate Information (identified by operation id):
   2 - filter("B"."ID"<=200903 AND "B"."ID">=200810)
14 rows selected.
SQL> explain plan for select count(*) from b where b.dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     8 |  3693   (1)| 00:00:45 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| B    | 38218 |   298K|  3693   (1)| 00:00:45 |
Predicate Information (identified by operation id):
   2 - filter("B"."DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
              hh24:mi:ss') AND "B"."DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
              hh24:mi:ss'))
16 rows selected.
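So with height-balanced histograms on both columns, both estimates come out at 38,218, about 78% of the actual 49,152. Presumably, with 289 distinct values compressed into 254 buckets, the estimate is driven by the histogram bucket endpoints rather than by the simple uniform-range formula.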
SQL> connect sys as sysdba ;
Connected.
SQL> delete from sys.col_usage$ where obj# in (select object_id from all_objects where owner = 'HR' and object_name in ('A','B')) ;
4 rows deleted.
SQL> commit ;
Commit complete.
SQL> connect hr/hr ;
Connected.
SQL> set serveroutput on size 10000
SQL> exec dbms_stats.delete_table_stats(user, 'B') ;
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats(user, 'B') ;
PL/SQL procedure successfully completed.
SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
ID                                      289           1 NONE
DT                                      289           1 NONE
SQL> explain plan for select count(*) from b where b.id between 200810 and 200903 ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     5 |  3691   (1)| 00:00:45 |
|   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
|*  2 |   TABLE ACCESS FULL| B    |   110K|   541K|  3691   (1)| 00:00:45 |
Predicate Information (identified by operation id):
   2 - filter("B"."ID"<=200903 AND "B"."ID">=200810)
14 rows selected.
SQL> explain plan for select count(*) from b where b.dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     8 |  3693   (1)| 00:00:45 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| B    | 58680 |   458K|  3693   (1)| 00:00:45 |
Predicate Information (identified by operation id):
   2 - filter("B"."DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
              hh24:mi:ss') AND "B"."DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
              hh24:mi:ss'))
16 rows selected.
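Plugging the no-histogram statistics into the textbook formula from my first post (assuming column min/max of 200101/202501 for ID and 01-JAN-2001/01-JAN-2025 for DT):
ID: 93/2400 + 2/289 ~= 0.0457, times 2,367,488 ~= 108K, in line with the 110K estimate above.
DT: 151 days/8766 days + 2/289 ~= 0.0241, times 2,367,488 ~= 57K, in line with the 58,680 estimate.
Against the actual 49,152, DT without a histogram is off by about 19%, while ID is off by well over 100%, presumably because 89 out of every 100 integers in the ID range (e.g. 200813 through 200900) can never occur.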
SQL> exec dbms_stats.gather_table_stats(user, 'B') ;
PL/SQL procedure successfully completed.
SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
ID                                      289         254 HEIGHT BALANCED
DT                                      289           1 NONE
SQL> explain plan for select count(*) from b where b.id between 200810 and 200903 ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     5 |  3690   (1)| 00:00:45 |
|   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
|*  2 |   TABLE ACCESS FULL| B    | 46303 |   226K|  3690   (1)| 00:00:45 |
Predicate Information (identified by operation id):
   2 - filter("B"."ID"<=200903 AND "B"."ID">=200810)
14 rows selected.
SQL> explain plan for select count(*) from b where b.dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     8 |  3692   (1)| 00:00:45 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| B    | 56797 |   443K|  3692   (1)| 00:00:45 |
Predicate Information (identified by operation id):
   2 - filter("B"."DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
              hh24:mi:ss') AND "B"."DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
              hh24:mi:ss'))
16 rows selected.
SQL> exec dbms_stats.gather_table_stats(user, 'B') ;
PL/SQL procedure successfully completed.
SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
ID                                      289         254 HEIGHT BALANCED
DT                                      289           1 NONE
SQL> exec dbms_stats.gather_table_stats(user, 'B', method_opt=>'FOR ALL COLUMNS SIZE SKEWONLY') ;
PL/SQL procedure successfully completed.
SQL> select column_name, num_distinct, num_buckets, histogram from user_tab_col_statistics where table_name = 'B' ;
COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
ID                                      289         254 HEIGHT BALANCED
DT                                      289         254 HEIGHT BALANCED
SQL> explain plan for select count(*) from b where b.dt between to_date(200810, 'YYYYMM') and to_date(200903, 'YYYYMM') ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     8 |  3692   (1)| 00:00:45 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| B    | 27862 |   217K|  3692   (1)| 00:00:45 |
Predicate Information (identified by operation id):
   2 - filter("B"."DT"<=TO_DATE('2009-03-01 00:00:00', 'yyyy-mm-dd
           hh24:mi:ss') AND "B"."DT">=TO_DATE('2008-10-01 00:00:00', 'yyyy-mm-dd
           hh24:mi:ss'))
16 rows selected.
SQL> explain plan for select count(*) from b where id between 200810 and 200903 ;
Explained.
SQL> select * from table(dbms_xplan.display) ;
PLAN_TABLE_OUTPUT
Plan hash value: 749587668
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT   |      |     1 |     5 |  3690   (1)| 00:00:45 |
|   1 |  SORT AGGREGATE    |      |     1 |     5 |            |          |
|*  2 |   TABLE ACCESS FULL| B    | 32505 |   158K|  3690   (1)| 00:00:45 |
Predicate Information (identified by operation id):
   2 - filter("ID"<=200903 AND "ID">=200810)
14 rows selected.
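For completeness, the actual cardinality can be verified directly; both of the following should return 49,152 (6 qualifying months times 8,192 copies):
SQL> select count(*) from b where id between 200810 and 200903 ;
SQL> select count(*) from b where dt between to_date('200810', 'YYYYMM') and to_date('200903', 'YYYYMM') ;
(I quoted '200810' here; passing a NUMBER to TO_DATE works, as the earlier tests show, but it relies on an implicit conversion to a string first, so the explicit literal seems cleaner.) Measured against that actual, the SKEWONLY run's DT estimate of 27,862 is about 57% and its ID estimate of 32,505 about 66%.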

Similar Messages

  • Best Practice for Planning and BI

    What's the best practice for Planning and BI infrastructure - set up combined on one box or separate? What are the factors to consider?
    Thanks in advance..

    There is no way that question could be answered with the information that has been provided.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best practice for Plan and actual data

Hello, what is the best practice for plan and actual data? Should they both be in the same app or in different ones?
    Thanks.

    Hi Zack,
It will be easier for you to maintain the data in a single application. Every application must have the Category dimension, so you can use that dimension to maintain the actual and plan data.
    Hope this helps.

  • Best Practices & Strategy Planning for SAP BI Architecture

What best practices and strategy planning should an SAP BI Architect know?
What challenges are involved in this role?
What other information should this Architect know to deliver a robust BI solution?
Are there any white papers on best practices for Architecture & Data Modeling, please?
    Thanks,
    Venkat.

    Hi
As per the Best Practice, load the master data first and the transaction data next.
Please find the link for best practices:
http://www.sap.com/services/pdf/BWP_SAP_Best_Practices_for_Business_Intelligence.pdf
Regarding the architecture, it depends upon the data volume, the load frequency, and the hardware sizing;
based on these we can provide the best solution.
If you have any issues, please let me know.
    Regards
    Madan

  • Quick question regarding best practice and dedicating NIC's for traffic seperation.

    Hi all,
I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc. I get that it's best practice to try to separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method of achieving this. What I mean is:
-     Is it OK to have everything on one switch but set each respective portgroup to having a primary and failover NIC, i.e. FT, iSCSI and all the others fail over (this would sort of give you a backup in situations where you have limited physical NICs)?
-    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

I know the answer to this probably depends on however many physical NICs you have at your disposal, however I wondered if there are any golden 100% rules, for example: FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • What is the best practice and Microsoft best recommended procedure of placing "FSMO Roles on Primary Domain Controller (PDC) and Additional Domain Controller (ADC)"??

    Hi,
    I have Windows Server 2008 Enterprise  and have
    2 Domain Controllers in my Company:
    Primary Domain Controller (PDC)
    Additional Domain Controller (ADC)
    My (PDC) was down due to Hardware failure, but somehow I got a chance to get it up and transferred
    (5) FSMO Roles from (PDC) to (ADC).
    Now my (PDC) is rectified and UP with same configurations and settings.  (I did not install new OS or Domain Controller in existing PDC Server).
    Finally I want it to move back the (FSMO Roles) from
    (ADC) to (PDC) to get UP and operational my (PDC) as Primary. 
    (Before Disaster my PDC had 5 FSMO Roles).
    Here I want to know the best practice and Microsoft best recommended procedure for the placement of “FSMO Roles both on (PDC) and (ADC)” ?
    In case if Primary (DC) fails then automatically other Additional (DC) should take care without any problem in live environment.
    Example like (FSMO Roles Distribution between both Servers) should be……. ???
    Primary Domain Controller (PDC) Should contains:????
    Schema Master
    Domain Naming Master
    Additional Domain Controller (ADC) Should contains:????
    RID
    PDC Emulator
    Infrastructure Master
Please let me know the best practice and Microsoft best recommended procedure for the placement of “FSMO Roles”.
    I will be waiting for your valuable comments.
    Regards,
    Muhammad Daud

    Here I want to know the best practice
    and Microsoft best recommended procedure for the placement of “FSMO Roles both on (PDC) and (ADC)” ?
There is a good article I would like to share with you: http://oreilly.com/pub/a/windows/2004/06/15/fsmo.html
    For me, I do not really see a need to have FSMO roles on multiple servers in your case. I would recommend making it simple and have a single DC holding all the FSMO roles.
    In case if
    Primary (DC) fails then automatically other Additional (DC) should take care without any problem in live environment.
    No. This is not true. Each FSMO role is unique and if a DC fails, FSMO roles will not be automatically transferred.
There are two approaches that can be followed when an FSMO role holder is down:
1. If the DC can be recovered quickly, then I would recommend taking no action.
2. If the DC will be down for a long time or cannot be recovered, then I would recommend that you seize the FSMO roles and do a metadata cleanup.
Attention! For (2), the old FSMO holder should never be brought up and online again once the FSMO roles have been seized. Otherwise, your AD may face huge impacts and side effects.

  • Best Practices and Usage of Streamwork?

    Hi All -
Is this Forum a good place to inquire about best practices and use of StreamWork? I am not a developer working with the APIs, but rather have set up a StreamWork Activity for my team to collaborate on our activities.
    We are thinking about creating a sort of FAQ on our team activity and I was thinking of using either a table or a collection for this. I want it to be easy for team members to enter the question and the answer (our team gets a lot of questions from many groups and over time I would like to build up a sort of knowledge base).
    Does anyone have any suggestions for such a concept in StreamWork? Has anyone done something like this and can share experiences?
    Please let me know if I should post this question in another place.
    Thanks and regards,
    Rob Stevenson

    Activities have a limit of 200 items that can be included.  If this is the venue you wish to use,  it might be better to use a table rather than individual notes/discussions.

  • Coherence Best Practices and Performance

I'm starting to use Coherence and I'd like to know if someone could point me to some docs on best practices and performance optimizations when using it.
    BTW, I haven't had the time to go through the entire Oracle documentation.
    Regards

    Hi
    If you are new to Coherence (or even for people who are not that new) one of the best things you can do is read this book http://www.packtpub.com/oracle-coherence-35/book I know it says Coherence 3.5 and we are currently on 3.7 but it is still very relevant.
    You don't need to go through all the documentation but at least try the introductions and try out some of the examples. You need to know the basics otherwise it makes it harder for people to either understand what you want or give you detailed enough answers to questions.
    For performance optimizations it depends a lot on your use cases and what you are doing; there are a number of things you can do with Coherence to help performance but as with anything there are trade-offs. Coherence on the server-side is a Java process and often when tuning, sorting out issues and performance I spend a lot of time with the usual tools for Java such as VisualVM (or JConsole), tuning GC, looking at thread dumps and stack traces.
    Finally, there are plenty of people on these forums happy to answer your questions in return for a few forum points, so just ask.
    JK

  • Best Practice for Plan for Every Part (PFEP) Database/Dashboard?

    Hello All-
    I was wondering if anyone had experience with implementing / developing a Plan for Every Part (PFEP) Database in SAP. My company is looking to migrate its existing PFEP solution (Custom developed Excel/Access system) into SAP. If you are unfamiliar, a PFEP is a dashboard view of a part/material that provides various business groups with dedicated views to data from Material Masters, Info Records, and Vendor Master Records and combines it with historical/forecasting information. The goal is to provide a single source to all the part/material settings for a given part.
Is there a Best Practice PFEP in SAP? Or is this something that most companies custom-develop in ERP or BI?
    Thanks in advance.
    -Ron

    I think you will likely get a response in SAP ERP - Logistics Materials Management (SAP MM)
    additionally you might want to do some searches based on SAP Lean Inventory, perhaps Kanban. I am assuming you are not using WM or EWM either?
Where I have seen PFEP incorporated into the supply chain strategy, it typically requires not inconsiderable additions to the alternate UoM in MM, dropping of automatic replenishment levels (reorder level), and rethinking aspects of the MRP plan, so be prepared for significant additional data management work if you haven't already started on that. I believe Ryder Logistics uses PFEP and their SAP infrastructure is managed by IBM; it might be an idea to try to find a LinkedIn resource from there. You may also find one of the ASUG supply chain, logistics, MM, or WM SIGs a good place to ask questions and look for answers.

  • Best practice: Deployment plan for cluster environment

    Hi All,
I want to know which way is the best practice for preparing and deploying a new configuration in a WLS cluster environment. How can I plan a simultaneous deployment to all nodes, without a single point of failure?
    Regards,
    Moh

    Hi All,
I got the answer, as follows:
When you deploy or redeploy an application, the deployment is initiated from the Admin Server, and it is initiated on all targets (managed servers in the cluster) at the same time, based on the targets (which are expected to be the cluster).
    We recommend that applications should be targeted to a cluster instead of individual servers whenever a cluster configuration is available.
So, as long as you target the application to the cluster, the admin server will initiate the deployment on all the servers in the cluster at the same time, so the application is in sync on all servers.
    Hope that answers your queries. If not, please let me know what exactly you mean by synchronization.
    Regards,
    Moh

  • SAP best practice and ASAP methodology

    Hi,
Can anybody please explain:
1. What is SAP best practice?
2. What is ASAP methodology?
    Regards
    Deep

    Dear,
    Please refer these links,
    [SAP best practice |http://www12.sap.com/services/bysubject/servsuptech/servicedetail.epx?context=0DFA5A0C701B93893897C14DC7FFA7D62DC24E6E9A4B8FFC77CA0603A1ECCF58A86F0DCC6CCC177ED84EA76F625FC1E9C6DCDA90C9389A397DAB524E480931FB6B96F168ACE1F8BA2AFC61C9F8A28B651682A04F7CEAA0C4%7c0E320720D451E81CDACA9CEB479AA7E5E2B8164BEC98FE2B092F54AF5F9035AABA8D9DDCD87520DB9DA337A831009FFCF6D9C0658A98A195866EC702B63C1173C6972CA72A1F8CB611798A53C885CA23A3C0521D54A19FD1B3FD9FF5BB48CFCC26B9150F09FF3EAD843053088C59B01E24EA8E8F76BF32B1DB712E8E2A007E7F93D85AF466885BBD78A8187490370C3CB3F23FCBC9A1A0D7]
    [ASAP methodology|https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/asap%2bfocus%]
ASAP is one methodology used in implementing SAP.
    The ASAP methodology adheres to a specific road map that addresses the following five general Phases:
    Project Preparation, in which the project team is identified and mobilized, the project Standards are defined, and the project work environment is set up;
    Blueprint, in which the business processes are defined and the business blueprint document is designed;
    Realization, in which the system is configured, knowledge transfer occurs, extensive unit testing is completed, and data mappings and data requirements for migration are defined;
    Final Preparation, in which final integration testing, stress testing, and conversion testing are conducted, and all end users are trained; and
Go-Live and Support, in which the data is migrated from the legacy systems, the new system is activated, and post-implementation support is provided.
    ASAP incorporates standard design templates and accelerators covering every functional area within the system, as well as supporting all implementation processes. Complementing the ASAP accelerators, the project manager can create a comprehensive project plan, covering the overall project, project staffing plan, and each sub-process such as system testing, communication and data migration. Milestones are set for every work path, and progress is carefully tracked by the project management team.
    Hope it will help you.
    Regards,
    R.Brahmankar

  • BPC Best Practices: Sales Planning (BP2)

I'm trying to follow these instructions:
    1. Log on to Interface for Excel.
    To do so start the SAP BPC Launch Page from your desktop icon or via the Start menu of your desktop, then in the Programs folder choose SAP -> Business Planning and Consolidation.
    2. On the SAP BPC Launch Page, choose Interface for Excel. In the dialog box, select the AppSet SAP_BP_Planning and the Application Sales_Planning.
    My problem is that in my BPC Launch Page (at step 2) I don't have these options:
    AppSet: SAP_BP_Planning
    Application: Sales_Planning
    These are the options that I have:
    AppSet: ApShell (in the combo box)
    Application:         (nothing in the combo box)
Can anyone figure out why I don't have those options (SAP_BP_Planning, Sales_Planning)?
    Thanks

    The version I have is BPC 7.5 SP4 with SQL Server 2005
I'm new at this; I installed this software in a Windows Server 2003 virtual machine and now I'm trying to learn how to use it. I have downloaded the configuration guide and those steps are there. This is the only step that I couldn't follow:
    3 Prerequisites
    Before you start installing this BPC scenario, you must install prerequisite scenarios. For more information, see the BPC prerequisite matrix (Prerequisites_Matrix_[xx]_EN_JP.xls; the placeholder [xx] depends on the SAP Best Practices version you use, for example, BPC refers to the SAP Best Practices SAP BusinessObjects Planning and Consolidation 7.5: Prerequisites_Matrix_BPC_EN_JP.xls). This document can be found on the SAP Best Practices documentation DVD in the folder \BPC_JP\Documentation\.
    I couldn't find that file in the Best Practices folder (50101040)
    Thanks

  • IronPort best practices and configuration guide

    Hi there,
    I manage a Cisco IronPort ESA appliance for my organisation and made a quick blog post last night about things I thought should be a best practice for a new ESA appliance.
    The reason I wrote this is because some of these things are not configured from the start or are configured poorly by default.
    Take a look and let me know what you think - I plan to make a part 2 because there are some things I did not have time to go through and it was quite long already!
    Remember that your environment will be different from mine so you should understand the things I say before blindly implementing them!
    http://emtunc.org/blog/06/2014/cisco-ironport-e-mail-security-appliance-best-practices-part-1/

    First of all, I think your question is related to the WebCenter (Framework) as such, not just OUCSS.
    As for JDev. vs. run-time, this question is well discussed in Yannick Ongena's tutorial: http://www.yonaweb.be/webcenter_tutorial/part1_configure_webcenter_portal_application
    "Let me first talk a bit about the architecture of WebCenter and the runtime customizations. ADF (and WebCenter) has an additional component since 11g called the MDS (MetaDataServices). The MDS is a repository that stores all the customizations. The page we just created at runtime is not stored in the project folder of JDeveloper but is instead stored in the MDS."
    I guess the answer when to use which methods depends on the situation what page you want to create.
    I am surprised, however, that you state that
"Pages created in JDeveloper are not searchable online. It is possible to link it to a Navigation Model but the path needs to be manually entered." Could you elaborate on your use case?
    As for navigation models, you can check another tutorial: http://docs.oracle.com/cd/E21764_01/webcenter.1111/e10148/jpsdg_navigation.htm#BABJHFCE
Maybe what you are looking for is how to create a navigation model according to your needs?

  • JSP Best Practices and Oracle Report

    Hello,
    I am writing an application that obtains information from the user using a JSP/HTML form and then submitted to a database, the JSP page is setup using JSP Best Practices in which the SQL statments, database connectivity information, and most of the Java source code in a java bean/java class. I want to use Oracle Reports to call this bean, and generate a JSP page displaying the information the user requested from the database. Would you please offer me guidance for setting this up.
    Thank you,
    Michelle

    JSP Best Practices.
    More JSP Best Practices
    But the most important Best Practice has already been given in this thread: use JSP pages for presentation only.

  • Real time logging: best practices and questions ?

I have 4 pairs of DS 5.2p6 servers in MMR mode on Windows 2003.
    Each server is configured with the default setting of "nsslapd-accesslog-logbuffering" enabled, and the log files are stored on a local file system, then later centrally archived thanks to a log sender daemon.
I now have a requirement from a monitoring tool (used to establish correlations/links/events between applications) to provide the directory
server access logs in real time.
At first glance, each directory generates about 1.1 MB of access log per second.
    1)
I'd like to know if there are any known best practices / experiences for such a case.
    2)
Also, should I upgrade my DS servers to benefit from any log-management-related features? Should I think about using an external disk
sub-system (SAN, NAS, ...)?
    3)
In DS 5.2, what's the default access log buffering policy: is there a maximum buffer size and/or time limit before flushing to disk? Is it configurable?

    Usually log-buffering should be enabled. I don't know of any customers who turn it off. Even if you do, I guess it should be after careful evaluation in your environment. AFAIK, there is no configurable limit for buffer size or time limit before it is committed to disk
Regarding faster disks, I had the bright idea that you could create a ramdisk and set the logs to go there instead of disk. Let's say the ramdisk is 2 GB max in size, you receive about 1 MB/sec in writes, and max-log-size is 30 MB. You can schedule a job to run every minute that copies the newly rotated file(s) from the ramdisk to your filesystem and then sends them over to logs HQ. If the server does crash, you'll lose up to a minute of logs. Of course, the data disappears after reboot, so you'll need to manage that as well. Sounds like fun to try but may not be practical.
    Ramdisk on windows
    [http://msdn.microsoft.com/en-us/library/dd163312.aspx]
    Ramdisk on solaris
    [http://wikis.sun.com/display/BigAdmin/Talking+about+RAM+disks+in+the+Solaris+OS]
    [http://docs.sun.com/app/docs/doc/816-5166/ramdiskadm-1m?a=view]
    I should ask, how realtime should this log correlation be?
    Edited by: etst123 on Jul 23, 2009 1:04 PM
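A quick sizing check on that idea: at about 1 MB/sec with a 30 MB max-log-size, a log rotates roughly every 30 seconds, so the once-a-minute copy job would move about two rotated files per run, and a 2 GB ramdisk would give over half an hour of headroom if the copy job ever stalled.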
