Rebuild Table

A table where lots of deletion or insertion takes place probably has a high HWM. If the HWM is high, it is better to rebuild the table segment for performance's sake.
My question is: when should a table/index be rebuilt, and how do I find out whether a table has a high water mark?
Please revert with the answer.

hi
to find high water mark ...
here it is .......
There is no single system table which contains the high water mark (HWM) for a table. A table's HWM can be calculated using the results from the following SQL statements:
SELECT BLOCKS
FROM DBA_SEGMENTS
WHERE OWNER = UPPER('&owner') AND SEGMENT_NAME = UPPER('&table_name');
ANALYZE TABLE &owner..&table_name ESTIMATE STATISTICS;
SELECT EMPTY_BLOCKS
FROM DBA_TABLES
WHERE OWNER = UPPER('&owner') AND TABLE_NAME = UPPER('&table_name');
Thus, the table's HWM (in blocks) = (query result 1) - (query result 2) - 1
NOTE: You can also use the DBMS_SPACE package and calculate the HWM = TOTAL_BLOCKS - UNUSED_BLOCKS - 1.
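For example, a minimal PL/SQL sketch of the DBMS_SPACE approach (the owner and table name are placeholders):
SET SERVEROUTPUT ON
DECLARE
  l_total_blocks   NUMBER;
  l_total_bytes    NUMBER;
  l_unused_blocks  NUMBER;
  l_unused_bytes   NUMBER;
  l_file_id        NUMBER;
  l_block_id       NUMBER;
  l_last_block     NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => 'SCOTT',
    segment_name              => 'EMP',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  -- HWM in blocks, per the formula above
  DBMS_OUTPUT.PUT_LINE('HWM (blocks) = ' || (l_total_blocks - l_unused_blocks - 1));
END;
/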
And for when to rebuild an index, see:
http://www.quest-pipelines.com/newsletter-v2/rebuild.htm
Hope this helps.
Cheers

Similar Messages

  • When rebuild table and index

    Hi,
    how do we know whether we should rebuild a table and its indexes?
    Many thanks.

    Safi, I would be less sure than that. There have been many very long threads in the Database General forum on that subject. Even if you observed that once, it is not very often the case.
    @OP,
    Assuming the question is not about performance, but more about the AppDesigner side (because of "rebuild table"), what I can say is:
    1. PeopleSoft stores the storage definition for all of its objects in its own metamodel tables.
    2. When you change a field definition (column definition) or an index definition, the new definition is stored in the PeopleSoft metamodel tables.
    3. On the table/index build, AppDesigner compares the PeopleSoft metamodel definition of that particular object with the definition of the same object in Oracle (or whatever other database you are using).
    4. If the definitions compared in step 3 differ, AppDesigner fills a file with all the (re)build statements for the objects you are rebuilding (and that differ).
    5. Finally, you just have to run the generated script.
    Of course, you can bypass the comparison by forcing AppDesigner to (re)build the record/index in all cases; however, that is not always (I do not say never) recommended.
    Now, when do you have to rebuild a table (or record, in AppDesigner terms) or its indexes? Basically, from the explanation above, it is when you change a field/table/index definition through AppDesigner.
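    For illustration only, such a generated build script often follows an "alter by table rename" pattern along these lines; the table, column and index names below are hypothetical, not actual AppDesigner output:
    CREATE TABLE PSYMY_RECORD (EMPLID VARCHAR2(11) NOT NULL,
       EFFDT DATE,
       NEW_FIELD VARCHAR2(30) NOT NULL) TABLESPACE MYAPP;
    INSERT INTO PSYMY_RECORD (EMPLID, EFFDT, NEW_FIELD)
       SELECT EMPLID, EFFDT, ' ' FROM PS_MY_RECORD;
    DROP TABLE PS_MY_RECORD;
    RENAME PSYMY_RECORD TO PS_MY_RECORD;
    CREATE UNIQUE INDEX MY_RECORD_KEY_IDX ON PS_MY_RECORD (EMPLID, EFFDT) TABLESPACE PSINDEX;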
    Nicolas.

  • Rebuild table when actionperformed

    I'm trying to rebuild my table to reflect the changed data when my Calculate button is clicked. I'm getting a null pointer exception when I compile and run it.
    public class DennisBillingsWK3 extends JPanel implements ActionListener
         private double principal = 200000; 
         private int term[] = {30, 15, 7};               
         private double interestRate[] = {5.75, 5.5, 5.35};
         private double payment = calcPayment(0); 
         private String data[] = {"30 years at 5.75%", "15 years at 5.5%", "7 years at 5.35%"};
         private Object tableData[][];
         String[] colNames = {"Period", "Payment", "Interest", "Loan Balance"};
         private JLabel jlPrincipal;
         private JLabel jlLoanOptions;
         private JLabel jlPayment;
         private JScrollPane scrollPane;
         private JButton calculate;
         private JButton exit;
         private JFormattedTextField ftfPrincipal;
         private JFormattedTextField ftfPayment;
         private JTable table;
         private JComboBox loanOptions;
         public static void main(String[] args)
              DennisBillingsWK3 loan = new DennisBillingsWK3();
              loan.createGUI();
         public DennisBillingsWK3()
              super(new BorderLayout());
              jlPrincipal = new JLabel("Principal: ");
              jlPayment = new JLabel("Payment: ");
              jlLoanOptions = new JLabel("Loan options");
              ftfPrincipal = new JFormattedTextField();
              ftfPrincipal.setValue(new Double(principal));
              ftfPrincipal.setColumns(10);
              ftfPrincipal.addActionListener(this);
              ftfPayment = new JFormattedTextField();
              ftfPayment.setValue(new Double(payment));
              ftfPayment.setColumns(10);
              ftfPayment.setEditable(false);
              ftfPayment.setForeground(Color.black);
              table = new JTable();
              JScrollPane scrollPane = new JScrollPane(table);
              table.setFillsViewportHeight(true);
              loanOptions = new JComboBox(data);
              jlPrincipal.setLabelFor(ftfPrincipal);
              jlPayment.setLabelFor(ftfPayment);
              jlLoanOptions.setLabelFor(loanOptions);
              JPanel labelPane = new JPanel(new GridLayout(0,1));
              labelPane.add(jlPrincipal);
              labelPane.add(jlLoanOptions);
              labelPane.add(jlPayment);
              JPanel fieldPane = new JPanel(new GridLayout(0,1));
              fieldPane.add(ftfPrincipal);
              fieldPane.add(loanOptions);
              fieldPane.add(ftfPayment);
              JButton calculate = new JButton("Calculate");
              calculate.setActionCommand("calculate");
              calculate.addActionListener(this);
              JButton exit = new JButton("Exit");
              exit.setActionCommand("exit");
              exit.addActionListener(this);
              JButton clear = new JButton("Clear");
              clear.setActionCommand("clear");
              clear.addActionListener(this);
              JPanel buttonPane = new JPanel(new GridLayout(0,1));
              buttonPane.add(calculate);
              buttonPane.add(exit);
              buttonPane.add(clear);
              setBorder (BorderFactory.createEmptyBorder(30, 30, 30, 30));
              add(labelPane, BorderLayout.CENTER);
              add(fieldPane, BorderLayout.EAST);
              add(buttonPane, BorderLayout.SOUTH);
              add(scrollPane, BorderLayout.NORTH);
         public void createGUI()
              JFrame frame = new JFrame("Mortgage Calculator");
              frame.setBounds(350,300,600,150);
              frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
              frame.add(new DennisBillingsWK3());
              frame.pack();
              frame.setVisible(true);
         public double calcPayment(int i)
               double adjRate = interestRate[i]/(12*100);
              int adjTerm = term[i]*12;
              payment=principal*((adjRate*Math.pow((1+adjRate),adjTerm))/(Math.pow((1+adjRate),adjTerm)-1));
              BigDecimal hundredths = new BigDecimal(payment);
              hundredths = hundredths.setScale(2, BigDecimal.ROUND_UP);
              return hundredths.doubleValue();
         public void calcAmortTable(int i)
              double adjRate = interestRate[i]/(12*100);
              int adjTerm = term[i]*12;
              payment = calcPayment(i);
              double interest;
              double runningPrincipal = principal;
              DecimalFormat money = new DecimalFormat("$0.00");
              tableData = new Object[adjTerm][4];
              for(int x=0; x<adjTerm; x++) {
                   interest = adjRate*runningPrincipal;
                   runningPrincipal -= payment - interest;
                   tableData[x][0]= x+1;
                   tableData[x][1]= money.format(payment);
                   tableData[x][2]= money.format(interest);
                   tableData[x][3]= money.format(runningPrincipal);
              table = new JTable(tableData, colNames);
              scrollPane.add(table);
              scrollPane.validate();
         public void actionPerformed(ActionEvent e)
              if("calculate".equals(e.getActionCommand())) {
                   try {
                             if(principal <= 0 || ftfPrincipal.getValue() == null) {
                                  throw new NumberFormatException();
                             principal = ((Number)ftfPrincipal.getValue()).doubleValue();
                             int loanIndex = loanOptions.getSelectedIndex();
                             ftfPrincipal.setValue(principal);
                             ftfPayment.setValue(calcPayment(loanIndex));
                             calcAmortTable(loanIndex);
                   catch(NumberFormatException err)
                             JOptionPane.showMessageDialog(null,"No fields can be 0 or empty","Error",JOptionPane.INFORMATION_MESSAGE);
              if("exit".equals(e.getActionCommand())) {
                   System.exit(0);//System.exit closes application     
              if("clear".equals(e.getActionCommand())) {
                   ftfPrincipal.setValue(null);
                   ftfPayment.setValue(null);

    I snipped the code out of my constructor, removed the field, and made the following changes, but my table doesn't show. I don't get any errors or exceptions, but no table.
    Here are my constructor and calcAmortTable, respectively:
    public DennisBillingsWK3()
              // The super keyword calls the constructor from the parent class JPanel
              super(new BorderLayout());
              //set values for textfield labels
              jlPrincipal = new JLabel("Principal: ");
              jlPayment = new JLabel("Payment: ");
              jlLoanOptions = new JLabel("Loan options");
              //Setup text fields, initial value, number of positions and action listener
              ftfPrincipal = new JFormattedTextField();
              ftfPrincipal.setValue(new Double(principal));
              ftfPrincipal.setColumns(10);
              ftfPrincipal.addActionListener(this);
              ftfPayment = new JFormattedTextField();
              ftfPayment.setValue(new Double(payment));
              ftfPayment.setColumns(10);
              ftfPayment.setEditable(false);
              ftfPayment.setForeground(Color.black);
              //Setup amortization table
              table = new JTable();
              table.setFillsViewportHeight(true);
              //Setup combo box
              loanOptions = new JComboBox(data);
              //Attach labels to text fields
              jlPrincipal.setLabelFor(ftfPrincipal);
              jlPayment.setLabelFor(ftfPayment);
              jlLoanOptions.setLabelFor(loanOptions);
              //set labels and fields in a grid
              JPanel labelPane = new JPanel(new GridLayout(0,1));
              labelPane.add(jlPrincipal);
              labelPane.add(jlLoanOptions);
              labelPane.add(jlPayment);
              //create panel for textfields and add fields
              JPanel fieldPane = new JPanel(new GridLayout(0,1));
              fieldPane.add(ftfPrincipal);
              fieldPane.add(loanOptions);
              fieldPane.add(ftfPayment);
              //create buttons, set action commands and listener to be used in ActionPerformed method
              JButton calculate = new JButton("Calculate");
              calculate.setActionCommand("calculate");
              calculate.addActionListener(this);
              JButton exit = new JButton("Exit");
              exit.setActionCommand("exit");
              exit.addActionListener(this);
              JButton clear = new JButton("Clear");
              clear.setActionCommand("clear");
              clear.addActionListener(this);
              //Create panel for buttons
              JPanel buttonPane = new JPanel(new GridLayout(0,1));
              buttonPane.add(calculate);
              buttonPane.add(exit);
              buttonPane.add(clear);
              //set border for GUI and add panels to container
              setBorder (BorderFactory.createEmptyBorder(30, 30, 30, 30));
              add(labelPane, BorderLayout.CENTER);
              add(fieldPane, BorderLayout.EAST);
              add(buttonPane, BorderLayout.SOUTH);
    public void calcAmortTable(int i)
               double adjRate = interestRate[i]/(12*100);
              int adjTerm = term[i]*12;
              payment = calcPayment(i);
              double interest;
              double runningPrincipal = principal;
              DecimalFormat money = new DecimalFormat("$0.00");
              tableData = new Object[adjTerm][4];
              for(int x=0; x<adjTerm; x++) {
                   interest = adjRate*runningPrincipal;
                   runningPrincipal -= payment - interest;
                   tableData[x][0]= x+1;
                   tableData[x][1]= money.format(payment);
                   tableData[x][2]= money.format(interest);
                   tableData[x][3]= money.format(runningPrincipal);
              table = new JTable(tableData, colNames);
              JScrollPane scrollPane = new JScrollPane();
              table.setFillsViewportHeight(true);
              add(scrollPane, BorderLayout.NORTH);

  • Error in filling the table.

    Hi to all
    I am trying to fill the setup table for 2LIS_03_BF, but I am getting the following error:
    "No extract structure active or no BW connected."
    When I raised this with the onsite team, they said it is a very simple issue and that I have to activate the DataSource. But when I checked, it is already active. Please give me a solution.
    Thanks to all

    Hi
    Take a look at this SAP OSS Note 914268
    Symptom
    Within an instance, you use the NetWeaver components of SAP ERP and BW.
    When you execute transactions to fill or rebuild setup tables in Logistics, the system issues the termination message M2 630:
    No extraction structure active or no BW connected
    The connection test and the settings for the Myself source system are definitely correctly maintained.
    Other terms
    OLTP, extractor, data extraction, DataSource, API service, SAPI, RSA2_SERV_BW_CONNECTED, OLI1BW, OLIZBW, OLI3BW, OLI9BW, 2LIS_XX_XXX, M2630, SBIW, LIS, rebuild
    Reason and Prerequisites
    The problem is caused by a program error.
    Prerequisites:
    Within an instance, you are using the NetWeaver components of SAP ERP and BW. Thus, potential data loading processes are executed by using the Myself source system connection.
    No other BW system is connected for data as a target system.
    Cause:
    Prior to NetWeaver, you could not operate ERP and BW in one common system. For this reason, the case that Logistics data need to be loaded into the same system for BW functions was not considered. The system as "source system" does not recognize that the system is connected as a "BW system".
    Solution
    To correct the problem, you need Service API 7.00 Support Package 08 or PI 2004.1 4.6C Support Package 13 in the affected source system. The attached composite SAP note informs you on which software components contain Service API 7.0 and lists the corresponding Support Packages of these components.
    Alternatively, implement the attached program corrections.
    In the BW system, you only need to implement the correction if the BW system is affected in its role as a source system for itself (Myself connection) or for another BW system.
    Caution: The Service API is an internal component. Therefore, it is not delivered separately, but as part of other software components: PI, CRM, PI_BASIS and several others. Thus, a composite SAP note is created for each SAPI correction note. This composite note describes which Support Packages of the above-mentioned software components contain the SAPI correction. However, the composite SAP note will only be released when the corresponding SAPI Support Packages are released.
    Regards
    Pradip

  • Table space not reduce after delete in oracle 10g

    Hi..
    On my system, I have found that my Oracle tablespace usage does not go down after a delete. Why? Could somebody help me? For your information, I am using Oracle 10g.
    Thank you,
    Baharin

    After a DELETE, the space is not freed and the high water mark is not reset. To regain the space you need to reorganize the objects from which you deleted the data. This can be done in several ways (see the sketch after this list):
    1) Move the objects.
    ALTER TABLE temp MOVE --> optionally a tablespace clause can be used. After this you need to rebuild the table's indexes.
    2) With 10g the table can be shrunk or reorganized to free the space:
    alter table mytable enable row movement;
    alter table mytable shrink space;
    3) Export/Import
    Export the objects, then drop and recreate them with import.
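    Putting options 1 and 2 together, a minimal sketch using a hypothetical table MYTABLE and index MYTABLE_IDX:
    -- Option 1: move the table; this resets the HWM but marks its indexes UNUSABLE
    ALTER TABLE mytable MOVE;
    ALTER INDEX mytable_idx REBUILD;
    -- Option 2 (10g and later): shrink in place; CASCADE also shrinks dependent segments
    ALTER TABLE mytable ENABLE ROW MOVEMENT;
    ALTER TABLE mytable SHRINK SPACE CASCADE;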

  • Oracle Table Recomendations

    Hello All,
    We have source data for prepaid system like voice calls, sms etc.
    Oracle database 10G is our target database.
    The average number of records per day is 20 million for voice and 10 million for SMS.
    We have two options for loading data in oracle tables.
    1. Load the source data into separate tables like voice calls data into voice_calls partitioned(call_date)
    and sms data into sms_data table (also partitioned on date).
    2. The second way is to load all calculated columns into one table.
    Columns like call_type (voice/sms), mobile_number, call_date, duration, balance_used, etc.
    Partition the table based on call_type and sub-partition on the call_date column.
    And (if possible) further sub-partition on call_dir, i.e. either incoming call / outgoing call.
    And place the remaining columns from the source data (which are not required for analysis) into separate tables.
    The benefit of adopting the second technique is that during revenue analysis we can get all information from one table and we don't need to join multiple tables.
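    For illustration, a sketch of a composite-partitioned table along the lines of option 2 (the table and column definitions are assumptions, not the actual schema). Note that LIST-RANGE composite partitioning is not available in 10g, so this sketch partitions by RANGE on call_date and sub-partitions by LIST on call_type:
    CREATE TABLE cdr_facts (
      call_type     VARCHAR2(10),
      call_dir      VARCHAR2(10),
      mobile_number VARCHAR2(20),
      call_date     DATE,
      duration      NUMBER,
      balance_used  NUMBER
    )
    PARTITION BY RANGE (call_date)
    SUBPARTITION BY LIST (call_type)
    SUBPARTITION TEMPLATE (
      SUBPARTITION sp_voice VALUES ('VOICE'),
      SUBPARTITION sp_sms   VALUES ('SMS')
    )
    (
      PARTITION p_2008_01 VALUES LESS THAN (TO_DATE('2008-02-01','YYYY-MM-DD')),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );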
    Now my questions are:
    Is it feasible to create the calculated-columns table holding data from different sources, given that its size will increase by approximately 10 GB per day, or is it recommended to create separate calculated tables?
    Which method will give us performance and retrieval benefits?
    Any performance issue while inserting/selecting data from such fact table with large data volume and sub partitions?
    Any problem during analyzing of such table?
    Thanks

    hi~
    1. Tablespace coalescing (merging free space) is recommended.
    2. Index rebuild: when a table has indexes and DML occurs frequently, the number of delete-marked index entries grows; removing them by rebuilding improves both space usage and lookup efficiency.
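    For example (object names are placeholders):
    ALTER TABLESPACE users COALESCE;   -- merges adjacent free extents in the tablespace
    ALTER INDEX my_index REBUILD;      -- discards delete-marked entries and compacts the index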

  • How to search for the most fragmented tables in a database (10g)

    How do I search for the most fragmented tables in a database (10g), and what query should I use?

    I mean the tables on which most DML operations happened (mainly deletion), because of which the HWM is set high for the table. I know that by rebuilding, the table segment can be compressed and we gain free space in the tablespace too.
    OK, but to what end do you gain that free space in the tablespace? Say you had a table of 1,000,000 rows, and you deleted 900,000 of those rows, emptying out 'x' of 'y' extents. If you expect the table to grow to 1,000,000 rows again (not an unreasonable assumption), then you will just need to reclaim the space you freed up with your reorg (by grabbing new extents).

  • Problem with setup table fillup

    Hi,
    I wish to extract data using the Business Content DataSource 2LIS_02_HDR (Purchasing). I have activated the extract structure and done all other necessary steps. I have deleted the setup tables. When I fill up the setup tables I get the error "No Extraction structure is active or no BW is connected". I checked with the Basis guy regarding the connectivity, which is absolutely fine. Please let me know if you have faced this kind of problem before.
    Thanks in advance,
    Tapan

    hi,
    OSS note 914268 - No extraction structure active or no BW connected (the full text is quoted in the reply to "Error in filling the table" above). In short: a program error prevents a combined ERP/BW instance using the Myself connection from recognising itself as a connected BW system; the fix is Service API 7.00 Support Package 08 / PI 2004.1 4.6C Support Package 13, or the attached program corrections.

  • Factory calendar table update

    Hello gurus,
    An urgent requirement!
    I have three source systems configured in BW.
    The factory calendar IDs for two of the systems were not found in BW.
    When I use "rebuild tables" for the factory calendar IDs via the transfer global settings option for one source system, it updates the table, but it overwrites it when I use the same option for another source system.
    Suggestion please
    Thanks in advance
    Ram

    Hi,
    the program doesn't allow updating the factory calendar!
    You need an alternative way.
    I suggest two possibilities:
    Transaction SCU0 to compare all the tables by RFC in the remote system (you can automate this comparison in a program), but the comparison will cover all of these tables:
    TCALS
    TFACD
    TFACS
    TFACT
    TFAIN
    TFAIT
    THOC
    THOCD
    THOCI
    THOCS
    THOCT
    THOL
    THOLT
    THOLU
    or for the IMG activity related to calendar.
    The other way is to use the BC Set functionality. When you modify a calendar in the source system, you record all modifications in a request; then you can capture this request in a BC Set. Afterwards you can download it and upload it into BW.
    Regards,
    Sergio

  • 0UNIT - No SID value found for value 'FYR' of characteristic 0UNIT

    Dear All,
    Request your help for the following issue observed.
    Please find below the details of the issue observed:
    Scenario: A flat file upload into an ODS with the field Alternate Unit of Measure (mapped to 0UNIT).
    Issue: While activating the data in the ODS (Transformation, RSDS transaction datasource), it gives an error stating 'No SID found for value 'FOZ' of characteristic 0UNIT'.
    Measures tried: Transferring Global Settings, Checking in the table 'T006', Checking the format of the flat file.
    May I please request your help for the above issue at the earliest.
    Thank you.
    Regards,
    Kunal Gandhi

    Hello,
    Same issue here.
    I've checked the entries in 0UNIT SID table: The unit "xxx" is not present.
    But this unit is present in T006 table, with no specific option.
    The update mode for master data in SPRO is: "Create master data if not available (Auto SID)".
    The update mode in infopackage is "Always update data, even if no master data exist for the data"
    I've used the "Rebuild tables" mode for Units of measurement for my source system.
    Then I tried the simulate update for my data package.
    ==> No errors
    ==> Deleted the package from the data target
    ==> Manual update of data from the package
    ==> Status changed to green
    ==> Checked the 0UNIT SID table: the unit "xxx" is now present!
    Hope this helps

  • What is the difference between Fiscal year variant, Fiscal year and Calyear

    hi,
    What is the difference between
    1. Fiscal Year Variant (0FISCVARNT)
    2. Fiscal Year (0FISCYEAR)
    3. Calendar Year (0CALYEAR)
    In what scenarios should they be used? If we are getting 0FISCVARNT data from an ODS, can we change it into one of the others in the cube at the update rules level or some other level?
    How can we get the factory calendar?
    Message was edited by:
            Avneet M

    Hi Gurus
    I was going through this thread and came up with some more questions.
    1. Do we need to set the fiscal year and fiscal year variant settings once and for all in BW? (The path is given in one of the posts above.) I need to know whether this is a one-time setting / rebuilding of the tables. What happens when we select factory calendar and fiscal year variant and execute "Rebuild tables" in transfer global settings? Is this something we have to do only once for the whole BW system?
    2. When we go to the SCAL t-code, what do we need to do and what does it do in the system? I know it is for the factory calendar, but I just want to know what it does. Is it also a one-time task during the BW build phase?
    3. How can we link the factory calendar and the fiscal year? Is it through the fiscal year variant?
    Please help me clear up these doubts, thanks in advance.

  • Connection Problem

    Hi Gurus,
    I am new to SDN and my query is about the SAP BW / SAP R/3 connection. In RSA1 I right-clicked the source system and chose Check, and that worked fine for the R/3 source system. I also checked SM59 in both R/3 and BW and found both logical systems defined.
    Then I started working on the LO cockpit customization (LBWE screen). I first maintained the extract structure for sales, then deleted the setup tables and activated the DataSource. When I started filling the setup tables I got the message "no extraction structure active or no BW connected". I used OLI7BW and also tried the standard t-code SBIW for filling the setup tables.
    I am surprised because the extract structure and the DataSource 2LIS_11_VAHDR are active and green in colour. I am not able to run the job in job control. I hope I am clear. Please tell me how to identify whether this is a Basis problem and report it to the appropriate people, or whether there is anything wrong on my side.
    I appreciate quick responses.
    Thank you very much.

    hi Pavan,
    welcome to SDN ...
    check whether OSS note 914268 (No extraction structure active or no BW connected) helps.
    Also verify the connection in SM59 with the authorization test (not only the connection test), menu Test -> Authorization; do this in both BW and R/3.
    And take a look at:
    problem when initializing setup tables for lo update
    hope this helps.
    914268 - No extraction structure active or no BW connected (the full text of this note is quoted in the reply to "Error in filling the table" above).

  • Fiscal year variant V3 is not maintained for calendar year 2010

    Hello Gurus,
    I tried to load the data from the data source to the target and got the error message below. I checked the data on the R/3 side and the 2010 data is available.
    Diagnosis
    An error occurred in BI while processing the data. The error is documented in an error message.
    System Response
    A caller 01, 02 or equal to or greater than 20 contains an error message.
    Further analysis:
    The error message(s) was (were) sent by:
    Second step in the update
    Second step in the update
    Second step in the update
    Second step in the update
    Procedure
    Check the error message (pushbutton below the text).
    Select the message in the message dialog box, and look at the long text for further information.
    Follow the instructions in the message.
    1) Fiscal year variant V3 is not maintained for calendar year 2010
    2)Error when assigning SID: Action VAL_SID_CONVERT table 0FISCPER
    3)Error when assigning SID: Action VAL_SID_CONVERT table 0FISCYEAR
    4)Activation of M records from Data Store object 0FIAP_O03 terminated
    5)Process 000052 returned with errors
    6)Process 000119 returned with errors
    Please suggest.
    Thanks
    Prakash

    Prakash,
    I suspect the 2010 fiscal year variants are not maintained in your BI system.
    Please do this operation:
    Go to RSA1 -> Source systems -> select the R/3 system -> right click -> Transfer global settings -> Fiscal year variants -> Rebuild tables.
    Then repeat the load. The load will be fine if the calendar is maintained in R/3; if not, raise the issue with the functional people.

  • Mail server is too slow to deliver the mail to internal domain

    Hi,
    My mail server is fast enough when sending mail to other domains, but when I try to send mail to my own domain it is too slow; sometimes it takes 30 to 40 minutes to deliver the mail.
    Please help
    Thanks,
    Gulab Pasha

    You should use Statspack to check what the main waits are.
    Some indicators to check (see the query sketch at the end of this reply for one example):
    - too many full table scans / excessive IO => check SQL statements (missing index, wrong WHERE clause)
    - explain plan for the most important queries: using CBO or RBO? If CBO, statistics should be up to date. If RBO, check the access path.
    - excessive logfile switches (> 5 per hour) => increase the redo log size or disable logging where appropriate
    - undo waits => not enough rollback segments (if you have not set AUM)
    - data block waits => alter INITRANS, PCTFREE, PCTUSED
    - too many chained rows => rebuild the affected data or rebuild the table
    - too many levels in indexes => rebuild the index
    - excessive parsing => use bind variables or alter the CURSOR_SHARING parameter
    - too many sorts on disk => increase SORT_AREA_SIZE and create additional temporary tablespaces on separate disks
    - too many block reads per row => DB_BLOCK_SIZE too small or too many chained rows
    - too much LRU latch contention => increase latches
    - OS swapping/paging?
    To improve performance:
    - alter and tune some parameters: OPTIMIZER_MODE, SORT_AREA_SIZE, SHARED_POOL_SIZE, OPTIMIZER_INDEX_COST_ADJ, DB_FILE_MULTIBLOCK_READ_COUNT...
    - keep the most useful packages in memory
    - gather statistics regularly (if using the CBO)
    How do your users access the db?
    Jean-François Léguillier
    Consultant DBA
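    As an example of one of the checks above, logfile switch frequency can be eyeballed per hour with a quick query against V$LOG_HISTORY:
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;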

  • Simon Greener's Morton Key Clustering in Oracle Spatial

    Hi folks,
    Apologies for the rambling.  With mattyschell heading for greener open source big apple pastures I am looking for new folks to bounce ideas and code off.  I was thinking this week about the discussion last autumn over spatial clustering.
    https://community.oracle.com/thread/3617887
    During the course of the thread we all kind of pooh-poohed spatial clustering as not much of solution, myself being one of the primary poohers.  Yet the concept certainly remains as something to consider regardless of our opinions.  The yellow book, the Greener/Ravada book, Simon's recent treatise (http://download.oracle.com/otndocs/products/spatial/pdf/biwa_2015/biwa2015_uc_comparativeperformance_greener.pdf), they all put forward clustering such that at the very least we should consider it a technique we should be able as professionals to do - a tool in the toolbox whether or not it always is the right answer.  I am mildly (very mildly) curious to see if Kothuri, Godfrind and Beinat will recycle their section on spatial clustering with the locked-down MD.HHENCODE into their 12c revision out this summer.  If they don't then what is the replacement for this technique?  If they do then we return to all of our griping about this ancient routine that Simon implies may date back to the CHS and their hhcode indexes - at least its not written in Java! 
    Anyhow, so I've been in the midst this month of refreshing some of the datasets I manage and considering clustering the larger tables whilst I am at it.  Do I really expect to see huge performance gains?   Well... not really.  But it does seem like something that should be easy to accomplish, certainly something that "doesn't hurt" and shows that I am on top of things (e.g. "checks the box").  But returning to the discussion from last fall, just what is the best way to do this in Oracle Spatial?
    So if we agree to ignore poor old MD.HHENCODE, then what?  Hilbert curves look nifty but no one seems to be stepping up with the code for them.  And this reroutes us back around to Simon and his Morton key code.
    http://www.spatialdbadvisor.com/oracle_spatial_tips_tricks/138/spatial-sorting-of-data-via-morton-key
    So who all is using Simon's code currently?  If you read that discussion from last fall there does not seem to be anyone doing so and we never heard back from Cat Person on either what he decided to do or what his name is.
    I thought I could take a stab at streamlining Simon's process somewhat to make things easier for myself to roll this onto many tables.  I put together the following small package
    https://github.com/pauldzy/DZ_SDO_CLUSTER/tree/master/Packages
    In particular I wanted to bundle up the side issues of how to convert your lines and polygons into points, automate things somewhat and provide a little verification function to see what results look like.  So again nothing that Simon does not already walk through on his webpage, just make it bit easier to bang out on your tables without writing a separate long SQL process for each one.
    So for example to use Simon's Morton key logic, you need to know the extent envelope of the data (in order to define a proper grid).  So if its a large table, you'd want to stash the envelope info in the metadata.  You can do this with the update_metadata_envelope procedure or just suffer through the sdo_aggr_mbr each time if you don't want to go that route (I have one table of small watershed polygons that takes about 9 hours to run sdo_aggr_mbr upon).  So just run things at the sql prompt
    SELECT
    DZ_SDO_CLUSTER.MORTON_UPDATE(
        p_table_name => 'CATCHMENT_NP21'
       ,p_column_name => 'SHAPE'
       ,p_grid_size => 1000
    )
    FROM dual;
    This will return the update clause populated with the values to use with the morton_key wrapper function, e.g. "morton_key(SHAPE,160.247133275879,-17.673722530871,.0956820001136141,.0352063207508021)".  So then just paste that into an update statement
    UPDATE foo
    SET my_morton_key = dz_sdo_cluster.morton_key(
        SHAPE
       ,160.247133275879
       ,-17.673722530871
       ,.0956820001136141
       ,.0352063207508021
    );
    Then rebuild your table sorting on the morton_key. I just use the TOAD rebuild table tool and manually add the ORDER BY clause to the rebuild script; I let TOAD do all the work of moving the indexes, constraints and grants to the new table. I imagine there are other ways to do this (one rough alternative is sketched below).
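    For anyone not using TOAD, a CTAS-based sketch of the same rebuild (names are illustrative, and constraints, grants and any other indexes would still need to be carried across by hand):
    CREATE TABLE foo_sorted AS
      SELECT * FROM foo ORDER BY my_morton_key;
    DROP TABLE foo PURGE;
    ALTER TABLE foo_sorted RENAME TO foo;
    -- re-create the spatial index (and any others) on the renamed table
    CREATE INDEX foo_spidx ON foo (shape)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX;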
    The final function is meant to be popped into Oracle mapviewer or something similar to show your family and friends the results.
    SELECT
    dz_sdo_cluster.morton_visualize(
        'NHDPLUS'
       ,'NHDFLOWLINE_NP21_ACU'
       ,'SHAPE'
       ,'OBJECTID'
       ,'100'
       ,10000
       ,'MORTON_KEY'
    )
    FROM dual;
    Look Mom, there it is!
    So anyhow this is first stab at things and interested in feedback or suggestions for improvement.  Did I get the logic correct?  Don't spare my feelings if I botched something.  Note that like Simon I passed on the matter of just how to determine the proper grid size.  I've been using 1000 for the continental US + Hawaii/PR/VI and sitting here this morning I think that probably is too large.  Of course it depends on the size of the geometries and thus the density of the resulting points.  With water features this can vary a lot from place to place, so perhaps 1000 is okay.  What would the algorithm be to determine a decent grid size?  It occurs to me I could tell you the average feature count per morton key value, okay well its about 10.  That seems small to me.  So I could see another function in this package that returns some kind of summary on the results of the keying to tell you if your grid size estimate was reasonable.
    Cheers and Happy Saturday,
    Paul

    I've done some spatial clustering testing this week.
    Firstly, to reiterate the purpose of spatial clustering as I see it:  spatial clustering can be of benefit in situations where frequent window based spatial queries are made.  In particular it can be very useful in web mapping scenarios where a map server is requesting data using SDO_FILTER or SDO_ANYINTERACT and there is a need to return the data as quickly as possible.  If the data required to satisfy the query can be squeezed into as few blocks as possible, then the IO overhead is clearly reduced.
    As Bryan mentioned above, once the data is in the buffer cache, then the advantage of spatial clustering is reduced.  However it is not always possible to get/keep enough of the data in the buffer cache, so I believe spatial clustering still has merits, particularly if it can be implemented alongside spatial partitioning.
    I ran the tests using an 11.2.0.4 database on my laptop.  I have a hard disk rather than SSD, so the effects of excessive IO are exaggerated.  The database is configured with the default 8kb block size.
    Initially, I created a table PARCELS:
    create table parcels (
    id            integer,
    created_date  date,
    x            number,
    y            number,
    val1          varchar2(20),
    val2          varchar2(100),
    val3          varchar2(200),
    geometry      mdsys.sdo_geometry,
    hilbert_key  number);
    I inserted 2.8 million polygons into this table.  The CREATED_DATE is the actual date the polygons were captured.  I populated val1, val2 and val3 with string values to pad the rows out to simulate some business data sitting alongside the sdo_geometry.
    I set X,Y to the first ordinate of the polygon and then set hilbert_key = sdo_pc_pkg.hilbert_xy2d(power(2,31), x, y).
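    Roughly how that population step might look; using SDO_UTIL.GETVERTICES to pull the first vertex is my assumption, not necessarily how it was actually done:
    UPDATE parcels p
    SET (x, y) = (SELECT v.x, v.y
                  FROM   TABLE(SDO_UTIL.GETVERTICES(p.geometry)) v
                  WHERE  v.id = 1);
    UPDATE parcels
    SET    hilbert_key = sdo_pc_pkg.hilbert_xy2d(POWER(2,31), x, y);
    COMMIT;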
    I then created 4 tables to base the tests upon:
    PARCELS_RANDOM:  Ordered by dbms_random.random - an absolute worst case scenario.  Unrealistic, but worthwhile as a benchmark.
    PARCELS_BASE_DATE:  Ordered by CREATED_DATE.  This is probably pretty close to how the original source data is structured on disk.
    PARCELS_RTREE:  Ordered by RTree.  Achieved by inserting based on an SDO_FILTER query
    PARCELS_HILBERT:  Ordered by the hilbert_key attribute
    As a first test, I counted the number of blocks required to satisfy an SDO_FILTER query.  E.g.
    select count(distinct(dbms_rowid.rowid_block_number(rowid)))
    from parcels_rtree
    where sdo_filter(geometry,
                    sdo_geometry(2003, 2157, null, sdo_elem_info_array(1, 1003, 3),
                                    sdo_ordinate_array(644232,773809, 651523,780200))) = 'TRUE';
    I'm assuming dbms_rowid.rowid_block_number(rowid) is suitable for this.
    I ran this on each table and repeated it over three windows.
    Results:
    So straight off we can see that the random ordering gave pretty horrific results as the data required to satisfy the query is spread over a large number of blocks.  The natural date based clustering was far better. RTree and Hilbert based clustering reduced this by a further 50% with Hilbert just nosing out RTree.
    Since web mapping is the use case I am most likely to target, I then setup a test case as follows:
    Setup layers in GeoServer for each of the tables
    Used a script to generate 1,000 random squares over the extent of the data, ranging from 200m to 500m in width and height.
    Used JMeter to make a WMS request for a png of the each of the 1,000 windows.  JMeter was run sequentially with just one thread, so it waited for each request to complete before starting the next.  I ran these tests 3 times to balance out the results, flushing the buffer cache before each run.
    Results:
    Again the random ordering performed woefully bad - somewhat exacerbated by the quality of the disk on my laptop.  The natural date based clustering performed far better.  RTree and hilbert based clustering further reduced the time by more than half.
    In summary, the results suggest that spatial clustering is worth the effort if:
    the data is not already reasonably well clustered
    you've got a decent quantity of data
    you're expecting a lot of window based queries which need to be returned as quickly as possible
    you don’t expect to be able to fit all the data in the buffer cache
    When it comes to deciding between RTree and Hilbert (or Morton/z-order or any other space filling curve method).... I found that the RTree method can be a bit slow on large datasets, although this may not matter as a one off task.  Plus it requires a spatial index on the source table to start off with.  The key based methods are based on an xy, so for lines and polygons there is an intermediate step to extract an xy.  I would tend to recommend this approach if you also partition the data based on a subset of the cluster key.
    Scripts are available here: https://github.com/john-otoole/oracle_spatial_cluster_test
    John
