Cube Validation - with Direct I/O

Hi Team,
Could you provide information on the below:
Can we perform cube validation with Direct I/O?
What will be the impact with respect to memory?
Thanks,
SK.

Can we perform the cube validation with Direct I/O?^^^Why would you think you could not?
What will be the impact w.r.t memory?^^^Is this a question on a test? Try it. I would imagine that if the db wasn't loaded, it would load it. That would use memory. As it validated, I would guess that it would fill the data file cache. That would also use memory. I honestly don't know, but if I were worried that running a cube validation would eat all the memory on my server, I would try it on a very small database and see what happens.
Sorry to be snippy, but for goodness' sake, this is such an easy thing to try and test. You can get the memory information directly from the server or via EAS, and of course MaxL will tell you whether you can or cannot validate a Direct I/O database.
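For what it's worth, the validation statement itself is the same regardless of the I/O mode. If memory serves, it is something along these lines in MaxL (substitute your own application.database and log file name):

    alter database Sample.Basic validate data to local logfile 'validate.log';

Run it against a small test database first and watch the data file cache statistics if memory is your worry.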
Regards,
Cameron Lackpour
P.S. Also, while I hardly expect to be rewarded after the above, you might think about marking your (15 as of the writing of this post) questions as answered if indeed they are and assigning points to those who have helped you.
P.P.S. 811829, I'm sorry if I am getting snippy, but I have noticed a series of open-ended questions on this board that, while the sign of someone who wants to learn stuff, also suggests that you are not trying these things out yourself. Go on and try, Essbase won't bite. :)

Similar Messages

  • Problem with direct input program while uploading data into database

    TABLES:  BGR00,                        " Session record
             BMM00,                        " MM01/MM02 BTCI header data
             BMMH1,                        " MM01/MM02 main data
             BMMH2,                        " Country data (taxes)
             BMMH3,                        " Forecast values
             BMMH4,                        " Consumption values
             BMMH5,                        " Short texts
             BMMH6,                        " Units of measure
             BMMH7,                        " Long texts
             BMMH8.                        " Referential EANs
    * Record types
    DATA:    MAPPENSATZ  LIKE BMM00-STYPE VALUE '0',
             KOPFSATZ    LIKE BMM00-STYPE VALUE '1',
             HAUPTSATZ   LIKE BMM00-STYPE VALUE '2',
             KUN_SATZ    LIKE BMM00-STYPE VALUE 'Z',
             LANDSATZ    LIKE BMM00-STYPE VALUE '3',
             PROGSATZ    LIKE BMM00-STYPE VALUE '4',
             VERBSATZ    LIKE BMM00-STYPE VALUE '5',
             KTEXTSATZ   LIKE BMM00-STYPE VALUE '6',
             MESATZ      LIKE BMM00-STYPE VALUE '7',
             TEXTSATZ    LIKE BMM00-STYPE VALUE '8',
             EANSATZ     LIKE BMM00-STYPE VALUE '9'.
    * Common data area for the externally called routines
    * Initial structures
    DATA:  BEGIN OF COMMON PART RMMMBIMY.
    DATA:    BEGIN OF I_BMM00.
               INCLUDE STRUCTURE BMM00.    " Header data
    DATA:    END OF I_BMM00.
    DATA:    BEGIN OF I_BMMH1.
               INCLUDE STRUCTURE BMMH1.    " Main data
    DATA:    END OF I_BMMH1.
    DATA:    BEGIN OF I_BMMH2.
               INCLUDE STRUCTURE BMMH2.    " Country data
    DATA:    END OF I_BMMH2.
    DATA:    BEGIN OF I_BMMH3.
               INCLUDE STRUCTURE BMMH3.    " Forecast values
    DATA:    END OF I_BMMH3.
    DATA:    BEGIN OF I_BMMH4.
               INCLUDE STRUCTURE BMMH4.    " Consumption values
    DATA:    END OF I_BMMH4.
    DATA:    BEGIN OF I_BMMH5.
               INCLUDE STRUCTURE BMMH5.    " Short texts
    DATA:    END OF I_BMMH5.
    DATA:    BEGIN OF I_BMMH6.
               INCLUDE STRUCTURE BMMH6.    " Units of measure
    DATA:    END OF I_BMMH6.
    DATA:    BEGIN OF I_BMMH7.
               INCLUDE STRUCTURE BMMH7.    " Text lines
    DATA:    END OF I_BMMH7.
    DATA:    BEGIN OF I_BMMH8.
               INCLUDE STRUCTURE BMMH8.    " Referential EANs
    DATA:    END OF I_BMMH8.
    DATA:  END OF COMMON PART.
    DATA: WA LIKE TEDATA-DATA.
    * Individual fields
    DATA:    GROUP_COUNT(6) TYPE C,    " Number of sessions
             TRANS_COUNT(6) TYPE C,    " Old definition for RMMMBIM0
             SATZ_COUNT  LIKE MUEB_REST-TRANC, " New transaction counter
             H_IND_COUNT LIKE MUEB_REST-D_IND, " Index of field to reset
             SATZ2_COUNT(6) TYPE C.    " Records per transaction excl. header
    DATA:    XEOF(1)          TYPE C,  " X = end of file reached
             XHAUPTSATZ_EXIST TYPE C,  " X = main record exists for header
             NODATA(1)        TYPE C.  " No batch input for this field
    * mk/15.08.94:
    DATA:    GROUP_OPEN(1)  TYPE C.             " X = session already open
    *eject
    * Constants
    DATA:    C_NODATA(1)    TYPE C VALUE '/'.   " Default for NODATA
    DATA:    MATNR_ERW     LIKE MARA-MATNR  VALUE '0                 '.
    DATA:    MATNR_ERW_INT LIKE MARA-MATNR.  " Internal view of '0      '
    DATA:    MATNR_LAST    LIKE MARA-MATNR.  " Material number
    * mk/11.08.94 2.1H:
    * If this flag is initial, the database updates will be done directly
    * during background maintenance instead of using a separate update
    * task. (no usage of this flag in dialogue mode!)
    DATA: DBUPDATE_VB(1) VALUE ' '.       "note 306628
    data: matsync type mat_sync. "wk/99a no update in dialog if called
    ***INCLUDE ZMUSD070.
    TABLES: MARA,                          "Material Master: General Data
            MARC,                          "Material Master: C Segment
            MARD,                          "Material Master: St Loc/Batch
            MBEW,                          "Material Valuation
            MVKE,                          "Material Master: Sales Data
            MLGN,                          "Material Data per Whse Number
            MLAN,                          "Tax Classification: Material
            T001W,                         "Plants/Branches
            TBICU.
    DATA: BEGIN OF VALUTAB OCCURS 0.
            INCLUDE STRUCTURE RSPARAMS.
    DATA: END OF VALUTAB.
    DATA: BEGIN OF VARTECH.
            INCLUDE STRUCTURE VARID.
    DATA: END OF VARTECH.
    DATA: PARMS LIKE ZXXDCONV.
    DATA: REC_COUNT      TYPE  I,
          REC_COUNT_BAD  TYPE  I,
          ZJOBID         LIKE  TBIZU-JOBID,
          ZJOBCOUNT      LIKE  TBIZU-JOBCOUNT,
          ZMATNR         LIKE  MARA-MATNR,
          ZTEXT(80)      TYPE  C.
    CONSTANTS: LIT_ZERO(18)  TYPE  C            VALUE '000000000000000000',
               LIT_CHAR      TYPE  C            VALUE '_',
               LIT_CREATE    LIKE  BMM00-TCODE  VALUE 'MM01',
               LIT_CHANGE    LIKE  BMM00-TCODE  VALUE 'MM02',
               LIT_CHECK(1)  TYPE  C            VALUE 'X'.
    DATA:  BEGIN OF INP_DATA OCCURS 0,
             MATNR(18)  TYPE C,            " Material code
             UMREN(6)   TYPE C,            " Denominator
             MEINH(3)   TYPE C,            " Alternate UOM
             UMREZ(6)   TYPE C,            " Numerator
           END OF INP_DATA.
    *eject
    SELECTION-SCREEN BEGIN OF BLOCK INOUT WITH FRAME TITLE TEXT-001.
    SELECTION-SCREEN BEGIN OF LINE.
    SELECTION-SCREEN COMMENT (13) TEXT-004.
    PARAMETERS:     P_PC        RADIOBUTTON GROUP SRC DEFAULT 'X'.
    SELECTION-SCREEN COMMENT (6) TEXT-005.
    PARAMETERS:     P_UNIX      RADIOBUTTON GROUP SRC.
    SELECTION-SCREEN COMMENT (6) TEXT-006.
    PARAMETERS:     P_DS_TYP    LIKE     ZXXDCONV-DS_TYP
                                   DEFAULT 'ASC'.
    SELECTION-SCREEN END OF LINE.
    *SELECT-OPTIONS: S_PATH      FOR      PARMS-PATH
    *                              NO INTERVALS
    *                              LOWER CASE.
    PARAMETERS:  P_PATH TYPE RLGRAP-FILENAME.
    PARAMETERS:     P_HDRLIN   LIKE     ZXXDCONV-HDR_LINES
                                   DEFAULT 0,
                    P_JOBNAM   LIKE     TBICU_S-JOBNAME
                                   MEMORY ID BM1,
                    P_DI_EXE    AS       CHECKBOX
                                   DEFAULT  LIT_CHECK,
                    P_MAPPE     LIKE     BGR00-GROUP
                                   DEFAULT  'MRP_UOM_LOAD'
                                   NO-DISPLAY.
    SELECTION-SCREEN END OF BLOCK INOUT.
    *eject
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR P_PATH.
      CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
           EXPORTING
                PROGRAM_NAME  = SYST-REPID
                DYNPRO_NUMBER = SYST-DYNNR
                FIELD_NAME    = 'P_PATH'
           CHANGING
    *           FILE_NAME     = S_PATH-LOW
                FILE_NAME     = P_PATH
           EXCEPTIONS
                MASK_TOO_LONG = 1
                OTHERS        = 2.
    AT SELECTION-SCREEN.
    * Set up parameter record
      PARMS-UNIX      = P_UNIX.
      PARMS-PC        = P_PC.
      PARMS-DS_TYP    = P_DS_TYP.
      PARMS-JOBNAME   = P_JOBNAM.
      PARMS-MAPPE     = P_MAPPE.
      PARMS-HDR_LINES = P_HDRLIN.
    *eject
    *      Main Processing Routine                                        *
    START-OF-SELECTION.
    * Initialization
      PERFORM 0000_HOUSEKEEPING.
    * Initialize transaction data in I_BMM00 (now done after the file is
    * read; see below)
    * PERFORM 0500_INIT_BMM00.
    * Process input file (old multi-file S_PATH logic commented out)
    * SORT S_PATH BY SIGN OPTION LOW.
    *      MOVE S_PATH-LOW TO PARMS-PATH.
      MOVE P_PATH TO PARMS-PATH.
    * LOOP AT S_PATH.
    *    AT NEW LOW.
      CLEAR   INP_DATA.
      REFRESH INP_DATA.
    * Read source data into internal table
      PERFORM 1000_GET_SOURCE_DATA TABLES INP_DATA.
    * Process each record in internal table
      ZTEXT    = TEXT-007.
      ZTEXT+13 = PARMS-DS_NAME.
      PERFORM 4000_PROGRESS_INDICATOR USING ZTEXT.
    * Initialize transaction data in I_BMM00
      PERFORM 0500_INIT_BMM00.
      LOOP AT INP_DATA.
    * Reset structures for each record
        BMM00              = I_BMM00.
        BMMH1              = I_BMMH1.
        BMMH6              = I_BMMH6.
    * Load structures with data
        MOVE-CORRESPONDING INP_DATA TO BMM00.
        PERFORM 2000_WRITE_OUTPUT USING BMM00.
        MOVE-CORRESPONDING INP_DATA TO BMMH1.
        PERFORM 2000_WRITE_OUTPUT USING BMMH1.
        MOVE-CORRESPONDING INP_DATA TO BMMH6.
        PERFORM 2000_WRITE_OUTPUT USING BMMH6.
        REC_COUNT = REC_COUNT + 1.
      ENDLOOP.
    *    ENDAT.
    * ENDLOOP.
      IF  REC_COUNT GT 0
      AND P_DI_EXE  EQ LIT_CHECK.
        PERFORM 3000_START_DI_JOB.
      ENDIF.
      WRITE: / TEXT-008,
               REC_COUNT.
      PERFORM 9000_END_OF_JOB.
    *eject
    * Include containing common routines used by direct input programs
      INCLUDE ZMUSD071.
    *eject
    *      FORM 0500_INIT_BMM00                                           *
    *      Initialize I_BMM00 with transaction code and views selected    *
    FORM 0500_INIT_BMM00.
    * Begin of changes by samson: choose MM02 (change) if the material
    * already exists in MARA, otherwise MM01 (create)
      IF NOT INP_DATA[] IS INITIAL.
        SELECT SINGLE MATNR FROM MARA INTO ZMATNR
               WHERE MATNR = INP_DATA-MATNR.
        IF SY-SUBRC = 0.
          I_BMM00-TCODE = LIT_CHANGE.
    * Basic data view
          I_BMM00-XEIK1 = LIT_CHECK.
        ELSE.
          I_BMM00-TCODE = LIT_CREATE.
    * Basic data view
          I_BMM00-XEIK1 = LIT_CHECK.
        ENDIF.
      ENDIF.
    * End of changes by samson
    * Old unconditional version, superseded by the logic above:
    * Transaction code
    *   I_BMM00-TCODE = LIT_CHANGE.
    * Basic data
    *   I_BMM00-XEIK1  = LIT_CHECK.
    ENDFORM.
    INCLUDE ZMUSD069.
    *eject
    *      FORM 0000_HOUSEKEEPING                                         *
    *      Initialization routines                                        *
    FORM 0000_HOUSEKEEPING.
      PERFORM 0010_LDS_NAME.
      PERFORM 0020_DS_NAME.
      PERFORM 0030_OPEN_FILE.
      PERFORM 0040_INIT_STRUCTS.
    ENDFORM.
    *eject
    *      FORM 0010_LDS_NAME                                             *
    *      Obtain logical file name from DI job details                   *
    FORM 0010_LDS_NAME.
    * Check valid job name
      SELECT SINGLE * FROM  TBICU
                      WHERE JOBNAME EQ PARMS-JOBNAME.
      IF SY-SUBRC EQ 0.
        CALL FUNCTION 'RS_VARIANT_VALUES_TECH_DATA'
             EXPORTING
                  REPORT               = TBICU-REPNAME
                  VARIANT              = TBICU-VARIANT
             IMPORTING
                  TECHN_DATA           = VARTECH
             TABLES
                  VARIANT_VALUES       = VALUTAB
             EXCEPTIONS
                  VARIANT_NON_EXISTENT = 1
                  VARIANT_OBSOLETE     = 2
                  OTHERS               = 3.
        IF SY-SUBRC EQ 0.
          READ TABLE VALUTAB WITH KEY 'LDS_NAME'.
          MOVE VALUTAB-LOW TO PARMS-LDS_NAME.
        ELSE.
          MESSAGE I001 WITH PARMS-JOBNAME.
          MESSAGE A099.
        ENDIF.
      ELSE.
        MESSAGE I000 WITH PARMS-JOBNAME.
        MESSAGE A099.
      ENDIF.
    ENDFORM.
    *eject
    *      FORM 0040_INIT_STRUCTS                                         *
    *      Initialize structures for direct input records                 *
    FORM 0040_INIT_STRUCTS.
    * Start of standard SAP initialization from example program RMMMBIME
    *------- Write session record -
      CLEAR BGR00.
      BGR00-STYPE  = MAPPENSATZ.
      BGR00-GROUP  = PARMS-MAPPE.
      BGR00-NODATA = C_NODATA.
      BGR00-MANDT  = SY-MANDT.
      BGR00-USNAM  = SY-UNAME.
      BGR00-START  = BGR00-NODATA.
      BGR00-XKEEP  = BGR00-NODATA.
      PERFORM 2000_WRITE_OUTPUT USING BGR00.
    *----- Initialize structures -
      NODATA = BGR00-NODATA.
      PERFORM INIT_STRUKTUREN_ERZEUGEN(RMMMBIMI) USING NODATA.
    * End of standard SAP initialization from example program RMMMBIME
    ENDFORM.
    *eject.
    *      FORM 3000_START_DI_JOB                                         *
    *      Start direct input job                                         *
    FORM 3000_START_DI_JOB.
      ZTEXT = 'Starting '(021).
      ZTEXT+9 = TBICU-JOBNAME.
      PERFORM 4000_PROGRESS_INDICATOR USING ZTEXT.
      CALL FUNCTION 'BI_START_JOB'
           EXPORTING
                JOBID                 = ' '
                JOBTEXT               = TBICU-JOBNAME
                REPNAME               = TBICU-REPNAME
                SERVER                = TBICU-EXECSERVER
                VARIANT               = TBICU-VARIANT
                NEW_JOB               = 'X'
                CONTINUE_JOB          = ' '
                START_IMMEDIATE       = 'X'
                DO_NOT_PRINT          = 'X'
                USERNAME              = SY-UNAME
           IMPORTING
                JOBID                 = ZJOBID
                JOBCOUNT              = ZJOBCOUNT
           EXCEPTIONS
                JOB_OPEN_FAILED       = 1
                JOB_CLOSE_FAILED      = 2
                JOB_SUBMIT_FAILED     = 3
                WRONG_PARAMETERS      = 4
                JOB_DOES_NOT_EXIST    = 5
                WRONG_STARTTIME_GIVEN = 6
                JOB_NOT_RELEASED      = 7
                WRONG_VARIANT         = 8
                NO_AUTHORITY          = 9
                DIALOG_CANCELLED      = 10
                JOB_ALREADY_EXISTS    = 11
                PERIODIC_NOT_ALLOWED  = 12
                ERROR_NUMBER_GET_NEXT = 13
                OTHERS                = 14.
      IF SY-SUBRC EQ 0.
        WRITE: / 'Direct input job'(022), TBICU-JOBNAME, 'started'.
      ELSE.
        WRITE: / 'Direct input failed with return code'(023), SY-SUBRC.
      ENDIF.
    ENDFORM.
    *eject
    *      FORM 0020_DS_NAME                                              *
    *      Convert logical file name to physical file name                *
    FORM 0020_DS_NAME.
      CALL FUNCTION 'FILE_GET_NAME'
           EXPORTING
                CLIENT           = SY-MANDT
                LOGICAL_FILENAME = PARMS-LDS_NAME
                OPERATING_SYSTEM = SY-OPSYS
           IMPORTING
                FILE_NAME        = PARMS-DS_NAME
           EXCEPTIONS
                FILE_NOT_FOUND   = 1
                OTHERS           = 2.
      IF SY-SUBRC NE 0.
        MESSAGE E002 WITH PARMS-LDS_NAME.
        MESSAGE A099.
      ENDIF.
    ENDFORM.
    *eject
    *      FORM 0030_OPEN_FILE                                            *
    *      Open physical file for output                                  *
    FORM 0030_OPEN_FILE.
    * OPEN DATASET PARMS-DS_NAME FOR OUTPUT IN TEXT MODE. "thg191105 (pre-Unicode)
      OPEN DATASET PARMS-DS_NAME FOR OUTPUT IN TEXT MODE
                                     encoding default. "thg191105
      IF SY-SUBRC NE 0.
        MESSAGE E003 WITH PARMS-DS_NAME.
        MESSAGE A099.
      ENDIF.
    ENDFORM.
    *eject
    *      FORM 1000_GET_SOURCE_DATA                                      *
    *      Read source data into internal table                           *
    *  -->  INP_DATA   " Name of internal table passed as parameter       *
    FORM 1000_GET_SOURCE_DATA TABLES INP_DATA.
      CALL FUNCTION 'Z_FILE_UPLOAD'
           EXPORTING
                UNIX                = PARMS-UNIX
                PC                  = PARMS-PC
                FILETYPE            = PARMS-DS_TYP
                FILENAME            = PARMS-PATH
                HDR_LINES           = PARMS-HDR_LINES
           TABLES
                DATA_TAB            = INP_DATA
           EXCEPTIONS
                CONVERSION_ERROR    = 1
                FILE_OPEN_ERROR     = 2
                FILE_READ_ERROR     = 3
                INVALID_TABLE_WIDTH = 4
                INVALID_TYPE        = 5
                NO_BATCH            = 6
                UNKNOWN_ERROR       = 7
                INVALID_SOURCE      = 8
                OTHERS              = 9.
    ENDFORM.
    *eject
    *      FORM 2000_WRITE_OUTPUT                                         *
    *      Write record in standard SAP structure to UNIX file            *
    *  -->  I_STRUCT   " Name of record passed as parameter               *
    *FORM 2000_WRITE_OUTPUT USING I_STRUCT."SRY28NOV05
    FORM 2000_WRITE_OUTPUT USING I_STRUCT TYPE ANY.      "SRY28NOV05
       TRANSFER I_STRUCT TO PARMS-DS_NAME.
      IF SY-SUBRC NE 0.
        MESSAGE E004 WITH PARMS-DS_NAME.
        MESSAGE A099.
      ENDIF.
    ENDFORM.
    *eject
    *&      Form  2100_WS_DOWNLOAD
    *       text                                                          *
    *  -->  p1        text
    *  <--  p2        text
    FORM 2100_WS_DOWNLOAD TABLES INP_DATA.
    * DATA: FILENAME LIKE RLGRAP-FILENAME.   "SRY28NOV05
      DATA: W_FILENAME TYPE STRING.             "SRY28NOV05
      DATA: W_FTYP(10) TYPE C VALUE 'DAT'.      "SRY28NOV05
    * MOVE PARMS-DS_NAME TO FILENAME.       "SRY28NOV05
      MOVE PARMS-DS_NAME TO W_FILENAME.      "SRY28NOV05
    *BEGIN OF BLOCK COMMENT BY SRY28NOV05
    * CALL FUNCTION 'WS_DOWNLOAD'
    *       EXPORTING
    *         BIN_FILESIZE        = ' '
    *         CODEPAGE            = ' '
    *            FILENAME            = FILENAME
    *            FILETYPE            = 'DAT'
    *         MODE                = ' '
    *         WK1_N_FORMAT        = ' '
    *         WK1_N_SIZE          = ' '
    *         WK1_T_FORMAT        = ' '
    *         WK1_T_SIZE          = ' '
    *         COL_SELECT          = ' '
    *         COL_SELECTMASK      = ' '
    *    importing
    *         filelength          =
    *       TABLES
    *            DATA_TAB            = INP_DATA
    *         FIELDNAMES          =
    *       EXCEPTIONS
    *            FILE_OPEN_ERROR     = 1
    *            FILE_WRITE_ERROR    = 2
    *            INVALID_FILESIZE    = 3
    *            INVALID_TABLE_WIDTH = 4
    *            INVALID_TYPE        = 5
    *            NO_BATCH            = 6
    *            UNKNOWN_ERROR       = 7
    *            OTHERS              = 8.
    *END OF BLOCK COMMENT BY SRY28NOV05
    *BEGIN OF BLOCK ADDED BY SRY28NOV05
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          FILENAME                        = W_FILENAME
          FILETYPE                        = W_FTYP
        TABLES
          DATA_TAB                        = INP_DATA
       EXCEPTIONS
         FILE_WRITE_ERROR                = 1
         NO_BATCH                        = 2
         GUI_REFUSE_FILETRANSFER         = 3
         INVALID_TYPE                    = 4
         NO_AUTHORITY                    = 5
         UNKNOWN_ERROR                   = 6
         HEADER_NOT_ALLOWED              = 7
         SEPARATOR_NOT_ALLOWED           = 8
         FILESIZE_NOT_ALLOWED            = 9
         HEADER_TOO_LONG                 = 10
         DP_ERROR_CREATE                 = 11
         DP_ERROR_SEND                   = 12
         DP_ERROR_WRITE                  = 13
         UNKNOWN_DP_ERROR                = 14
         ACCESS_DENIED                   = 15
         DP_OUT_OF_MEMORY                = 16
         DISK_FULL                       = 17
         DP_TIMEOUT                      = 18
         FILE_NOT_FOUND                  = 19
         DATAPROVIDER_EXCEPTION          = 20
         CONTROL_FLUSH_ERROR             = 21
         OTHERS                          = 22.
      IF SY-SUBRC NE 0.
         MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
               WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    *END OF BLOCK ADDED BY SRY28NOV05
    ENDFORM.                               " 2100_WS_DOWNLOAD
    *eject
    *      FORM 4000_PROGRESS_INDICATOR                                   *
    *      Write progress text to status bar                              *
    *  -->  TEXT   " Text passed as parameter                             *
    FORM 4000_PROGRESS_INDICATOR USING TEXT.
      CALL FUNCTION 'SAPGUI_PROGRESS_INDICATOR'
           EXPORTING
                PERCENTAGE = 0
                TEXT       = TEXT
           EXCEPTIONS
                OTHERS     = 1.
    ENDFORM.
    *eject.
    *      FORM 9000_END_OF_JOB                                           *
    *      Close files on UNIX                                            *
    FORM 9000_END_OF_JOB.
      CLOSE DATASET PARMS-DS_NAME.
    ENDFORM.

    Hi,
    Thanks for your reply. This is my requirement:
    I am trying to upload data from a flat file which contains material number, denominator, actual UOM, and numerator field values.
    I need to upload this data via MM02 and MM01: if the material number is new, it has to create the material; if the material already exists, it has to update the UOM values.
    I am getting the data into my internal table INP_DATA, and from there I am trying to upload it to the database using the job name MRP_MATERIAL_MASTER_DATA_UPLOAD with the direct input program RMDATIND.
    When I execute my program, I get a success message that all the records were written from the flat file to the application server, along with a "job started" message.
    Then, if I go into the SM37 screen and execute the job, it shows as active; if I refresh, it shows the job as completed.
    But when I look at the job log status, I find that for existing materials it is expecting the material type, and for new materials it is giving some gravity error.
    Could you help me with this? It would be great.
    Thanks & Regards,
    RamNV
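    A guess at the create-side failure (a sketch, not a verified fix): when RMDATIND creates a material via MM01, the BMM00 header record also needs the industry sector and material type, which FORM 0500_INIT_BMM00 above never fills. The field names exist in BMM00; the values below are placeholders for whatever your materials require:

    * Hypothetical extension of the ELSE (create) branch in 0500_INIT_BMM00:
        ELSE.
          I_BMM00-TCODE = LIT_CREATE.
          I_BMM00-XEIK1 = LIT_CHECK.     " basic data view
          I_BMM00-MBRSH = 'M'.           " industry sector (placeholder value)
          I_BMM00-MTART = 'FERT'.        " material type (placeholder value)
        ENDIF.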

  • Schema Validation with Java and Document object

    Hi, I am working on a project that will validate an XML Document object from a child class. I seem to have completed XSD validation with new DOMSource(doc) and SchemaFactory, which doesn't support DTD validation. Now I am attempting to validate the DTD. All the examples I've seen parse files, and I am struggling to find a way to validate directly against the Document object.
    The child class creates the XML document, which is passed back to the parent class, and I need to do the validation in the parent class, so I need a way to validate against the Document object. This will be an assertion in Fitnesse, if anyone is familiar with it.
    private void doSchemaValidationCommand(int rowCount, Document doc, String xpathExpression) {
        String validationPath = getText(rowCount, 1);
        String extensionSeparator = ".";
        Boolean error = false;
        String ext = null;
        try {
            int dot = validationPath.lastIndexOf(extensionSeparator);
            ext = validationPath.substring(dot + 1);
        } catch (Exception e) {
            this.wrong(rowCount, 1, e.getMessage());
        }
        if (ext.equalsIgnoreCase("xsd")) {
            try {
                SchemaFactory factory =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                File schemaLocation = new File(validationPath);
                Schema schema = factory.newSchema(schemaLocation);
                Validator validator = schema.newValidator();
                try {
                    validator.validate(new DOMSource(doc));
                } catch (SAXException ex) {
                    this.wrong(rowCount, 1, ex.getMessage());
                    error = true;
                } catch (IOException ex) {
                    this.wrong(rowCount, 1, ex.getMessage());
                    error = true;
                }
            } catch (SAXException ex) {
                this.wrong(rowCount, 1, ex.getMessage());
                error = true;
            } catch (Exception ex) {
                this.wrong(rowCount, 1, ex.getMessage());
                error = true;
            }
        } else if (ext.equalsIgnoreCase("dtd")) {
            // DTD branch -- see the snippet below
        }
    }

    I tried this, but it doesn't give me any indication of whether or not validation completed successfully; it just adds a DOCTYPE tag for the DTD to the XML and doesn't seem to do anything further with it. Am I missing any steps?
    else if (ext.equalsIgnoreCase("dtd"))
                  try {
                       DOMSource source = new DOMSource(doc);
                       StreamResult result = new StreamResult(System.out);
                       TransformerFactory tf = TransformerFactory.newInstance();
                       Transformer transformer = tf.newTransformer();
                       transformer.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, validationPath);
                       transformer.transform(source, result);
                  catch(Exception e)
                       this.wrong(rowCount, 1, e.getMessage());
                       error = true;
             }Edited by: sarcasteak on Apr 13, 2009 1:33 PM
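    You are missing a step: a transformer only attaches the DOCTYPE; it never validates. In JAXP, DTD validation happens at parse time, so one approach (a sketch under that assumption, not tested against Fitnesse) is to serialize the document with the DOCTYPE attached and re-parse it with a validating parser whose ErrorHandler records the failures. Dropped inside the dtd branch's try block above, it would look roughly like this:

        // Sketch: DTD-validate an in-memory Document by serializing it with
        // a DOCTYPE and re-parsing with a validating parser. Needs imports
        // from java.io, java.util, javax.xml.parsers, javax.xml.transform,
        // and org.xml.sax.
        StringWriter buffer = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, validationPath);
        t.transform(new DOMSource(doc), new StreamResult(buffer));

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setValidating(true);                  // turn on DTD validation
        DocumentBuilder builder = dbf.newDocumentBuilder();
        final List<String> problems = new ArrayList<String>();
        builder.setErrorHandler(new org.xml.sax.helpers.DefaultHandler() {
            public void error(SAXParseException e) {
                problems.add(e.getMessage());     // validity errors land here
            }
            public void fatalError(SAXParseException e) {
                problems.add(e.getMessage());
            }
        });
        builder.parse(new InputSource(new StringReader(buffer.toString())));
        if (!problems.isEmpty()) {
            this.wrong(rowCount, 1, problems.get(0));
            error = true;
        }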

  • Cube failed with invalid characteristics

    Hi,
    My cube load failed with "invalid characteristics". The data flow is source system to ODS and then to the cube.
    Loading into the ODS and its activation were successful. I would like to know why the ODS loading & activation did not fail with the above message while the cube load did...
    Thanks in advance..... CK

    Hi,
    BW accepts just capital letters and certain characters. The permitted characters list can be seen via transaction RSKC.
    There are several ways to solve this problem:
    1)     Removing the erroneous character in R/3 (for example, a vendor number that needs to be changed can be found in the PSA, in the line shown in the error message)
    2)     Changing or removing the character in the update rules (needs to be done in ABAP; a rough sketch follows below)
    3)     Adding the character to the BW permitted characters, if the character is really needed in BW
    4)     If the bad character only occurs once, it can be changed/removed directly by editing the PSA
    5)     Putting ALL_CAPITAL in the permitted characters. Needs to be tested first!
    Editing and updating from the PSA: first ensure that the load has arrived in the PSA, then delete the request from the data target, edit the PSA by double-clicking the field you wish to change, and save. Do not mark the line and press change; this will result in incorrect data. After you have corrected the PSA, right-click on the not-yet-loaded PSA and choose "start immediately."
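    For option 2, something along these lines in the update-rule routine would do it (a hedged sketch: RESULT is the routine's return field, and the permitted set below is only an assumption -- check what RSKC shows in your system):

          DATA: L_ALLOWED(60) TYPE C VALUE
                ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ',
                L_LEN TYPE I,
                L_IDX TYPE I.
          TRANSLATE RESULT TO UPPER CASE.
          L_LEN = STRLEN( RESULT ).
          WHILE L_IDX < L_LEN.
            IF NOT RESULT+L_IDX(1) CO L_ALLOWED.   " CO = contains only
              RESULT+L_IDX(1) = ' '.               " blank out the bad character
            ENDIF.
            L_IDX = L_IDX + 1.
          ENDWHILE.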
    Hope it will help you.
    Regards,

  • Validation with data file failed

    Hello All
    I have loaded the Currency Exchange Rates from a BW cube into the Rate model in BPC up to March 2014, and then our BW cube was updated with the Exchange Rates for April 2014.
    Now I am trying to load the Exchange Rates for April 2014 only into the Rate model, and for that I am validating the Transformation File, but it failed. It shows 'VALIDATION WITH DATA FILE FAILED'.
    The rejected list option shows a blank screen.
    With the same Transformation file I loaded the data up to March 2014 successfully.
    I have compressed the source cube and reactivated it, but I am still unable to validate the Transformation file. Below is my transformation file screen shot.
    Please let me know if you need any more details from my side.
    Thank you all in advance.
    Please help
    Regards
    Srikant

    Hi Shaik
    Thanks for your reply.
    I have validated the Transformation file with the value MAXREJECTCOUNT = 10000, but there is no change; the same error comes up, 'Validation with data file failed', without any errors listed.
    Thanks
    Srikant

  • Master data attributes with direct update...its very urgent

    Hi all,
    Could anyone tell me how to load master data attributes with direct update in the InfoPackage?
    Please provide steps to create master data attributes and how to load them.
    Thanks,
    Manjula

    Hi Manjula,
    Flexible Uploading
    Transaction code RSA1—LEAD YOU TO MODELLING
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( Transaction data )
    • In general tab give short, medium, and long description.
    • In extraction tab specify file path, header rows to be ignored, data format(csv) and data separator( , )
    • In proposal tab load example data and verify it.
    • In the field tab you can give the technical names of the info objects in the template, so that you do not have to map them during the transformation; the server will map them automatically. If you do not fill in this field tab, you have to map manually during the transformation in the InfoProviders.
    • Activate data source and read preview data under preview tab.
    • Create an info package by right-clicking the data source, and in the schedule tab click start to load data to the PSA. (Make sure to close the flat file during loading.)
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to create ODS( Data store object ) or Cube.
    • Specify a name for the ODS or cube and click create.
    • From the template window select the required characteristics and key figures and drag and drop it into the DATA FIELD and KEY FIELDS
    • Click Activate.
    • Right click on ODS or Cube and select create transformation.
    • In source of transformation , select object type( data source) and specify its name and source system Note: Source system will be a temporary folder or package into which data is getting stored
    • Activate created transformation
    • Create Data transfer process (DTP) by right clicking the master data attributes
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.
    4. Monitor
    Right-click the data target, select manage, and in the contents tab select contents to view the loaded data. There are two tables in an ODS, the new table and the active table; to move data from the new table to the active table, you have to activate it after selecting the loaded data. Alternatively, the monitor icon can be used.
    honor with points if this helps,
    Sudhakar

  • Cube compression WITH zero elimination option

    We have tested turning on the switch to perform "zero elimination" when a cube is compressed.  We have tested this with an older cube with lots of data in the E table already, and also with a new cube on its first compression.  In both cases, at the Oracle level we still found records where all of the key figures = zero.  To us, this option did not seem to work. What are we missing? We are on Oracle 9.2.0.7.0 and BW 3.5 SP 17.
    Thanks, Peggy

    Haven't looked at zero elimination in detail in the latest releases to see if there have been changes, but here's my understanding based on the last time I dug into it -
    When you run compression with zero elimination, the process first excludes any individual F fact table rows with all KFs = 0; then, if any of the summarized F fact table rows has all KFs = 0, that row is excluded (you could have two facts with amounts that net to 0 in the same request, or in different requests where all other Dim IDs are equal) and not written to the E fact table.  Then, if an E fact table row is updated as a result of a new F fact table row being merged in, the process checks whether the updated row has all KF values = 0, and if so, deletes that updated row from the E fact table.
    I don't believe the compression process has ever gone through and read all existing E fact table rows and deleted the ones where all KFs = 0.
    Hope that made sense.  We use Oracle, and it is possible that SAP has done some things differently on different DBs.  It's also possible that the fiddling SAP has done over the last few years trying to use Oracle's MERGE functionality at different SP levels comes into play.
    Suggestions -
    I'm assuming that the E fact table holds a significant percentage of rows where all KFs = 0.  If it doesn't, it's not worth pursuing (a quick way to check is sketched below).
    Contact SAP; perhaps they have a standalone program that deletes E fact table rows where all KFs = 0.  It could be a nice tool to have.
    If they don't have one, consider writing your own program that deletes the rows in question.  You'll need to keep downstream impacts in mind, e.g. aggregates (would need to be refilled - probably not a big deal) and InfoProviders that receive data from this cube.
    Another option would be to clone the cube and datamart the data to the new cube.  Once in the new cube, compress with zero elimination - this should get rid of all your 0 KF rows.  Then delete the contents of the original cube and datamart the cloned cube's data back to the original cube.
    You might be able to accomplish this same process by datamarting the orig cube's data to itself, which might save some hoop jumping.  Then you would have to run a selective deletion to get rid of the orig data, or, if the datamarted data went through the PSA, you could just delete all the orig data from the cube, then load the datamarted data from the PSA.  Once the new request is loaded, compress with zero elimination.
    Now, if you happened to build all your reporting on this cube from a MultiProvider on the cube rather than directly from the cube, you could just create a new cube, export the data to it, then swap the old and new cubes in the MultiProvider.  This is one of the benefits of always using a MultiProvider on top of a cube for reporting (an SAP and consultant recommended practice) - you can literally swap underlying cubes with no impact to the user base.
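    For that first check, a hedged query at the database level - the table and key figure column names here are placeholders; substitute your cube's E fact table (/BIC/E<cubename> for a custom cube) and its actual key figure columns:

        SELECT COUNT(*)
        FROM   "/BIC/EZMYCUBE"          -- placeholder E fact table name
        WHERE  "/BIC/ZAMOUNT"   = 0     -- placeholder key figure columns
        AND    "/BIC/ZQUANTITY" = 0;

    Compare that count against the total row count to decide whether the cleanup is worth the effort.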

  • Have an issue regarding library books. My ereader is validated with a Kobo account and an Adobe Digital Editions account, but I get an error: 'this document is protected by DRM and isn't available with your Adobe ID'.

    Have an issue regarding library books. My ereader is validated with a Kobo account and an Adobe Digital Editions account, but I get an error: 'this document is protected by DRM and isn't available with your Adobe ID'.

    Same problem for me. I am using ADE 3, as I don't think 4 can be used with Kobo. The book has been downloaded to the Kobo but it can't be read, as it is not authorised. Help please.

  • Problem with direction in a page(??????)

    Hi everybody,
    I have a problem with direction on some pages when I change the language to Arabic (a right-to-left language).
    Some pages, like the welcome page, are completely correct, but on pages that I built myself (adding portlets such as the advanced search portlet), the direction does not appear properly: the label appears on the left and the text box on the right, which is wrong.
    What can I do?
    Please answer me :(

    This is not the right forum for your question. Try to post it in the Oracle Application Server Portal forum, or contact Oracle Support.
    Peter

  • Two Cube dimensions with the same name.

    Hi
    If I have a cube dimension that has the same name as another cube dimension, will this be an issue when querying the cube through 1) Excel, 2) SSRS or Cognos reports?
    For instance: the database dimension DimOrganization exists in cube A, as does
    DimOrganization_Dep. Both dimensions have the same attribute names and the same hierarchy names.
    Their database names are different (object_ID and Object_name differ).
    In the cube I change the name of DimOrganization_Dep to DimOrganization.
    This seems to work when I query the cube in Excel; the MDX looks like this:
    SELECT NON EMPTY { [Measures].[AntalNotNull] } ON COLUMNS, NON EMPTY { ([Dim Organization].[Yes- No].[ Yes- No].ALLMEMBERS * [Dim Organization].[Yes- No].[Yes- No].ALLMEMBERS ) } DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME
    ON ROWS FROM [TEST] CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS
    Does anyone know if this kind of setup will result in problems when querying from SSRS reports or Cognos?
    And even though I get the right result in the cube (querying from Excel), might it just as well give me the wrong result the next time?
    Cheers
    /Martin

    Hi
    Thanks for the reply.
    I have tried it, and yes, I can change a cube dimension name to that of another cube dimension.
    For instance, if I have cube dimensions "DimA" and "DimB" to begin with, I can change the name of cube dimension "DimB" to "DimA". So now I have two cube dimensions with the same name.
    It works to deploy and process, and Excel somehow understands which one is which.
    Both are present in the same cube, as you can see from the MDX above.
    The tricky part is how the MDX returns the correct result when it looks like the above (and it does return the correct result).
    Cheers
    /Martin

  • TS1629 How do I use home sharing with direct TV / Verizon internet. I have turned on Home sharing on apple tv and in iTunes but can't see the library in apple tv. I can see the photo stream but thats it. Any Ideas????

    How do I use home sharing with direct TV / Verizon internet. I have turned on Home sharing on apple tv and in iTunes but can't see the library in apple tv. I can see the photo stream but thats it. Any Ideas????

    Go to Home Sharing on the Apple TV and type in your info as asked.
    Hope this helps.

  • Hello, i use premiere pro cc and i would like to upload my sequence in speedgrade with direct link but it's not available. I can see the option but it's not clickable, somebody knows why ?

    Hello, i use premiere pro cc and i would like to upload my sequence in speedgrade with direct link but it's not available. I can see the option but it's not clickable, somebody knows why ?

    Oh, I'm not sure they are on the same version; I downloaded SpeedGrade this morning and I didn't upgrade Premiere, so maybe that's it! Thanks, I will try it and let you know.

  • HT1277 I can send & receive email from my iPad, but for some reason I can no longer do this from my iMac. It keeps telling me that the password is not valid with the IMAP server. I keep entering the password but it won't accept it.

    I can send & receive email from my iPad, but for some reason I can no longer do this from my iMac. It keeps telling me that the password is not valid with the IMAP server. I keep entering the password but it won't accept it.

    Do you have one, two, or more entries in the left column of Mail.app for your mail server(s)?  That is, do you have your Mail.app set up with either a btinternet entry, a btyahoo entry, or both?  Or more?
    I'm guessing that you might have one account (btyahoo?) listed for incoming (IMAP server) mail, while the outbound (SMTP server) mail is configured and named btinternet.
    Based on what little I see posted, it looks like BT uses both btinternet and btyahoo, but I'm not exactly clear on how they have their stuff set up, and their web site gets helpful and tries to configure my mail for me — I don't immediately see a single web page with the mail server set-up details.  The BT email client set-up starts here.

  • Premiere Pro CC crashes after 25% rendering, has particular issues with directional blur effect

    I've only been using Premiere for a single project, as my university specified we should use it. I'm not used to such a complex programme, or even video editing to be honest, so bear with me. (I'm following the guide on what information to add to questions on this forum.)
    I'm using Premiere Pro CC with the 7.2.1 update; that's the most recent update.
    I'm on Windows 7, version 6.1.7601 Service Pack 1 Build 7601.
    I'm not actually sure what a codec is, but my source material is just one .jpg image; I put effects on it in one project and then imported it into another project.
    Here's the problem message:
    Problem Event Name:       APPCRASH
    Application Name:         Adobe Premiere Pro.exe
    Application Version:      7.2.1.4
    Application Timestamp:    52aed7f3
    Fault Module Name:        OpenCL.dll
    Fault Module Version:     1.2.11.0
    Fault Module Timestamp:   52a242a8
    Exception Code:           c0000005
    Exception Offset:         0000000000002b89
    OS Version:               6.1.7601.2.1.0.768.3
    Locale ID:                2057
    Additional Information 1: 5273
    Additional Information 2: 52739eafd9c00fbd41b66266050284b2
    Additional Information 3: 750c
    Additional Information 4: 750c2263c4fb3446348e7454d5714618
    Computer:
    Processor: Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz, 3501 MHz, 4 Core(s), 8 Logical Processor(s)
    I THINK I have an AMD Radeon HD 7870, which isn't supposed to be supported for GPU acceleration.
    Premiere crashes when it gets to the point in my sequence where I used the directional blur effect (I'm pretty sure that's when it crashes, anyway); it also can't export, I assume because it can't render.
    This is only about 4 seconds of video as well.
    I reaaaaally need it to work soon because I have university deadlines to meet and I need Premiere Pro to finish my work.
    Hope someone can give some advice.

    I tried it without the directional blur effect and it rendered fine; for some reason it just can't deal with directional blur...

  • Serial table scan with direct path read compared to db file scattered read

    Hi,
    The environment
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit
    8K block size
    db_file_multiblock_read_count is 128
    show sga
    Total System Global Area 1.6702E+10 bytes
    Fixed Size                  2219952 bytes
    Variable Size            7918846032 bytes
    Database Buffers         8724152320 bytes
    Redo Buffers               57090048 bytes
    16GB of SGA with 8GB of db buffer cache.
    -- database is built on Solid State Disks
    -- SQL trace and wait events
    DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true )
    -- The underlying table is called tdash. It has 1.7 Million rows based on data in all_objects. NO index
    TABLE_NAME                             Rows Table Size/MB      Used/MB    Free/MB
    TDASH                             1,729,204        15,242       15,186         56
    TABLE_NAME                     Allocated blocks Empty blocks Average space/KB Free list blocks
    TDASH                                 1,943,823        7,153              805                0
    Objectives
    To show that when serial scans are performed on a database built on Solid State Disks (SSD) rather than magnetic disks (HDD), the performance gain is far smaller than the gain that random reads with index scans get from SSD over HDD.
    Approach
    We want to read the first 100 rows of the tdash table randomly into the buffer, taking account of the wait events and wait times generated. The idea is that on SSD the wait times will be better than on HDD, but not by much, given the serial nature of table scans.
    The code used
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_with_tdash_ssdtester_noindex';
    DECLARE
            type array is table of tdash%ROWTYPE index by binary_integer;
            l_data array;
            l_rec tdash%rowtype;
    BEGIN
            SELECT
                    a.*
                    ,RPAD('*',4000,'*') AS PADDING1
                    ,RPAD('*',4000,'*') AS PADDING2
            BULK COLLECT INTO
            l_data
            FROM ALL_OBJECTS a;
            DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
            FOR rs IN 1 .. 100
            LOOP
                    BEGIN
                            SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
                    EXCEPTION
                      WHEN NO_DATA_FOUND THEN NULL;
                    END;
            END LOOP;
    END;
    /
    The server is rebooted prior to any tests.
    When run with defaults, the optimizer (although some attribute this to the execution engine) chooses direct path read into the PGA in preference to db file scattered read.
    With this choice it takes 6,520 seconds to complete the query. The results are shown below.
    SQL ID: 78kxqdhk1ubvq
    Plan Hash: 1148949653
    SELECT *
    FROM
    TDASH WHERE OBJECT_ID = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          2         47          0           0
    Execute    100      0.00       0.00          1         51          0           0
    Fetch      100     10.88    6519.89  194142802  194831012          0         100
    total      201     10.90    6519.90  194142805  194831110          0         100
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS FULL TDASH (cr=1948310 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   TABLE ACCESS   MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      Disk file operations I/O                        3        0.00          0.00
      db file sequential read                         2        0.00          0.00
      direct path read                          1517504        0.05       6199.93
      asynch descriptor resize                      196        0.00          0.00
    DECLARE
            type array is table of tdash%ROWTYPE index by binary_integer;
            l_data array;
            l_rec tdash%rowtype;
    BEGIN
            SELECT
                    a.*
                    ,RPAD('*',4000,'*') AS PADDING1
                    ,RPAD('*',4000,'*') AS PADDING2
            BULK COLLECT INTO
            l_data
            FROM ALL_OBJECTS a;
            DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
            FOR rs IN 1 .. 100
            LOOP
                    BEGIN
                            SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
                    EXCEPTION
                      WHEN NO_DATA_FOUND THEN NULL;
                    END;
            END LOOP;
    END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      3.84       4.03        320      48666          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      3.84       4.03        320      48666          0           1
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID: 9babjv8yq8ru3
    Plan Hash: 0
    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      2      3.84       4.03        320      48666          0           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.84       4.03        320      48666          0           2
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      log file sync                                   1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        9      0.01       0.00          2         47          0           0
    Execute    129      0.01       0.00          1         52          2           1
    Fetch      140     10.88    6519.89  194142805  194831110          0         130
    total      278     10.91    6519.91  194142808  194831209          2         131
    Misses in library cache during parse: 9
    Misses in library cache during execute: 8
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         5        0.00          0.00
      Disk file operations I/O                        3        0.00          0.00
      direct path read                          1517504        0.05       6199.93
      asynch descriptor resize                      196        0.00          0.00
      102  user  SQL statements in session.
       29  internal SQL statements in session.
      131  SQL statements in session.
        1  statement EXPLAINed in this session.
    Trace file: mydb_ora_16394_test_with_tdash_ssdtester_noindex.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
           1  session in tracefile.
         102  user  SQL statements in trace file.
          29  internal SQL statements in trace file.
         131  SQL statements in trace file.
          11  unique SQL statements in trace file.
           1  SQL statements EXPLAINed using schema:
               ssdtester.plan_table
                 Schema was specified.
                 Table was created.
                 Table was dropped.
    1531657  lines in trace file.
        6520  elapsed seconds in trace file.
    I then force the query not to use direct path read by invoking
    ALTER SESSION SET EVENTS '10949 trace name context forever, level 1';  -- no direct path read
    In this case the optimizer uses db file scattered read predominantly, and the query takes 4,299 seconds to finish, which is around 34% faster than using direct path read (the default).
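    As a quick sanity check that the event really switched the read path, you can compare the session's read statistics before and after the run (a standard check, not specific to this test; the statistic names come from v$statname):

        SELECT n.name, s.value
        FROM   v$mystat s
        JOIN   v$statname n ON n.statistic# = s.statistic#
        WHERE  n.name IN ('physical reads direct', 'physical reads cache');

    With direct path reads the first counter climbs; with db file scattered reads the second does.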
    The report is shown below
    SQL ID: 78kxqdhk1ubvq
    Plan Hash: 1148949653
    SELECT *
    FROM
    TDASH WHERE OBJECT_ID = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          2         47          0           0
    Execute    100      0.00       0.00          2         51          0           0
    Fetch      100    143.44    4298.87  110348670  194490912          0         100
    total      201    143.45    4298.88  110348674  194491010          0         100
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS FULL TDASH (cr=1944909 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   TABLE ACCESS   MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      Disk file operations I/O                        3        0.00          0.00
      db file sequential read                    129759        0.01         17.50
      db file scattered read                    1218651        0.05       3770.02
      latch: object queue header operation            2        0.00          0.00
    DECLARE
            type array is table of tdash%ROWTYPE index by binary_integer;
            l_data array;
            l_rec tdash%rowtype;
    BEGIN
            SELECT
                    a.*
                    ,RPAD('*',4000,'*') AS PADDING1
                    ,RPAD('*',4000,'*') AS PADDING2
            BULK COLLECT INTO
            l_data
            FROM ALL_OBJECTS a;
            DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
            FOR rs IN 1 .. 100
            LOOP
                    BEGIN
                            SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
                    EXCEPTION
                      WHEN NO_DATA_FOUND THEN NULL;
                    END;
            END LOOP;
    END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      3.92       4.07        319      48625          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        1      3.92       4.07        319      48625          0           1
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID: 9babjv8yq8ru3
    Plan Hash: 0
    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.00          0          0          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 96  (SSDTESTER)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      2      3.92       4.07        319      48625          0           2
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.92       4.07        319      48625          0           2
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      log file sync                                   1        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        9      0.01       0.00          2         47          0           0
    Execute    129      0.00       0.00          2         52          2           1
    Fetch      140    143.44    4298.87  110348674  194491010          0         130
    total      278    143.46    4298.88  110348678  194491109          2         131
    Misses in library cache during parse: 9
    Misses in library cache during execute: 8
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                    129763        0.01         17.50
      Disk file operations I/O                        3        0.00          0.00
      db file scattered read                    1218651        0.05       3770.02
      latch: object queue header operation            2        0.00          0.00
      102  user  SQL statements in session.
       29  internal SQL statements in session.
      131  SQL statements in session.
        1  statement EXPLAINed in this session.
    Trace file: mydb_ora_26796_test_with_tdash_ssdtester_noindex_NDPR.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
           1  session in tracefile.
         102  user  SQL statements in trace file.
          29  internal SQL statements in trace file.
         131  SQL statements in trace file.
          11  unique SQL statements in trace file.
           1  SQL statements EXPLAINed using schema:
               ssdtester.plan_table
                 Schema was specified.
                 Table was created.
                 Table was dropped.
    1357958  lines in trace file.
        4299  elapsed seconds in trace file.

I note that there are 1,517,504 direct path read waits with a total wait time of nearly 6,200 seconds. In comparison, without direct path read there are 1,218,651 db file scattered read waits with a total wait time of 3,770 seconds. My understanding is that direct path read can use single- or multi-block reads into the PGA, whereas db file scattered read does multi-block reads into multiple discontiguous SGA buffers. So is it possible, given the higher number of direct path waits, that Oracle cannot always do multi-block reads (into contiguous buffers within the PGA) and hence has to revert to single-block reads, which results in more calls and more waits?
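One way to sanity-check the per-I/O cost of each access path outside of tkprof, assuming access to the v$ views, would be something along these lines (instance-wide figures; v$session_event gives the per-session equivalent):

    -- average wait per I/O for the three read paths
    SELECT event,
           total_waits,
           ROUND(time_waited_micro / 1e6, 2) AS total_waited_sec,
           ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 3) AS avg_wait_ms
    FROM   v$system_event
    WHERE  event IN ('direct path read', 'db file scattered read', 'db file sequential read');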
Appreciate any advice, and apologies for being long-winded.
    Thanks,
    Mich

    Hi Charles,
I am running your tests on the t1 table using my own server.

Just to clarify, my environment is as follows:

I did the whole of this test on my server, which has an Intel Core i7-980 hex-core processor with 24GB of RAM and a 1TB SATA II HDD for test/scratch, backup and archive. The operating system is RHEL 5.2 64-bit, installed on a 120GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive.

The Oracle version installed was 11g Enterprise Edition Release 11.2.0.1.0 - 64bit. The binaries were created on the HDD. Oracle itself was configured with 16GB of SGA, of which 7.5GB was allocated to the Variable Size and 8GB to Database Buffers.
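As an aside, the Variable Size / Database Buffers split quoted above is the breakdown as SQL*Plus reports it; assuming a privileged session, it can be checked with:

    SHOW SGA
    -- or, equivalently:
    SELECT name, ROUND(value/1024/1024) AS mb FROM v$sga;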
For the Oracle tablespaces, including SYS, SYSTEM, SYSAUX, TEMPORARY, UNDO and the redo logs, I used file systems on a 240GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive. With 4K Random Read at 53,500 IOPS and 4K Random Write at 56,000 IOPS (manufacturer's figures), this drive is probably one of the fastest commodity SSDs using NAND flash memory with Multi-Level Cells (MLC). Now, my T1 table was created as per your script and has the following rows and blocks (8K block size):
    SELECT
      NUM_ROWS,
      BLOCKS
    FROM
      USER_TABLES
    WHERE
      TABLE_NAME='T1';
      NUM_ROWS     BLOCKS
      12000000     178952which is pretty identical to yours.
Then I ran the query as below:
    set timing on
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_bed_T1';
    ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
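-- (10046 level 8 = SQL trace with wait events; level 12 would also capture bind values)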
    SELECT
            COUNT(*)
    FROM
            T1
    WHERE
            RN=1;
    which gives
      COUNT(*)
         60000
    Elapsed: 00:00:05.29
    tkprof output shows
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.02       5.28     178292     178299          0           1
    total        4      0.02       5.28     178292     178299          0           1
    Compared to yours:
    Fetch        2      0.60       4.10     178493     178498          0           1
It appears to me that my CPU utilisation is an order of magnitude better, but my elapsed time is worse!

Now, the way I see it, elapsed time = CPU time + wait time. Further down I have
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=178299 pr=178292 pw=0 time=0 us)
      60000   TABLE ACCESS FULL T1 (cr=178299 pr=178292 pw=0 time=42216 us cost=48697 size=240000 card=60000)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
      60000    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T1' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       3        0.00          0.00
      SQL*Net message from client                     3        0.00          0.00
      Disk file operations I/O                        3        0.00          0.00
      direct path read                             1405        0.00          4.68
    Your direct path reads are
  direct path read                             1404        0.01          3.40

which indicates to me that you have faster disks than mine, whereas it sounds like my CPU is faster than yours.
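Putting numbers on the elapsed = CPU + wait view for my direct path run: 0.02 sec of CPU plus 4.68 sec of direct path read waits comes to about 4.70 sec, against a measured elapsed time of 5.28 sec, leaving roughly 0.6 sec unaccounted for (presumably CPU run-queue time or uninstrumented gaps).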
    With db file scattered read I get
    Elapsed: 00:00:06.95
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      1.22       6.93     178293     178315          0           1
    total        4      1.22       6.94     178293     178315          0           1
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=178315 pr=178293 pw=0 time=0 us)
      60000   TABLE ACCESS FULL T1 (cr=178315 pr=178293 pw=0 time=41832 us cost=48697 size=240000 card=60000)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
      60000    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'T1' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      Disk file operations I/O                        3        0.00          0.00
      db file sequential read                         1        0.00          0.00
      db file scattered read                       1414        0.00          5.36
      SQL*Net message from client                     2        0.00          0.00
    compared to your
  db file scattered read                       1415        0.00          4.16

On the face of it, with this test mine shows roughly a 21% improvement with direct path read compared to db file scattered read. So now I can go back and revisit my original test results.
    First default with direct path read
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          2         47          0           0
    Execute    100      0.00       0.00          1         51          0           0
    Fetch      100     10.88    6519.89  194142802  194831012          0         100
    total      201     10.90    6519.90  194142805  194831110          0         100
    CPU ~ 11 sec, elapsed ~ 6520 sec
    wait stats
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      direct path read                          1517504        0.05       6199.93
roughly 0.004 sec for each I/O.

Now with db file scattered read I get
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          2         47          0           0
    Execute    100      0.00       0.00          2         51          0           0
    Fetch      100    143.44    4298.87  110348670  194490912          0         100
    total      201    143.45    4298.88  110348674  194491010          0         100
    CPU ~ 143 sec, elapsed ~ 4299 sec
    and waits:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                    129759        0.01         17.50
      db file scattered read                    1218651        0.05       3770.02
roughly 17.5/129759 = 0.00013 sec for each single-block I/O and 3770.02/1218651 = 0.0031 sec for each multi-block I/O.

Now my theory is that the improvement comes from the large buffer cache (8,320MB) inducing Oracle to do some read-aheads (asynchronous pre-fetch). Read-aheads are like quasi-logical I/Os, and they will be cheaper than true physical I/Os. So when there is a large buffer cache and read-aheads can be done, is using the buffer cache a better choice than the PGA?
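If read-ahead is really what is happening, it ought to show up in the instance statistics; a minimal sketch, assuming access to v$sysstat ('physical reads cache prefetch' counts blocks read ahead into the buffer cache):

    -- compare prefetched buffer cache reads against total and direct physical reads
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('physical reads',
                    'physical reads direct',
                    'physical reads cache prefetch');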
    Regards,
    Mich
