Direct Input Block Size
Hello,
We are using the Direct Input technology. The block size is 100 (the minimum value).
Sometimes there are 2 identical records in the input file. We do not want the second of the 2 records to be saved (the same behaviour as in dialogue mode). But when executing transaction KCLJ, Direct Input does not notice this when the 2 identical records are in the same block, because the first one has not been saved to the database yet.
How can we avoid this behaviour? Block size 100 cannot be decreased any further.
Greetings
Team LSV-GP
Hi, check the link
Re: Reduce the selection screen block size
Regards,
jaya
Similar Messages
-
Hi,
Is there any standard direct input programs for material master upload.
rgds
p.kp
Hi,
There are standard programs for the different modules.
But you need to populate the data and send it to the direct input program in a specific format; "format" in the sense that you need to populate NODATA ('/') for the fields which you don't want to update,
and then submit to the standard direct input program.
Check the code below.
*& Report Z__DIRECT__MAT___CREAT *
REPORT Z__DIRECT__MAT___CREAT .
INCLUDE Z_INCLUDE_MAT_CREAT.
* SELECTION SCREEN
SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-002.
PARAMETERS:P_FILE LIKE RLGRAP-FILENAME OBLIGATORY.
SELECTION-SCREEN END OF BLOCK B1 .
* AT SELECTION SCREEN
AT SELECTION-SCREEN ON VALUE-REQUEST FOR P_FILE.
CALL FUNCTION 'F4_FILENAME'
EXPORTING
PROGRAM_NAME = SYST-CPROG
DYNPRO_NUMBER = SYST-DYNNR
FIELD_NAME = 'P_FILE'
IMPORTING
FILE_NAME = P_FILE.
*START-OF-SELECTION
START-OF-SELECTION.
*Perform to upload the data from Presentation Server
V_FILE = P_FILE.
PERFORM UPLOAD_DATA.
*Transfer the Data to the structure BGR00 BMMH1 BMM00
PERFORM CONVERT_0000. " BGR00
PERFORM POPULATE_DATA CHANGING BMM00.
PERFORM POPULATE_DATA CHANGING BMMH1.
**Looping the flat file data and updating the structures BMM00 & BMMH1
LOOP AT MATERIAL_MASTER.
*Writing the Data to the Application Server in a proper Format
OPEN DATASET C_ZTEST FOR APPENDING IN TEXT MODE ENCODING DEFAULT.
*Transfer the Data to the structure BMM00
PERFORM CONVERT_0002. " BMM00
*Transfer the Data to the structure BMMh1
PERFORM CONVERT_0003. " BMMH1
**Closing the dataset after transfering the data
CLOSE DATASET C_ZTEST.
ENDLOOP. "Endloop of MATERIAL_MASTER
*END-OF-SELECTION
END-OF-SELECTION.
**Calling the Direct Input Program to Create the material
SUBMIT RMDATIND WITH %%%_R_P = C_X
WITH %%%_PHY = C_ZTEST
WITH SPERR = C_N.
The include for the above:
*& Include Z_INCLUDE_MAT_CREAT *
**Tables Used To Create the Material
TABLES:
BGR00,
BMM00,
BMMH1.
DATA:C_ZTEST(60) type c,
C_X TYPE C,
C_N TYPE C,
V_file type string.
C_ZTEST = 'Ztest.lsmw.conv'(001).
C_X = 'X'(003).
C_N = 'N'(004).
**FIELD SYMBOLS
FIELD-SYMBOLS: <F> .
**Structure to Hold the Flat File
data:
begin of LSMW_MATERIAL_MASTER,
MATNR(018) type C, "Material number
MTART(004) type C, "Material type
MBRSH(001) type C, "Industry sector
WERKS(004) type C, "Plant
MAKTX(040) type C, "Material description
DISMM(002) type C, "Extra Field Added In the Program as it is required
MEINS(003) type C, "Base unit of measure
MATKL(009) type C, "Material group
SPART(002) type C, "Division
LABOR(003) type C, "Lab/office
PRDHA(018) type C, "Product hierarchy
MSTAE(002) type C, "X-plant matl status
MTPOS_MARA(004) type C, "Gen item cat group
BRGEW(017) type C, "Gross weight
GEWEI(003) type C, "Weight unit
NTGEW(017) type C, "Net weight
GROES(032) type C, "Size/Dimensions
MAGRV(004) type C, "Matl grp pack matls
BISMT(018) type C, "Old material number
WRKST(048) type C, "Basic material
PROFL(003) type C, "DG indicator profile
KZUMW(001) type C, "Environmentally rlvt
BSTME(003) type C, "Order unit
VABME(001) type C,
EKGRP(003) type C, "Purchasing group
XCHPF(001) type C, "Batch management
EKWSL(004) type C, "Purchasing key value
WEBAZ(003) type C, "GR processing time
MFRPN(040) type C, "Manufacturer part number
MFRNR(010) type C, "Manufacturer number
VPRSV(001) type C, "Price control indicator
STPRS(015) type C, "Standard price
BWPRH(014) type C, "Commercial price1
end of LSMW_MATERIAL_MASTER.
**Internal Table to Hold the Flat File Data
DATA:
BEGIN OF MATERIAL_MASTER OCCURS 0.
INCLUDE STRUCTURE LSMW_MATERIAL_MASTER.
DATA:
END OF MATERIAL_MASTER.
*& Form upload_data From Presentation Server
FORM UPLOAD_DATA.
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
FILENAME = V_FILE
FILETYPE = 'ASC'
HAS_FIELD_SEPARATOR = 'X'
* HEADER_LENGTH = 0
* READ_BY_LINE = 'X'
* DAT_MODE = ' '
* CODEPAGE = ' '
* IGNORE_CERR = ABAP_TRUE
* REPLACEMENT = '#'
* IMPORTING
* FILELENGTH =
* HEADER =
TABLES
DATA_TAB = MATERIAL_MASTER
EXCEPTIONS
FILE_OPEN_ERROR = 1
FILE_READ_ERROR = 2
NO_BATCH = 3
GUI_REFUSE_FILETRANSFER = 4
INVALID_TYPE = 5
NO_AUTHORITY = 6
UNKNOWN_ERROR = 7
BAD_DATA_FORMAT = 8
HEADER_NOT_ALLOWED = 9
SEPARATOR_NOT_ALLOWED = 10
HEADER_TOO_LONG = 11
UNKNOWN_DP_ERROR = 12
ACCESS_DENIED = 13
DP_OUT_OF_MEMORY = 14
DISK_FULL = 15
DP_TIMEOUT = 16
OTHERS = 17.
IF SY-SUBRC = 0.
DELETE MATERIAL_MASTER INDEX 1.
ENDIF.
ENDFORM. "upload_data
*& Updating the BGR00 Structure
FORM CONVERT_0000. " BGR00
**Opening the Data Set to write the data to Application Server
OPEN DATASET C_ZTEST FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
MOVE: '0' TO BGR00-STYPE,
'ZTEST' TO BGR00-GROUP,
SY-MANDT TO BGR00-MANDT,
SY-UNAME TO BGR00-USNAM,
'X' TO BGR00-XKEEP,
'/' TO BGR00-NODATA.
*Transferring the Data To the Application Server File
TRANSFER BGR00 TO C_ZTEST.
*Closing the Dataset after Transfer
CLOSE DATASET C_ZTEST.
ENDFORM. "convert_0000
*& Updating BMM00 Structure
FORM CONVERT_0002. " BMM00
* --- BMM00-STYPE
BMM00-STYPE = '1'.
* --- BMM00-TCODE
BMM00-TCODE = 'MM01'.
* --- BMM00-MATNR
IF NOT MATERIAL_MASTER-MATNR IS INITIAL.
BMM00-MATNR = MATERIAL_MASTER-MATNR.
ELSE.
BMM00-MATNR = '/'.
ENDIF.
* --- BMM00-MBRSH
IF NOT MATERIAL_MASTER-MBRSH IS INITIAL.
BMM00-MBRSH = MATERIAL_MASTER-MBRSH.
ELSE.
BMM00-MBRSH = '/'.
ENDIF.
* --- BMM00-MTART
IF NOT MATERIAL_MASTER-MTART IS INITIAL.
BMM00-MTART = MATERIAL_MASTER-MTART.
ELSE.
BMM00-MTART = '/'.
ENDIF.
* --- BMM00-WERKS
IF NOT MATERIAL_MASTER-WERKS IS INITIAL.
BMM00-WERKS = MATERIAL_MASTER-WERKS.
ELSE.
BMM00-WERKS = '/'.
ENDIF.
BMM00-XEIB1 = 'X'. " BMM00-xeib1 = '/'.
BMM00-XEIE1 = 'X'. " BMM00-xeie1 = '/'.
BMM00-XEIK1 = 'X'. " BMM00-xeik1 = '/'.
**Transfer the data to the Application Server File
TRANSFER BMM00 TO C_ZTEST.
ENDFORM. "convert_0002
*& Updating BMMH1 Structure
FORM CONVERT_0003. " BMMH1
* --- BMMH1-STYPE
BMMH1-STYPE = '2'.
* --- BMMH1-MEINS
IF NOT MATERIAL_MASTER-MEINS IS INITIAL.
BMMH1-MEINS = MATERIAL_MASTER-MEINS.
ELSE.
BMMH1-MEINS = '/'.
ENDIF.
* --- BMMH1-MAKTX
IF NOT MATERIAL_MASTER-MAKTX IS INITIAL.
BMMH1-MAKTX = MATERIAL_MASTER-MAKTX.
ELSE.
BMMH1-MAKTX = '/'.
ENDIF.
* --- BMMH1-MATKL
IF NOT MATERIAL_MASTER-MATKL IS INITIAL.
BMMH1-MATKL = MATERIAL_MASTER-MATKL.
ELSE.
BMMH1-MATKL = '/'.
ENDIF.
* --- BMMH1-BISMT
IF NOT MATERIAL_MASTER-BISMT IS INITIAL.
BMMH1-BISMT = MATERIAL_MASTER-BISMT.
ELSE.
BMMH1-BISMT = '/'.
ENDIF.
* --- BMMH1-LABOR
IF NOT MATERIAL_MASTER-LABOR IS INITIAL.
BMMH1-LABOR = MATERIAL_MASTER-LABOR.
ELSE.
BMMH1-LABOR = '/'.
ENDIF.
* --- BMMH1-WRKST
IF NOT MATERIAL_MASTER-WRKST IS INITIAL.
BMMH1-WRKST = MATERIAL_MASTER-WRKST.
ELSE.
BMMH1-WRKST = '/'.
ENDIF.
* --- BMMH1-BRGEW
IF NOT MATERIAL_MASTER-BRGEW IS INITIAL.
BMMH1-BRGEW = MATERIAL_MASTER-BRGEW.
ELSE.
BMMH1-BRGEW = '/'.
ENDIF.
* --- BMMH1-NTGEW
IF NOT MATERIAL_MASTER-NTGEW IS INITIAL.
BMMH1-NTGEW = MATERIAL_MASTER-NTGEW.
ELSE.
BMMH1-NTGEW = '/'.
ENDIF.
* --- BMMH1-GEWEI
IF NOT MATERIAL_MASTER-GEWEI IS INITIAL.
BMMH1-GEWEI = MATERIAL_MASTER-GEWEI.
ELSE.
BMMH1-GEWEI = '/'.
ENDIF.
* --- BMMH1-GROES
IF NOT MATERIAL_MASTER-GROES IS INITIAL.
BMMH1-GROES = MATERIAL_MASTER-GROES.
ELSE.
BMMH1-GROES = '/'.
ENDIF.
* --- BMMH1-SPART
IF NOT MATERIAL_MASTER-SPART IS INITIAL.
BMMH1-SPART = MATERIAL_MASTER-SPART.
ELSE.
BMMH1-SPART = '/'.
ENDIF.
* --- BMMH1-BSTME
IF NOT MATERIAL_MASTER-BSTME IS INITIAL.
BMMH1-BSTME = MATERIAL_MASTER-BSTME.
ELSE.
BMMH1-BSTME = '/'.
ENDIF.
* --- BMMH1-EKWSL
IF NOT MATERIAL_MASTER-EKWSL IS INITIAL.
BMMH1-EKWSL = MATERIAL_MASTER-EKWSL.
ELSE.
BMMH1-EKWSL = '/'.
ENDIF.
* --- BMMH1-EKGRP
IF NOT MATERIAL_MASTER-EKGRP IS INITIAL.
BMMH1-EKGRP = MATERIAL_MASTER-EKGRP.
ELSE.
BMMH1-EKGRP = '/'.
ENDIF.
* --- BMMH1-XCHPF
IF NOT MATERIAL_MASTER-XCHPF IS INITIAL.
BMMH1-XCHPF = MATERIAL_MASTER-XCHPF.
ELSE.
BMMH1-XCHPF = '/'.
ENDIF.
* --- BMMH1-WEBAZ
IF NOT MATERIAL_MASTER-WEBAZ IS INITIAL.
BMMH1-WEBAZ = MATERIAL_MASTER-WEBAZ.
ELSE.
BMMH1-WEBAZ = '/'.
ENDIF.
IF NOT MATERIAL_MASTER-DISMM IS INITIAL.
BMMH1-DISMM = MATERIAL_MASTER-DISMM.
ELSE.
BMMH1-DISMM = '/'.
ENDIF.
* --- BMMH1-VPRSV
IF NOT MATERIAL_MASTER-VPRSV IS INITIAL.
BMMH1-VPRSV = MATERIAL_MASTER-VPRSV.
ELSE.
BMMH1-VPRSV = '/'.
ENDIF.
BMMH1-VERPR = '/'.
* --- BMMH1-STPRS
IF NOT MATERIAL_MASTER-STPRS IS INITIAL.
BMMH1-STPRS = MATERIAL_MASTER-STPRS.
ELSE.
BMMH1-STPRS = '/'.
ENDIF.
* --- BMMH1-BWPRH
IF NOT MATERIAL_MASTER-BWPRH IS INITIAL.
BMMH1-BWPRH = MATERIAL_MASTER-BWPRH.
ELSE.
BMMH1-BWPRH = '/'.
ENDIF.
* --- BMMH1-PRDHA
IF NOT MATERIAL_MASTER-PRDHA IS INITIAL.
BMMH1-PRDHA = MATERIAL_MASTER-PRDHA.
ELSE.
BMMH1-PRDHA = '/'.
ENDIF.
* --- BMMH1-VABME
IF NOT MATERIAL_MASTER-VABME IS INITIAL.
BMMH1-VABME = MATERIAL_MASTER-VABME.
ELSE.
BMMH1-VABME = '/'.
ENDIF.
* --- BMMH1-MAGRV
IF NOT MATERIAL_MASTER-MAGRV IS INITIAL.
BMMH1-MAGRV = MATERIAL_MASTER-MAGRV.
ELSE.
BMMH1-MAGRV = '/'.
ENDIF.
* --- BMMH1-KZUMW
IF NOT MATERIAL_MASTER-KZUMW IS INITIAL.
BMMH1-KZUMW = MATERIAL_MASTER-KZUMW.
ELSE.
BMMH1-KZUMW = '/'.
ENDIF.
* --- BMMH1-MFRNR
IF NOT MATERIAL_MASTER-MFRNR IS INITIAL.
BMMH1-MFRNR = MATERIAL_MASTER-MFRNR.
ELSE.
BMMH1-MFRNR = '/'.
ENDIF.
* --- BMMH1-MFRPN
IF NOT MATERIAL_MASTER-MFRPN IS INITIAL.
BMMH1-MFRPN = MATERIAL_MASTER-MFRPN.
ELSE.
BMMH1-MFRPN = '/'.
ENDIF.
BMMH1-MPROF = '/'.
* --- BMMH1-MSTAE
IF NOT MATERIAL_MASTER-MSTAE IS INITIAL.
BMMH1-MSTAE = MATERIAL_MASTER-MSTAE.
ELSE.
BMMH1-MSTAE = '/'.
ENDIF.
* --- BMMH1-PROFL
IF NOT MATERIAL_MASTER-PROFL IS INITIAL.
BMMH1-PROFL = MATERIAL_MASTER-PROFL.
ELSE.
BMMH1-PROFL = '/'.
ENDIF.
* --- BMMH1-MTPOS_MARA
IF NOT MATERIAL_MASTER-MTPOS_MARA IS INITIAL.
BMMH1-MTPOS_MARA = MATERIAL_MASTER-MTPOS_MARA.
ELSE.
BMMH1-MTPOS_MARA = '/'.
ENDIF.
**Transfer the Data to Application Server File
TRANSFER BMMH1 TO C_ZTEST.
ENDFORM. "convert_0003
*& Form POPULATE_DATA
* text
* <--P_BLF text
FORM POPULATE_DATA CHANGING P_BLF.
DATA: L_NUM TYPE I.
DO.
L_NUM = L_NUM + 1.
ASSIGN COMPONENT L_NUM OF STRUCTURE P_BLF TO <F>.
IF SY-SUBRC <> 0.
EXIT.
ENDIF.
MOVE BGR00-NODATA TO <F>.
ENDDO.
ENDFORM. " POPULATE_DATA -
Problem with direct input program while uploading data into database
TABLES: BGR00, " Session record
BMM00, " MM01/MM02 BTCI header data
BMMH1, " MM01/MM02 main data
BMMH2, " Country data (taxes)
BMMH3, " Forecast values
BMMH4, " Consumption values
BMMH5, " Short texts
BMMH6, " Units of measure
BMMH7, " Long texts
BMMH8. " Referential EANs
* Record types
DATA: MAPPENSATZ LIKE BMM00-STYPE VALUE '0',
KOPFSATZ LIKE BMM00-STYPE VALUE '1',
HAUPTSATZ LIKE BMM00-STYPE VALUE '2',
KUN_SATZ LIKE BMM00-STYPE VALUE 'Z',
LANDSATZ LIKE BMM00-STYPE VALUE '3',
PROGSATZ LIKE BMM00-STYPE VALUE '4',
VERBSATZ LIKE BMM00-STYPE VALUE '5',
KTEXTSATZ LIKE BMM00-STYPE VALUE '6',
MESATZ LIKE BMM00-STYPE VALUE '7',
TEXTSATZ LIKE BMM00-STYPE VALUE '8',
EANSATZ LIKE BMM00-STYPE VALUE '9'.
* Common data area for the externally called routines
* Initial structures
DATA: BEGIN OF COMMON PART RMMMBIMY.
DATA: BEGIN OF I_BMM00.
INCLUDE STRUCTURE BMM00. " Header data
DATA: END OF I_BMM00.
DATA: BEGIN OF I_BMMH1.
INCLUDE STRUCTURE BMMH1. " Main data
DATA: END OF I_BMMH1.
DATA: BEGIN OF I_BMMH2.
INCLUDE STRUCTURE BMMH2. " Country data
DATA: END OF I_BMMH2.
DATA: BEGIN OF I_BMMH3.
INCLUDE STRUCTURE BMMH3. " Forecast values
DATA: END OF I_BMMH3.
DATA: BEGIN OF I_BMMH4.
INCLUDE STRUCTURE BMMH4. " Consumption values
DATA: END OF I_BMMH4.
DATA: BEGIN OF I_BMMH5.
INCLUDE STRUCTURE BMMH5. " Short texts
DATA: END OF I_BMMH5.
DATA: BEGIN OF I_BMMH6.
INCLUDE STRUCTURE BMMH6. " Units of measure
DATA: END OF I_BMMH6.
DATA: BEGIN OF I_BMMH7.
INCLUDE STRUCTURE BMMH7. " Text lines
DATA: END OF I_BMMH7.
DATA: BEGIN OF I_BMMH8.
INCLUDE STRUCTURE BMMH8. " Referential EANs
DATA: END OF I_BMMH8.
DATA: END OF COMMON PART.
DATA: WA LIKE TEDATA-DATA.
* Single fields
DATA: GROUP_COUNT(6) TYPE C, " Number of sessions
TRANS_COUNT(6) TYPE C, " Old definition for rmmmbim0
SATZ_COUNT LIKE MUEB_REST-TRANC, " New transaction counter
H_IND_COUNT LIKE MUEB_REST-D_IND, " Index of the field to reset
SATZ2_COUNT(6) TYPE C. " Records per transaction without header record
DATA: XEOF(1) TYPE C, " X = end of file reached
XHAUPTSATZ_EXIST TYPE C, " X = main record exists for the header
NODATA(1) TYPE C. " No batch input for this field
* mk/15.08.94:
DATA: GROUP_OPEN(1) TYPE C. " X = session already open
*eject
* Constants
DATA: C_NODATA(1) TYPE C VALUE '/'. " Default for NODATA
DATA: MATNR_ERW LIKE MARA-MATNR VALUE '0 '.
DATA: MATNR_ERW_INT LIKE MARA-MATNR. "internal representation of '0 '
DATA: MATNR_LAST LIKE MARA-MATNR. "Material number
* mk/11.08.94 2.1H:
* If this flag is initial, the database updates will be done directly
* during background maintenance instead of using a separate update
* task. (no usage of this flag in dialogue mode!)
DATA: DBUPDATE_VB(1) VALUE ' '. "note 306628
data: matsync type mat_sync. "wk/99a no update in dialog if called
***INCLUDE ZMUSD070.
TABLES: MARA, "Material Master: General Data
MARC, "Material Master: C Segment
MARD, "Material Master: St Loc/Batch
MBEW, "Material Valuation
MVKE, "Material Master: Sales Data
MLGN, "Material Data per Whse Number
MLAN, "Tax Classification: Material
T001W, "Plants/Branches
TBICU.
DATA: BEGIN OF VALUTAB OCCURS 0.
INCLUDE STRUCTURE RSPARAMS.
DATA: END OF VALUTAB.
DATA: BEGIN OF VARTECH.
INCLUDE STRUCTURE VARID.
DATA: END OF VARTECH.
DATA: PARMS LIKE ZXXDCONV.
DATA: REC_COUNT TYPE I,
REC_COUNT_BAD TYPE I,
ZJOBID LIKE TBIZU-JOBID,
ZJOBCOUNT LIKE TBIZU-JOBCOUNT,
ZMATNR LIKE MARA-MATNR,
ZTEXT(80) TYPE C.
CONSTANTS: LIT_ZERO(18) TYPE C VALUE '000000000000000000',
LIT_CHAR TYPE C VALUE '_',
LIT_CREATE LIKE BMM00-TCODE VALUE 'MM01',
LIT_CHANGE LIKE BMM00-TCODE VALUE 'MM02',
LIT_CHECK(1) TYPE C VALUE 'X'.
DATA: BEGIN OF INP_DATA OCCURS 0,
MATNR(18) TYPE C, " Material code
UMREN(6) TYPE C, " Denominator
MEINH(3) TYPE C, " Alternate UOM
UMREZ(6) TYPE C, " Numerator
END OF INP_DATA.
*eject
SELECTION-SCREEN BEGIN OF BLOCK INOUT WITH FRAME TITLE TEXT-001.
SELECTION-SCREEN BEGIN OF LINE.
SELECTION-SCREEN COMMENT (13) TEXT-004.
PARAMETERS: P_PC RADIOBUTTON GROUP SRC DEFAULT 'X'.
SELECTION-SCREEN COMMENT (6) TEXT-005.
PARAMETERS: P_UNIX RADIOBUTTON GROUP SRC.
SELECTION-SCREEN COMMENT (6) TEXT-006.
PARAMETERS: P_DS_TYP LIKE ZXXDCONV-DS_TYP
DEFAULT 'ASC'.
SELECTION-SCREEN END OF LINE.
*SELECT-OPTIONS: S_PATH FOR PARMS-PATH
*                NO INTERVALS
*                LOWER CASE.
PARAMETERS: P_PATH TYPE RLGRAP-FILENAME.
PARAMETERS: P_HDRLIN LIKE ZXXDCONV-HDR_LINES
DEFAULT 0,
P_JOBNAM LIKE TBICU_S-JOBNAME
MEMORY ID BM1,
P_DI_EXE AS CHECKBOX
DEFAULT LIT_CHECK,
P_MAPPE LIKE BGR00-GROUP
DEFAULT 'MRP_UOM_LOAD'
NO-DISPLAY.
SELECTION-SCREEN END OF BLOCK INOUT.
*eject
AT SELECTION-SCREEN ON VALUE-REQUEST FOR P_PATH.
CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
EXPORTING
PROGRAM_NAME = SYST-REPID
DYNPRO_NUMBER = SYST-DYNNR
FIELD_NAME = 'P_PATH'
CHANGING
*FILE_NAME = S_PATH-LOW " superseded by P_PATH below
FILE_NAME = P_PATH
EXCEPTIONS
MASK_TOO_LONG = 1
OTHERS = 2.
AT SELECTION-SCREEN.
* Set up parameter record
PARMS-UNIX = P_UNIX.
PARMS-PC = P_PC.
PARMS-DS_TYP = P_DS_TYP.
PARMS-JOBNAME = P_JOBNAM.
PARMS-MAPPE = P_MAPPE.
PARMS-HDR_LINES = P_HDRLIN.
*eject
* Main Processing Routine *
START-OF-SELECTION.
* Initialization
PERFORM 0000_HOUSEKEEPING.
* Initialize transaction data in I_BMM00
PERFORM 0500_INIT_BMM00.
* Process the input file
* (the S_PATH loop was superseded by the single P_PATH parameter)
* SORT S_PATH BY SIGN OPTION LOW.
* MOVE S_PATH-LOW TO PARMS-PATH.
MOVE P_PATH TO PARMS-PATH.
CLEAR INP_DATA.
REFRESH INP_DATA.
* Read source data into internal table
PERFORM 1000_GET_SOURCE_DATA TABLES INP_DATA.
* Process each record in internal table
ZTEXT = TEXT-007.
ZTEXT+13 = PARMS-DS_NAME.
PERFORM 4000_PROGRESS_INDICATOR USING ZTEXT.
* Initialize transaction data in I_BMM00
PERFORM 0500_INIT_BMM00.
LOOP AT INP_DATA.
* Reset structures for each record
BMM00 = I_BMM00.
BMMH1 = I_BMMH1.
BMMH6 = I_BMMH6.
* Load structures with data
MOVE-CORRESPONDING INP_DATA TO BMM00.
PERFORM 2000_WRITE_OUTPUT USING BMM00.
MOVE-CORRESPONDING INP_DATA TO BMMH1.
PERFORM 2000_WRITE_OUTPUT USING BMMH1.
MOVE-CORRESPONDING INP_DATA TO BMMH6.
PERFORM 2000_WRITE_OUTPUT USING BMMH6.
REC_COUNT = REC_COUNT + 1.
ENDLOOP.
IF REC_COUNT GT 0
AND P_DI_EXE EQ LIT_CHECK.
PERFORM 3000_START_DI_JOB.
ENDIF.
WRITE: / TEXT-008,
REC_COUNT.
PERFORM 9000_END_OF_JOB.
*eject
* Include containing common routines used by direct input programs
INCLUDE ZMUSD071.
*eject
*  FORM 0500_INIT_BMM00 *
*  Initialize I_BMM00 with transaction code and views selected *
FORM 0500_INIT_BMM00.
***this change was done by samson**
if not inp_data[] is initial.
select single matnr from mara INTO ZMATNR where matnr = inp_data-matnr.
if sy-subrc = 0.
I_BMM00-TCODE = LIT_CHANGE.
* Basic data
I_BMM00-XEIK1 = LIT_CHECK.
else.
I_BMM00-TCODE = LIT_CREATE.
* Basic data
I_BMM00-XEIK1 = LIT_CHECK.
endif.
endif.
**this change above was done by samson**
* Transaction code (superseded by the check above; the unconditional
* assignment would overwrite the MM01/MM02 decision, so it is commented out)
* I_BMM00-TCODE = LIT_CHANGE.
* I_BMM00-XEIK1 = LIT_CHECK.
ENDFORM.
INCLUDE ZMUSD069.
*eject
*  FORM 0000_HOUSEKEEPING *
*  Initialization routines *
FORM 0000_HOUSEKEEPING.
PERFORM 0010_LDS_NAME.
PERFORM 0020_DS_NAME.
PERFORM 0030_OPEN_FILE.
PERFORM 0040_INIT_STRUCTS.
ENDFORM.
*eject
*  FORM 0010_LDS_NAME *
*  Obtain logical file name from DI job details *
FORM 0010_LDS_NAME.
* Check valid job name
SELECT SINGLE * FROM TBICU
WHERE JOBNAME EQ PARMS-JOBNAME.
IF SY-SUBRC EQ 0.
CALL FUNCTION 'RS_VARIANT_VALUES_TECH_DATA'
EXPORTING
REPORT = TBICU-REPNAME
VARIANT = TBICU-VARIANT
IMPORTING
TECHN_DATA = VARTECH
TABLES
VARIANT_VALUES = VALUTAB
EXCEPTIONS
VARIANT_NON_EXISTENT = 1
VARIANT_OBSOLETE = 2
OTHERS = 3.
IF SY-SUBRC EQ 0.
READ TABLE VALUTAB WITH KEY 'LDS_NAME'.
MOVE VALUTAB-LOW TO PARMS-LDS_NAME.
ELSE.
MESSAGE I001 WITH PARMS-JOBNAME.
MESSAGE A099.
ENDIF.
ELSE.
MESSAGE I000 WITH PARMS-JOBNAME.
MESSAGE A099.
ENDIF.
ENDFORM.
*eject
*  FORM 0040_INIT_STRUCTS *
*  Initialize structures for direct input records *
FORM 0040_INIT_STRUCTS.
* Start of standard SAP initialization from example program RMMMBIME
*------- Write session record -
CLEAR BGR00.
BGR00-STYPE = MAPPENSATZ.
BGR00-GROUP = PARMS-MAPPE.
BGR00-NODATA = C_NODATA.
BGR00-MANDT = SY-MANDT.
BGR00-USNAM = SY-UNAME.
BGR00-START = BGR00-NODATA.
BGR00-XKEEP = BGR00-NODATA.
PERFORM 2000_WRITE_OUTPUT USING BGR00.
*----- Initialize structures -
NODATA = BGR00-NODATA.
PERFORM INIT_STRUKTUREN_ERZEUGEN(RMMMBIMI) USING NODATA.
* End of standard SAP initialization from example program RMMMBIME
ENDFORM.
*eject.
*  FORM 3000_START_DI_JOB *
*  Start direct input job *
FORM 3000_START_DI_JOB.
ZTEXT = 'Starting '(021).
ZTEXT+9 = TBICU-JOBNAME.
PERFORM 4000_PROGRESS_INDICATOR USING ZTEXT.
CALL FUNCTION 'BI_START_JOB'
EXPORTING
JOBID = ' '
JOBTEXT = TBICU-JOBNAME
REPNAME = TBICU-REPNAME
SERVER = TBICU-EXECSERVER
VARIANT = TBICU-VARIANT
NEW_JOB = 'X'
CONTINUE_JOB = ' '
START_IMMEDIATE = 'X'
DO_NOT_PRINT = 'X'
USERNAME = SY-UNAME
IMPORTING
JOBID = ZJOBID
JOBCOUNT = ZJOBCOUNT
EXCEPTIONS
JOB_OPEN_FAILED = 1
JOB_CLOSE_FAILED = 2
JOB_SUBMIT_FAILED = 3
WRONG_PARAMETERS = 4
JOB_DOES_NOT_EXIST = 5
WRONG_STARTTIME_GIVEN = 6
JOB_NOT_RELEASED = 7
WRONG_VARIANT = 8
NO_AUTHORITY = 9
DIALOG_CANCELLED = 10
JOB_ALREADY_EXISTS = 11
PERIODIC_NOT_ALLOWED = 12
ERROR_NUMBER_GET_NEXT = 13
OTHERS = 14.
IF SY-SUBRC EQ 0.
WRITE: / 'Direct input job'(022), TBICU-JOBNAME, 'started'.
ELSE.
WRITE: / 'Direct input failed with return code'(023), SY-SUBRC.
ENDIF.
ENDFORM.
*eject
FORM 0020_DS_NAME.
CALL FUNCTION 'FILE_GET_NAME'
EXPORTING
CLIENT = SY-MANDT
LOGICAL_FILENAME = PARMS-LDS_NAME
OPERATING_SYSTEM = SY-OPSYS
IMPORTING
FILE_NAME = PARMS-DS_NAME
EXCEPTIONS
FILE_NOT_FOUND = 1
OTHERS = 2.
IF SY-SUBRC NE 0.
MESSAGE E002 WITH PARMS-LDS_NAME.
MESSAGE A099.
ENDIF.
ENDFORM.
*eject
*  FORM 0030_OPEN_FILE *
*  Open physical file for output *
FORM 0030_OPEN_FILE.
*OPEN DATASET PARMS-DS_NAME FOR OUTPUT IN TEXT MODE. "thg191105
OPEN DATASET PARMS-DS_NAME FOR OUTPUT IN TEXT MODE
encoding default. "thg191105
IF SY-SUBRC NE 0.
MESSAGE E003 WITH PARMS-DS_NAME.
MESSAGE A099.
ENDIF.
ENDFORM.
*eject
*  FORM 1000_GET_SOURCE_DATA *
*  Read source data into internal table *
*  --> INP_DATA " Name of internal table passed as parameter *
FORM 1000_GET_SOURCE_DATA TABLES INP_DATA.
CALL FUNCTION 'Z_FILE_UPLOAD'
EXPORTING
UNIX = PARMS-UNIX
PC = PARMS-PC
FILETYPE = PARMS-DS_TYP
FILENAME = PARMS-PATH
HDR_LINES = PARMS-HDR_LINES
TABLES
DATA_TAB = INP_DATA
EXCEPTIONS
CONVERSION_ERROR = 1
FILE_OPEN_ERROR = 2
FILE_READ_ERROR = 3
INVALID_TABLE_WIDTH = 4
INVALID_TYPE = 5
NO_BATCH = 6
UNKNOWN_ERROR = 7
INVALID_SOURCE = 8
OTHERS = 9.
ENDFORM.
*eject
*  FORM 2000_WRITE_OUTPUT *
*  Write record in standard SAP structure to UNIX file *
*  --> I_STRUCT " Name of record passed as parameter *
*FORM 2000_WRITE_OUTPUT USING I_STRUCT."SRY28NOV05
FORM 2000_WRITE_OUTPUT USING I_STRUCT TYPE ANY. "SRY28NOV05
TRANSFER I_STRUCT TO PARMS-DS_NAME.
IF SY-SUBRC NE 0.
MESSAGE E004 WITH PARMS-DS_NAME.
MESSAGE A099.
ENDIF.
ENDFORM.
*eject
*& Form 2100_WS_DOWNLOAD
*  text *
*  --> p1 text
*  <-- p2 text
FORM 2100_WS_DOWNLOAD TABLES INP_DATA.
DATA: FILENAME LIKE RLGRAP-FILENAME. "SRY28NOV05
DATA: W_FILENAME TYPE STRING. "SRY28NOV05
DATA: W_FTYP(10) TYPE C VALUE 'DAT'. "SRY28NOV05
MOVE PARMS-DS_NAME TO FILENAME. "SRY28NOV05
MOVE PARMS-DS_NAME TO W_FILENAME. "SRY28NOV05
*BEGIN OF BLOCK COMMENT BY SRY28NOV05
*CALL FUNCTION 'WS_DOWNLOAD'
* EXPORTING
* BIN_FILESIZE = ' '
* CODEPAGE = ' '
* FILENAME = FILENAME
* FILETYPE = 'DAT'
* MODE = ' '
* WK1_N_FORMAT = ' '
* WK1_N_SIZE = ' '
* WK1_T_FORMAT = ' '
* WK1_T_SIZE = ' '
* COL_SELECT = ' '
* COL_SELECTMASK = ' '
* IMPORTING
* FILELENGTH =
* TABLES
* DATA_TAB = INP_DATA
* FIELDNAMES =
* EXCEPTIONS
* FILE_OPEN_ERROR = 1
* FILE_WRITE_ERROR = 2
* INVALID_FILESIZE = 3
* INVALID_TABLE_WIDTH = 4
* INVALID_TYPE = 5
* NO_BATCH = 6
* UNKNOWN_ERROR = 7
* OTHERS = 8.
*END OF BLOCK COMMENT BY SRY28NOV05
*BEGIN OF BLOCK ADDED BY SRY28NOV05
CALL FUNCTION 'GUI_DOWNLOAD'
EXPORTING
FILENAME = W_FILENAME
FILETYPE = W_FTYP
TABLES
DATA_TAB = INP_DATA
EXCEPTIONS
FILE_WRITE_ERROR = 1
NO_BATCH = 2
GUI_REFUSE_FILETRANSFER = 3
INVALID_TYPE = 4
NO_AUTHORITY = 5
UNKNOWN_ERROR = 6
HEADER_NOT_ALLOWED = 7
SEPARATOR_NOT_ALLOWED = 8
FILESIZE_NOT_ALLOWED = 9
HEADER_TOO_LONG = 10
DP_ERROR_CREATE = 11
DP_ERROR_SEND = 12
DP_ERROR_WRITE = 13
UNKNOWN_DP_ERROR = 14
ACCESS_DENIED = 15
DP_OUT_OF_MEMORY = 16
DISK_FULL = 17
DP_TIMEOUT = 18
FILE_NOT_FOUND = 19
DATAPROVIDER_EXCEPTION = 20
CONTROL_FLUSH_ERROR = 21
OTHERS = 22.
IF SY-SUBRC NE 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
*END OF BLOCK ADDED BY SRY28NOV05
ENDFORM. " 2100_WS_DOWNLOAD
*eject
*  FORM 4000_PROGRESS_INDICATOR *
*  Write progress text to status bar *
*  --> TEXT " Text passed as parameter *
FORM 4000_PROGRESS_INDICATOR USING TEXT.
CALL FUNCTION 'SAPGUI_PROGRESS_INDICATOR'
EXPORTING
PERCENTAGE = 0
TEXT = TEXT
EXCEPTIONS
OTHERS = 1.
ENDFORM.
*eject.
*  FORM 9000_END_OF_JOB *
*  Close files on UNIX *
FORM 9000_END_OF_JOB.
CLOSE DATASET PARMS-DS_NAME.
ENDFORM.
Hi,
Thanks for your reply. This is my requirement:
My problem is that I am trying to upload data from a flat file which contains material number, denominator, actual UOM, and numerator field values.
This data needs to go through MM01 and MM02: if the material number is new, the material has to be created; if the material already exists, the UOM values have to be updated.
I get the data into my internal table INP_DATA, and from there I try to upload it to the database using the job name MRP_MATERIAL_MASTER_DATA_UPLOAD with the direct input program RMDATIND.
When I execute my program I get a success message saying all records were written from the flat file to the application server, and a "job started" message.
In SM37 the job shows as active, and after refreshing it shows as completed.
But in the job log I find that for existing materials it expects a material type, and for new materials it gives some gravity error.
Could you help me with this? It would be great.
Thanks & Regards,
RamNV -
ORA-00349: failure obtaining block size for '+Z' in Oracle XE
Hello,
I am attempting to move the online redo log files to a new flash recovery area location created on network drive "Z" (Oracle Database 10g Express Edition Release 10.2.0.1.0).
When I run @?/sqlplus/admin/movelogs; in SQL*Plus as a local sysdba, I get the following errors:
ERROR at line 1:
ORA-00349: failure obtaining block size for '+Z'
ORA-06512: at line 14
Please let me know how to go about resolving this issue.
Thank you.
See below for detail:
Connected.
SQL> @?/sqlplus/admin/movelogs;
SQL> Rem
SQL> Rem $Header: movelogs.sql 19-jan-2006.00:23:11 banand Exp $
SQL> Rem
SQL> Rem movelogs.sql
SQL> Rem
SQL> Rem Copyright (c) 2006, Oracle. All rights reserved.
SQL> Rem
SQL> Rem NAME
SQL> Rem movelogs.sql - move online logs to new Flash Recovery Area
SQL> Rem
SQL> Rem DESCRIPTION
SQL> Rem This script can be used to move online logs from old online log
SQL> Rem location to Flash Recovery Area. It assumes that the database
SQL> Rem instance is started with new Flash Recovery Area location.
SQL> Rem
SQL> Rem NOTES
SQL> Rem For use to rename online logs after moving Flash Recovery Area.
SQL> Rem The script can be executed using following command
SQL> Rem sqlplus '/ as sysdba' @movelogs.sql
SQL> Rem
SQL> Rem MODIFIED (MM/DD/YY)
SQL> Rem banand 01/19/06 - Created
SQL> Rem
SQL>
SQL> SET ECHO ON
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL> declare
2 cursor rlc is
3 select group# grp, thread# thr, bytes/1024 bytes_k
4 from v$log
5 order by 1;
6 stmt varchar2(2048);
7 swtstmt varchar2(1024) := 'alter system switch logfile';
8 ckpstmt varchar2(1024) := 'alter system checkpoint global';
9 begin
10 for rlcRec in rlc loop
11 stmt := 'alter database add logfile thread ' ||
12 rlcRec.thr || ' size ' ||
13 rlcRec.bytes_k || 'K';
14 execute immediate stmt;
15 begin
16 stmt := 'alter database drop logfile group ' || rlcRec.grp;
17 execute immediate stmt;
18 exception
19 when others then
20 execute immediate swtstmt;
21 execute immediate ckpstmt;
22 execute immediate stmt;
23 end;
24 execute immediate swtstmt;
25 end loop;
26 end;
27 /
declare
ERROR at line 1:
ORA-00349: failure obtaining block size for '+Z'
ORA-06512: at line 14
Can someone point me in the right direction as to what I may be doing wrong here - Thank you!
888442 wrote:
I am trying to drop and recreate ONLINE redo logs on my STANDBY DATABASE (11.1.0.7), but I am getting the below error.
On the primary we have made the changes, i.e. we added a new logfile with a bigger size and 3 members. When trying to do the same on the standby we are getting this error.
Our database is in Active DG read-only mode and the Oracle version is 11.1.0.7.
I have deferred the log apply and cancelled the managed recovery, and DG is in manual mode.
SQL> alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M;
alter database Add LOGFILE GROUP 4 ('+DT_DG1','+DT_DG2','+DT_DG3') SIZE 1024M
ERROR at line 1:
ORA-00349: failure obtaining block size for '+DT_DG1'
First, why are you dropping and recreating online redo log files on the standby?
On a standby, only standby redo log files will be used. I am not sure what you are trying to do.
Here is an example of how to create online redo log files. Check that the diskgroup is mounted and has sufficient space.
sys@ORCL> select member from v$logfile;
MEMBER
C:\ORACLE\ORADATA\ORCL\REDO03.LOG
C:\ORACLE\ORADATA\ORCL\REDO02.LOG
C:\ORACLE\ORADATA\ORCL\REDO01.LOG
sys@ORCL> alter database add logfile group 4 (
2 'C:\ORACLE\ORADATA\ORCL\redo_g01a.log',
3 'C:\ORACLE\ORADATA\ORCL\redo_g01b.log',
4 'C:\ORACLE\ORADATA\ORCL\redo_g01c.log') size 10m;
Database altered.
sys@ORCL> select member from v$logfile;
MEMBER
C:\ORACLE\ORADATA\ORCL\REDO03.LOG
C:\ORACLE\ORADATA\ORCL\REDO02.LOG
C:\ORACLE\ORADATA\ORCL\REDO01.LOG
C:\ORACLE\ORADATA\ORCL\REDO_G01A.LOG
C:\ORACLE\ORADATA\ORCL\REDO_G01B.LOG
C:\ORACLE\ORADATA\ORCL\REDO_G01C.LOG
6 rows selected.
sys@ORCL>
Your profile:-
888442
Newbie
Handle: 888442
Status Level: Newbie
Registered: Sep 29, 2011
Total Posts: 12
Total Questions: 8 (7 unresolved)
Please close your threads once they are answered. Keep the forum clean. -
DASYLAB QUERIES on Sampling Rate and Block Size
HELP!!!! I have been dwelling on DASYLab for a few weeks regarding certain problems I have faced, yet I haven't come to any conclusion. I hope that someone will be able to help. Lots of thanks!
1. I need more data points, so I increase the sampling rate (SR). When the sampling rate is increased, the block size (BS) increases correspondingly.
For a low sampling rate (SR < 100 Hz) and a block size of 1, the recorded time in DASYLab and the real experimental time are the same. But problems start when SR > 100 Hz with BS = 1: I realized that the recorded time in DASYLab differs from the real time. To solve the time difference problem, I decided to use the "AUTO" block size.
Qn1: Is there any way to solve the time difference problem for high SR?
Qn2: For auto block size, is the recorded result in DASYLab at a given moment the actual value, or has it been overwritten by the value from the previous block when AUTO BS is chosen?
2. I have tried getting the result both for BS = 1 and for auto BS. Regardless of the sampling rate, the values obtained with BS = 1 are always larger than those with auto block size. Qn1: Which is the actual result of the test?
Qn2: Is there a best combination of block size and sampling rate to use?
Hope someone is able to help me with the above problem.
Thanks-a-million!!!!!
Message Edited by JasTan on 03-24-2008 05:37 AM
Generally, the DASYLab sampling rate to block size ratio should be between 2:1 and 10:1.
If your sample rate is 1000, the block size should be 500 to no smaller than 100.
Very large block sizes that encompass more than 1 second worth of data often cause display delays that frustrate users.
Very small block sizes that have less than 10 ms of data cause DASYLab to bog down.
Sample rate of 100 samples / second and a block size of 1 is going to cause DASYLab to bog down.
There are many factors that contribute to performance, or lack thereof - the speed and on-board buffers of the data acquisition device; the speed, memory, and video capabilities of the computer; and the complexity of the worksheet. As a result, we cannot be more specific, other than to provide you with the rule of thumb above and suggest that you experiment with various settings, as you have done.
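As a rough illustration of the rule of thumb above (plain arithmetic, not a DASYLab API - the class and method names here are made up for the example):

```java
// Illustrative helper: given a sampling rate in Hz, compute the block-size
// range implied by the 2:1 to 10:1 sampling-rate-to-block-size rule of thumb.
public class BlockSizeRule {
    // Largest recommended block size: half the sampling rate (2:1 ratio).
    static int maxBlockSize(int sampleRateHz) {
        return sampleRateHz / 2;
    }

    // Smallest recommended block size: a tenth of the sampling rate (10:1 ratio).
    static int minBlockSize(int sampleRateHz) {
        return sampleRateHz / 10;
    }

    public static void main(String[] args) {
        int rate = 1000; // samples per second
        // For 1000 S/s: block size between 100 and 500, per the rule above.
        System.out.println(minBlockSize(rate) + ".." + maxBlockSize(rate)); // 100..500
    }
}
```

So at your 100 Hz rate the rule would suggest a block size of roughly 10 to 50, which is why BS = 1 bogs DASYLab down.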
Usually the only reason that you want a small block size is for closed loop control applications. My usual advice is that DASYLab control is around 1 to 10 samples/second. Much faster, and delays start to set in. If you need fast, tight control loops, there are better solutions that don't involve Microsoft Windows and DASYLab.
Q1 - without knowing more about your hardware, I cannot answer the question, but, see above. Keep the block size ratio between 2:1 and 10:1.
Q2 - without knowing more about your hardware and the driver, I'm not sure that I can fully answer the question. In general, the DASYLab driver instructs the DAQ device driver to program the DAQ device to a certain sampling rate and buffer size. The DASYLab driver then retrieves the data from the intermediate buffers and feeds it to the DASYLab A/D Input module. If the intermediate buffers are too small, or the sample rate exceeds the capability of the built-in buffers on the hardware, then data might be overwritten. You should have received warning or error messages from the driver.
Q3 - See above.
It may be that your hardware driver is not configured correctly. What DAQ device, driver, DASYLab version, and operating system are you using? How much memory do you have? How complex is your worksheet? Are you doing control?
Have you contacted your DASYLab reseller for more help? They should know your hardware better than I do.
- cj
Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab. -
Let's say that I have an RSA key pair that has been generated in a keystore using the keytool utility.
I am now accessing this key pair through some java code (using the Keystore class) and I want to encrypt/decrypt data using this public/private key.
In order to encrypt/decrypt arbitray length data, I need to know the maximum block size that I can encrypt/decrypt.
Based upon my experiment, this block size seems to be the key size in bits divided by 8, minus 11.
But how can I determine all that programmatically when the only thing that I have is the keystore?
I did not find a way to figure out the size of the key from the keystore (unless it can be computed from the RSA exponent or modulus, but this is where my knowledge of RSA keys stops), and I did not find a way to figure out where this "magic" number 11 is coming from.
I can always encrypt 1 byte of data and look at the size of the result. This will give me the blocksize and the key size by multiplying it by 8. But it means that I always need the public key around to compute this size (I cannot do it if I have only the private key).
And this is not helping much on the number 11 side.
Am I missing something obvious?
Thanks.
It is probably a bug. A naive implementation of RSA key generation that would exhibit this bug would work as follows (I'm ignoring the encrypt and decrypt exponents intentionally):
input: an RSA modulus bit size k, with k even
output: the RSA modulus n
Since k is even, let k = 2*l.
step 1: generate an l-bit prime p, 2^(l-1) < p < 2^l
step 2: generate another l-bit prime q, 2^(l-1) < q < 2^l
step 3: output n = p*q
Now the above might seem reasonable, but when you multiply the inequalities you get
2^(2l-2) < n < 2^(2l)
That lower bound means that n can be 1 bit smaller than you expect. The correct smallest lower bound for generating the primes p and q is (2^l) / sqrt(2), rounded up to the nearest integer.
I'll bet the IBM code implements something like the first algorithm. -
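Both halves of this thread can be checked in a few lines of Python (a sketch: the 11-byte figure is the minimum padding overhead of PKCS#1 v1.5 encryption, which matches the poster's observation; the primes below are small fixed values chosen only to make the bit counts visible):

```python
def max_pkcs1v15_block(n: int) -> int:
    """Maximum plaintext block for PKCS#1 v1.5 encryption, derived from
    the modulus alone (both the public and the private key carry n)."""
    key_bits = n.bit_length()       # key size in bits
    return key_bits // 8 - 11       # PKCS#1 v1.5 reserves at least 11 padding bytes

print(max_pkcs1v15_block(1 << 1023))  # a 1024-bit modulus allows 117-byte blocks

# The "one bit short" modulus from the naive generation algorithm:
p, q = 131, 137                  # two 8-bit primes, both in (2^7, 2^8)
n = p * q
print(n.bit_length())            # 15 bits, not 16: the product lost a bit
```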
LSMW for equipment creation-Standard Batch/Direct Input
Hi,
We developed an LSMW with standard batch / direct input method for creating equipment masters.
We used Object =0400 ( equipment ) and method = 0001 ( batch input)
We maintained source structures and source fields. In the source fields, we maintained only those fields that we need from table IBIPEQUI, in the order given in this structure.
Also we maintained field mapping and field conversion rules for the above source fields.
When we run LSMW step - Display converted data , we see that
Transactions Read: 1
Records Read: 1
Transactions Written: 0
Records Written: 0
Not sure what could have gone wrong?
Please provide some clues to the following questions.
1) Should the source structure be the same as the fields from structure IBIPEQUI, and should it include all of its fields?
2) Is field mapping required or not ?
3) We are getting an error - transaction is not supported in direct input mode.
Thanks in advance
Rgds,
Rajesh
1. Source fields are (same as the IBIPEQUI structure; the tab-delimited file matches these fields):
TCODE C(020) Transaction Code
RECORDNAME C(008) Record name
EQUNR C(018) Equipment
DATSL C(008) Valid On
EQTYP C(001) Equipment category
EQKTX C(040) EQKTX
BEGRU C(004) Authorization Group
EQART C(010) Technical obj. type
GROES C(018) Size/dimensions
INVNR C(025) Inventory number
BRGEW C(017) Gross Weight
GEWEI C(003) Weight unit
ELIEF C(010) Vendor
ANSDT C(008)
ANSWT C(017) Acquisition Value
WAERS C(005) Currency
HERST C(030) Manufacturer
HERLD C(003) Country of manufact.
BAUJJ C(004) Construction year
BAUMM C(002) Construction month
TYPBZ C(020) Model number
SERGE C(030) ManufSerialNumber
MAPAR C(030) ManufactPartNo.
GERNR C(018) Serial number
GWLEN C(008) Warranty end date
KUND1 C(010) Customer
KUND2 C(010) End customer
KUND3 C(010) Operator
SWERK C(004) Maintenance plant
STORT C(010) Location
MSGRP C(008) MSGRP
BEBER C(003) Plant section
ARBPL C(008) Work center
ABCKZ C(001) ABC indicator
EQFNR C(030) Sort field
BUKRS C(004) Company Code
ANLNR C(012) Asset Number
ANLUN C(004) ANLUN
GSBER C(004) Business Area
KOSTL C(010) Cost Center
PROID C(024) PROID
DAUFN C(012) Standing order
AUFNR C(012) Order
TIDNR C(025) Technical IdentNo.
SUBMT C(018) Construction type
HEQUI C(018) Superord. Equipment
HEQNR C(004) Position
EINZL C(001) Single installation
IWERK C(004) Planning plant
INGRP C(003) Planner group
GEWRK C(008) Main work center
WERGW C(004) Plant for WorkCenter
RBNR C(009) Catalog profile
TPLNR C(030) Functional Location
DISMANTLE C(001) DismIndic.
VKORG C(004) Sales Organization
VTWEG C(002) Distribution Channel
SPART C(002) Division
MATNR C(018) Material
SERNR C(018) BOM explosion number
WERK C(004) WERK
LAGER C(004) LAGER
CHARGE C(010) CHARGE
KUNDE C(010)
KZKBL C(001) Load records
PLANV C(003) PLANV
FGRU1 C(004) FGRU1
FGRU2 C(004) FGRU2
STEUF C(004) Control key
STEUF_REF C(001) STEUF_REF
KTSCH C(007) Standard text key
KTSCH_REF C(001) Std text referenced
EWFORM C(006) EWFORM
EWFORM_REF C(001) EWFORM_REF
BZOFFB C(002) Ref. date for start
BZOFFB_REF C(001) BZOFFB_REF
OFFSTB C(007) Offset to start
EHOFFB C(003) Unit
OFFSTB_REF C(001) OFFSTB_REF
BZOFFE C(002) Ref. date for finish
BZOFFE_REF C(001) BZOFFE_REF
OFFSTE C(007) Offset to finish
EHOFFE C(003) Unit
OFFSTE_REF C(001) OFFSTE_REF
WARPL C(012) Maintenance Plan
IMRC_POINT C(012) Measuring point
INDAT C(008) Inverse date
INTIM C(006) Processing time OC Workbe
INBDT C(008) Start-up date
GWLDT C(008) Guarantee
AULDT C(008) Delivery date
LIZNR C(020) License number
MGANR C(020) Master warranty
REFMA C(018) REFMA
VKBUR C(004) Sales Office
VKGRP C(003) Sales Group
WARR_INBD C(001) Inbound warranty
WAGET C(001) Warranty inheritance poss
GAERB C(001) Indicator: Pass on warran
ACT_CHANGE_AA C(001) ACT_CHANGE_AA
STRNO C(040) STRNO
DATLWB C(008) Date Last Goods Movmnt
UII C(072) UII
IUID_TYPE C(010) IUID Type
UII_PLANT C(004) Plant Responsible for UII
2. The source structure is assigned to target structure IBIPEQUI.
3. In the Assign Files step, all settings given above are correctly maintained.
4. Field mapping
TCODE Transaction Code
Rule : Default Settings
Code: IBIPEQUI-TCODE = 'IE01'.
RECORDNAME IBIP: Name of the Data Transfer Record
Rule : Default Settings
Code: IBIPEQUI-RECORDNAME = 'IBIPEQUI'.
EQUNR Equipment Number
Source: ZIE01_002_SOURCE-EQUNR (Equipment)
Rule : Transfer (MOVE)
Code: if not ZIE01_002_SOURCE-EQUNR is initial.
IBIPEQUI-EQUNR = ZIE01_002_SOURCE-EQUNR.
endif.
DATSL Date valid from
Source: ZIE01_002_SOURCE-DATSL (Valid On)
Rule : Transfer (MOVE)
Code: if not ZIE01_002_SOURCE-DATSL is initial.
IBIPEQUI-DATSL = ZIE01_002_SOURCE-DATSL.
endif.
EQTYP Equipment category
Source: ZIE01_002_SOURCE-EQTYP (Equipment category)
Rule : Transfer (MOVE)
Code: if not ZIE01_002_SOURCE-EQTYP is initial.
IBIPEQUI-EQTYP = ZIE01_002_SOURCE-EQTYP.
endif.
When I read data with 1 record uploaded (in the Assign Files step, I did not choose "Field names at start of file", and I saved the file in tab-delimited text format without field names), it shows:
Transactions Read: 2
Records Read: 2
Transactions Written: 2
Records Written: 2
I uploaded only one record, but it reads as 2 records, and I cannot figure out why.
Also, when I checked Display Converted Data, it shows 2 records.
First record shows
TCODE Transaction Code IE01
RECORDNAME IBIP: Name of the Data Transfer Record IBIPEQUI
EQUNR Equipment Number
DATSL Date valid from 05072010
EQTYP Equipment category H
EQKTX Description of technical object PNEUMATIC PIPE BENDER
BEGRU Technical object authorization group
EQART Type of Technical Object MECH-PRESS
GROES Size/dimension 1000X500X1500MM
INVNR Inventory number
BRGEW Gross Weight : IBIP Character Structure 50
GEWEI Weight Unit KG
ELIEF Vendor number
ANSDT Acquisition date
ANSWT Acquisition Value: IBIP Character Structure
All the fields following this , are blank.
2nd record shows
TCODE Transaction Code IE01
RECORDNAME IBIP: Name of the Data Transfer Record IBIPEQUI
EQUNR Equipment Number 2009
DATSL Date valid from
EQTYP Equipment category S
EQKTX Description of technical object 1006324
BEGRU Technical object authorization group
EQART Type of Technical Object
GROES Size/dimension 20100406
The uploaded values are jumbled between the 1st and 2nd records.
Hope to receive your valuable ideas for finding out the reason and corrective action required.
Rgds,
Rajesh
Edited by: Rajesh63 on Jul 6, 2010 10:37 PM -
Dear Experts,
How to know the block size of a table in DB2?
Currently, we are performing a 'move table' to a new tablespace. The process has spent over 30 hours on only one table, which has a large size and big blocks. Before doing the 'move table' job via DB6CONV, how do we find out what the block size is?
We have to know its block size in order to estimate the time the system will consume, so we can predict the downtime, because as we monitored, 1 block took approximately 20 to 30 seconds.
We use ECC6, DB2 ver 8.2 and AIX 5.3.
Need your quick response.
Thanks and Regards,
Rudi
Hi Diane,
which DB2 version do you use? Please post db2level.
Did you copy the right Version (32-bit x86 folder ntintel, 64-bit x64 ntamd64)?
Can you catalogue the stored procedure manually? Try:
db2 "CREATE PROCEDURE SAPTOOLS.ONLINE_TABLE_MOVE(
IN TABSCHEMA VARCHAR(128),
IN TABNAME VARCHAR(128),
IN DATA_TBSP VARCHAR(128),
IN INDEX_TBSP VARCHAR(128),
IN LOB_TBSP VARCHAR(128),
IN MDC_COLUMNS VARCHAR(32672),
IN PARTKEY_COLS VARCHAR(32672),
IN OPERATION VARCHAR(128) )
SPECIFIC ONLINE_TABLE_MOVE
DYNAMIC RESULT SETS 1
MODIFIES SQL DATA
NOT DETERMINISTIC
CALLED ON NULL INPUT
LANGUAGE C
EXTERNAL NAME 'online_table_move_sp!online_table_move'
FENCED THREADSAFE
PARAMETER STYLE SQL
PROGRAM TYPE SUB
DBINFO"
If you still get problems, please open a separate thread. This one is DB2 Block Size related.
regards Siegfried -
CRM - Modifying block size of Adapter Object (R3AC1)
Hi,
We have differences between SAP R/3 and CRM systems. Not all the business partners are in sync between the two systems. In order to bring the business partner tables in sync, we are executing R3AR2/R3AR4 for a range of 10,000 BPs at a time.
My question is: we are doing this with the block size of adapter object BUPA_MAIN set to 1. This is resulting in the creation of 1 queue per business partner. Is there any harm in keeping the block size at 1?
We attempted to do it with block size 100, but for some reason it's not working fine. With block size 100, not all the BPs from the input range are getting selected for replication.
Thanks,
Amol
Hi Amol - For the MATERIAL adapter we are using MARA-MTART criteria, but also want to apply MARC-WERKS.
In order to stop the entire MATERIAL record from being downloaded, we used BTE OPEN_FI_PERFORM_CRM0_200_P and wrote a function module to interrogate the MARC table for the MATNR.
If the MATNR was not extended to a specific PLANT, then the record was not downloaded to CRM.
However, I discovered that with the MATERIAL BLKSIZE = 100, there were some records that did not meet the criteria slipping through to CRM.
So I made the BLKSIZE = 1, and the correct filtering is occurring! I'm not sure why, but I suspect there is something in the CRS_SEND_TO_SERVER function module in ECC that is not looping properly. And it works just fine when there is just one record at a time.
Why my Input/output size of backup increased
I have the following RMAN backup script:
run {
backup as compressed backupset incremental level 1 cumulative device type disk tag 'Baan_bkup$LEVEL_0' database;
recover copy of database;
backup device type disk tag 'Baan_bkup$LEVEL_0' archivelog all not backed up;
backup archivelog until time 'sysdate-3' delete all input;
crosscheck backupset;
crosscheck archivelog all;
delete noprompt obsolete;
delete noprompt expired backup;
}
It has been running once daily for almost three weeks. For the first 9 days, the input/output sizes were fractional (e.g. 3G/88M); then it switched to ~16G/1.3G and stayed at that level. Because this is a test DB server, there is basically no activity on it except the maintenance activity by the DB itself. So what happened to cause the size change? The real backup sets are only maintained at the 3-day level, per the script. My database size is 16.88G, on 10g R2 on RHEL 3.
That is the behavior in 9i and 10gR1. In 10gR2 things change.
Unused Block Compression Of Datafile Backups to Backup Sets
When backing up datafiles into backup sets, RMAN does not back up the contents of data blocks that have never been allocated. (In previous releases, this behavior was referred to as NULL compression.)
RMAN also skips other datafile blocks that do not currently contain data, if all of the following conditions apply:
The COMPATIBLE initialization parameter is set to 10.2
There are currently no guaranteed restore points defined for the database
The datafile is locally managed
The datafile is being backed up to a backup set as part of a full backup or a level 0 incremental backup
The backup set is being created on disk.
http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta009.htm#RCMRF98765 -
How get os physical block size ?
man dd advises that the bs parameter be a multiple of the physical block size.
How can I get the physical block size on, for example, AIX, HP-UX, or SUSE Linux?
bs=BlockSize
Specifies both the input and output block size, superseding the ibs and obs flags. The block size values
specified with the bs flag must always be a multiple of the physical block size for the media being used.
Learning new things today, thanks guys!
$ printf "a" >1bytefile.txt && echo $(( 512 * $(du 1bytefile.txt | cut -f1) )) && rm 1bytefile.txt
1024
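The allocation size probed with du above can also be queried directly on POSIX systems; a minimal Python sketch using the statvfs call (here on the filesystem holding `/`; `f_frsize` is the fundamental block size, `f_bsize` the preferred I/O transfer size):

```python
import os

st = os.statvfs("/")  # statvfs(2) on the filesystem containing /
print("fundamental block size:", st.f_frsize)  # smallest allocation unit
print("preferred I/O size:", st.f_bsize)       # a good multiple for dd's bs=
```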
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.4.0 - Production on Tue Jul 24 16:15:04 2012
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SYS@TTST> select
2 max(l.lebsz) log_block_size
3 from
4 sys.x$kccle l
5 where
6 l.inst_id = userenv('Instance');
LOG_BLOCK_SIZE
1024
SYS@TTST> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$ /usr/sbin/fstyp -v /dev/vg01/lvol04
/dev/vg01/lvol04: Permission denied
$ su
Password:
# /usr/sbin/fstyp -v /dev/vg01/lvol04
vxfs
version: 5
f_bsize: 8192
f_frsize: 1024
f_blocks: 786432000
f_bfree: 87190098
f_bavail: 81740717
f_files: 22035604
f_ffree: 21797524
f_favail: 21797524
f_fsid: 1073807364
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 10
f_size: 786432000
# uname -r
B.11.23
# uname
HP-UX
More info over in the HP communities. -
Concerning oracle single block io call and os block size!!
Hi, all
The block size of my db(10gr2) is 8k, and
the db is on the raw device file system on an AIX machine.
OS block size is 512k.
How many blocks will be read from disk when a single Oracle I/O call occurs?
Which one is correct, 8K or 512K?
If the block size for a single I/O call depends on the OS block size (512K),
I think the OS block size needs to be tuned for the Oracle block size (8K).
If we use a raw device file system, does the OS-level block size have any meaning?
Thanks in advance.
Best Regards.
Hi,
Please refer to the following oracle doc:
http://docs.oracle.com/cd/B28359_01/server.111/b32009/appa_aix.htm
extract from it:
Setting the Database Block Size
You can configure Oracle Database block size for better Input-Output throughput. On AIX, you can set the value of the DB_BLOCK_SIZE initialization parameter to between 2 KB and 32 KB, with a default of 4 KB. If Oracle Database is installed on a journaled file system, then the block size should be a multiple of the file system block size (4 KB on JFS, 16 K to 1 MB on GPFS). For databases on raw partitions, Oracle Database block size is a multiple of the operating system physical block size (512 bytes on AIX).
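The multiple-of constraints in that excerpt can be expressed as a quick sanity check (a sketch only; the 2 KB to 32 KB range and the 512-byte AIX raw-partition block size come from the quoted documentation, and the function name is made up):

```python
def db_block_size_ok(db_block: int, fs_block: int = 512) -> bool:
    """True when an Oracle DB_BLOCK_SIZE falls in the AIX-supported
    2 KB..32 KB range and is a multiple of the underlying block size."""
    return 2048 <= db_block <= 32768 and db_block % fs_block == 0

print(db_block_size_ok(8192))        # 8K on a raw partition (512-byte blocks)
print(db_block_size_ok(8192, 4096))  # 8K on JFS (4 KB file system blocks)
```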
Thanks and Regards,
Raj K. -
Min, Max Block Size, Default Blocksize ??
Hi All,
Can anyone please explain the exact meaning of the Min Block Size and Max Block Size that we maintain under "Block size" in the DP/SNP Parallel Processing Profile?
Please let me know the application of the same. Why do we maintain this setting?
What is meant by a BLOCK SIZE?
Any input on the same is highly appreciated.
Thanks in Advance,
Prasad.
Hi Prasad,
Block Size is the number of parallel processes that are being executed in background during the application. This is normally a configuration activity to be configured in line with basis.
In Block size, we enter the number of objects to be processed per block by the CIF comparison/reconciliation during data selection in SAP APO or in the partner system.
If you increase the block size, the memory required also increases. This has a positive effect on performance. If processes are cancelled due to lack of memory, you can decrease the block size to save memory.
If you do not enter anything here, the system determines the block size dynamically from the number of objects that actually exist and the maximum number of work processes available.
Normally, when you execute a job in the background, it picks the application server automatically or uses a manually defined server. In parallel processing, one or more jobs of the same identity can be triggered at a time under this scenario by defining application servers. But too much parallel processing activity will affect performance.
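The dynamic determination mentioned above amounts to splitting the objects evenly across the available work processes; a sketch of that idea (illustrative only; the actual CIF formula is not documented in this thread):

```python
import math

def dynamic_block_size(num_objects: int, max_work_processes: int) -> int:
    """Divide the objects evenly over the available work processes."""
    return math.ceil(num_objects / max_work_processes)

print(dynamic_block_size(10000, 8))  # 1250 objects per block across 8 processes
```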
One needs to define the parallel processes also to control system behaviour. The Parallel processing profile is defined for parallel processing of background jobs. You then assign these profiles to variants in the applications.
Regards
R. Senthil Mareeswaran. -
CCPWriteDAQ Multibyte/Singlebyte block size
The CCPWriteDAQ.vi relies on the ECU supporting multibyte block sizes. What option is there for ECUs that only support a single-byte block size?
E.g. a signal list input for CCPWriteDAQ.vi contains a signal defined with a size of 4 bytes. Previously, using my own CCP functions written with the Frame APIs, I would have to call CCPSetDAQPtr then CCPWriteDAQ four times with the block size always set to 1. Now I call them only once, where CCPWriteDAQ sets the block size to four. The CCP handler in my ECU does not support this; is there a way around this problem without modifying the CCP handler in my ECU?
I don't think it is possible to deliberately modify the signal list, as this would mess up the decoding of the signal later when calling the CAN read function.
With the Frame APIs, I do not believe I coded the application in the most efficient way, particularly when decoding the messages from the CAN frames relating to the DAQ DTOs. I had struggled using many while loops, queues and occurrences to split all the DAQ DTOs into 10 groups (since I have 10 events) and then decoding the messages in each group, which are encompassed across 3 CAN frames (where a message can span 2 CAN frames, for more efficient use of available memory); thus I have to wait for transmission of all 3 CAN frames before decoding the messages. The decoding of messages is the main issue I have, really.
This gave me some problems with buffer overload, queue overload although I did look into some support papers regarding memory management and performance issues. I have not really looked into using C functions and have always coded in LabVIEW blocks, do you think this would improve performance of some of my code?
One other issue I wanted to check: I assume the CCP functions do not span messages across CAN frames as mentioned above. For example, an ODT table will have 7 blocks of 1-byte data; if you set the pointer to this ODT table and write data such that it exceeds the 7 bytes, I assume it will give an error.
As a comparison, we have always used Vector CANape for ECU data acquisition and calibration which we have no issues, but for my purpose now, I need to use LabVIEW to perform in almost the same way.
I don't have the CCP version which we are using at the moment.
Biker 2000, I am out of the country early next month but let me know what your email address is and I can contact you. Thanks. -
Finding appropriate block size?
Hi All,
I believe this might be a basic question: how do you find the appropriate block size when building a database for a specific application?
I have always seen the default 8K block size used everywhere (around 300-350 databases I have seen so far). But why, and how do they estimate this block size blindly before creating a production database?
Also, in the same way, how are memory settings finalized before creating a database?
-Yasser
Yasser,
I have been very fortunate to buy and read several very high quality Oracle books which not only correctly state the way something works, but also manage to provide a logical, reasoned explanation for why things happen as they do, when it is appropriate, and when it is not. While not the first book I read on the topic of Oracle, the book "Oracle Performance Tuning 101" by Gaja Vaidyanatha marked the start of logical reasoning in performance tuning exercises for me. A couple of years later I learned that Gaja was a member of the Oaktable Network. I read the book "Expert Oracle One on One" by Tom Kyte and was impressed with the test cases presented in the book, which help readers understand the logic of why Oracle behaves as it does, and I also enjoyed the performance tuning stories in the book. A couple of years later I found Tom Kyte's "Expert Oracle Database Architecture" book at a book store and bought it without a second thought; some repetition from his previous book, fewer performance tuning stories, but a lot of great, logically reasoned information. A couple of years later I learned that Tom was a member of the Oaktable Network. I read the book "Optimizing Oracle Performance" by Cary Millsap, a book that once again marked a distinct turning point in the method I used for performance tuning; the logic made all of the book easy to understand. A couple of years later I learned that Cary was a member of the Oaktable Network. I read the book "Cost-Based Oracle Fundamentals" by Jonathan Lewis, a book that by its title seemed to be too much of a beginner's book until I read the review by Tom Kyte. Needless to say, the book also marked a turning point in the way I approach problem solving through logical reasoning, asking and answering the question "What is Oracle thinking?". Jonathan is a member of the Oaktable Network; a pattern is starting to develop here. At this point I started looking for anything written in book or blog form by members of the Oaktable Network.
I found Richard Foote's blog, which somehow managed to make Oracle indexes interesting for me, probably through the use of logic and test cases which allowed me to reproduce what I was reading about. I found Jonathan Lewis' blog, which covers so many interesting topics about Oracle, all of which leverage logical approaches to aid understanding. I also found the blogs of Kevin Closson, Greg Rahn, Tanel Poder, and a number of other members of the Oaktable Network. The draw to the performance tuning side of Oracle administration was primarily a search for the elusive condition known as Compulsive Tuning Disorder, a term coined in the book written by Gaja. There were, of course, many other books which contributed to my knowledge; I reviewed at least 8 of the Oracle-related books on the amazon.com website.
Motivation… it is interesting to read what people write about Oracle. Sometimes what is written directly contradicts what one knows about Oracle. In such cases, it may be a fun exercise to determine if what was written is correct (and why it is logically correct), or why it is wrong (and why it is logically incorrect). Take, for example, the “Top 5 Timed Events” seen in this book (no, I have not read this book, I bumped into it a couple times when performing Google searches):
http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA17#v=onepage&q=&f=false
The text of the book states that the “Top 5 Timed Events” shown indicates a CPU Constrained Database (side note: if a database is a series of files stored physically on a disk, can it ever be CPU constrained?). From the “Top 5 Timed Events”, we see that there were 4,851 waits on the CPU for a total time of 4,042 seconds, and this represented 55.76% of the wait time. Someone reading the book might be left thinking one of:
* “That obviously means that the CPU is overwhelmed!”
* “Wow 4,851 wait events on the CPU, that sure is a lot!”
* “Wow wait events on the CPU, I didn’t know that was possible?”
* “Hey, something is wrong with this ‘Top 5 Timed Events’ output as Oracle never reports the number of waits on CPU.”
* “Something is really wrong with this ‘Top 5 Timed Events’ output, as we do not know the number of CPUs in the server (what if there are 32 CPUs), the time range of the statistics, and why the average time for a single block read is more than a second!”
A Google search then might take place to determine if anyone else reports the number of waits for the CPU in an Oracle instance:
http://www.google.com/search?num=100&q=Event+Waits+Time+CPU+time+4%2C851+4%2C042
So, it must be correct… or is it? What does the documentation show?
Another page from the same book:
http://books.google.com/books?id=bxHDtttb0ZAC&pg=PA28#v=onepage&q=&f=false
Shows the command:
alter system set optimizer_index_cost_adj=20 scope = pfile;
Someone reading the book might be left thinking one of:
* That looks like an easy to implement solution.
* I thought that it was only possible to alter parameters in the spfile with an ALTER SYSTEM command, neat.
* That command will never execute, and should return an “ORA-00922: missing or invalid option” error.
* Why would the author suggest a value of 20 for OPTIMIZER_INDEX_COST_ADJ and not 1, 5, 10, 12, 50, or 100? Are there any side effects? Why isn’t the author recommending the use of system (CPU) statistics to correct the cost of full table scans?
A Google search finds this book (I have not read this book either, just bumped into it during a search) by a different author which also shows that it is possible to alter the pfile through an ALTER SYSTEM command:
http://books.google.com/books?id=ufz5-hXw2_UC&pg=PA158#v=onepage&q=&f=false
So, it must be correct… or is it? What does the documentation show?
Regarding the question of updating my knowledge, I read a lot of books on a wide range of subjects including Oracle, programming, Windows and Linux administration, ERP systems, Microsoft Exchange, telephone systems, etc. I also try to follow Oracle blogs and answer questions in this and other forums (there are a lot of very smart people out there contributing to forums, and I feel fortunate to learn from those people). As long as the book or blog offers logical reasoning, it is fairly easy to tie new material into one’s pre-existing knowledge.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.