How to export BW 3.5 InfoCube data to flat files
Hi All,
We are on BW 3.5. The user wants InfoCube data dumped into Excel sheets. To achieve this we created an InfoSpoke in BW, but when I execute it, it short-dumps with a memory error. We checked with the Basis team, who say they cannot increase the memory any further. On top of that, this InfoCube contains a huge amount of historical data: nearly 50 million (5 crore) records for just four years (2004 to 2008). With this in mind, we tried adding selections at the InfoSpoke level on the available time characteristics (0CALYEAR, 0CALQUARTER, 0CALMONTH), but it throws the same error.
We also tried changing the InfoSpoke destination (DB table, file, application server), but every case ends in the same short dump. Can anybody suggest how to download this much data from the InfoCube? The issue has become top priority, as we need to complete this task within a week.
Regards,
Sreehari.
Your problem is too much data.
The application servers running BW have far more memory available than most PCs. If the InfoSpoke process is exhausting memory on the application server, then any Excel file it produced would be so huge that no PC could load it, and in any case you would exceed Excel's row and column limits.
Even if Excel could hold such a volume of data, and you managed to get it all out of the cube, analysing it effectively in Excel would be impossible. That is why BW (and data warehouses in general) exist: to handle vast amounts of data effectively.
It seems to me that your user's requirement is unworkable. They do not really have a requirement; they only think they do. Push back: the user clearly has no idea what he or she is actually asking for.
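To make the scale concrete, here is a back-of-the-envelope sketch (Python for illustration; the row limits are Excel's documented maxima, everything else, including the chunking helper, is illustrative):

```python
# Sketch: how many files would 50 million rows need, plus a generic
# row-chunking generator for splitting an export into file-sized pieces.
import math

EXCEL_2003_ROWS = 65_536      # .xls worksheet limit (BW 3.5 era Excel)
EXCEL_XLSX_ROWS = 1_048_576   # modern .xlsx worksheet limit

def files_needed(total_rows: int, rows_per_file: int) -> int:
    return math.ceil(total_rows / rows_per_file)

def chunked(rows, size):
    """Yield successive chunks of at most `size` rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

print(files_needed(50_000_000, EXCEL_2003_ROWS))   # 763 .xls files
print(files_needed(50_000_000, EXCEL_XLSX_ROWS))   # 48 .xlsx files
```

Even in the best case, the user would be juggling dozens of maximum-size spreadsheets, which supports the point above.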
Similar Messages
-
How to skip the first record while inserting data from a flat file into a BW system
Hi Experts,
In my project we have to upload a flat file into a BW system. I have written a program and it works fine.
Now we have another requirement: the flat file will have a header record (first row) that must be skipped during upload. How can I do this?
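For reference, one common fix (not shown in this thread) is simply to drop the first row after the upload; in ABAP that is a single `DELETE test_upload INDEX 1.` right after GUI_UPLOAD. The same idea, sketched in Python since the logic is language-independent (file contents are invented):

```python
# Sketch of the "skip the header record" logic for a tab-delimited file.
def read_tab_file_skip_header(lines):
    """Parse tab-delimited lines, dropping the first (header) row."""
    records = []
    for line in lines[1:]:          # index 1 onward: skip the header row
        records.append(line.rstrip("\n").split("\t"))
    return records

sample = ["KEY\tTEXT\n", "A001\tFirst\n", "A002\tSecond\n"]
print(read_tab_file_skip_header(sample))
# [['A001', 'First'], ['A002', 'Second']]
```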
The code is as below:
FORM upload1.
DATA : wf_title TYPE string,
lt_filetab TYPE filetable,
l_separator TYPE char01,
l_action TYPE i,
l_count TYPE i,
ls_filetab TYPE file_table,
wf_delemt TYPE rollname,
wa_fieldcat TYPE lvc_s_fcat,
tb_fieldcat TYPE lvc_t_fcat,
rows_read TYPE i,
p_error TYPE char01,
l_file TYPE string.
DATA: wf_object(30) TYPE c,
wf_tablnm TYPE rsdchkview.
wf_object = 'myprogram'.
DATA i TYPE i.
DATA:
lr_mdmt TYPE REF TO cl_rsdmd_mdmt,
lr_mdmtr TYPE REF TO cl_rsdmd_mdmtr,
lt_idocstate TYPE rsarr_t_idocstate,
lv_subrc TYPE sysubrc.
TYPES : BEGIN OF test_struc,
/bic/myprogram TYPE /bic/oimyprogram,
txtmd TYPE rstxtmd,
END OF test_struc.
DATA : tb_assum TYPE TABLE OF /bic/pmyprogram.
DATA: wa_ztext TYPE /bic/tmyprogram,
myprogram_temp TYPE ziott_assum,
wa_myprogram TYPE /bic/pmyprogram.
DATA : test_upload TYPE STANDARD TABLE OF test_struc,
wa2 TYPE test_struc.
DATA : wa_test_upload TYPE test_struc,
ztable_data TYPE TABLE OF /bic/pmyprogram,
ztable_text TYPE TABLE OF /bic/tmyprogram,
wa_upld_text TYPE /bic/tmyprogram,
wa_upld_data TYPE /bic/pmyprogram,
t_assum TYPE ziott_assum.
DATA : wa1 LIKE test_upload.
wf_title = text-026.
CALL METHOD cl_gui_frontend_services=>file_open_dialog
EXPORTING
window_title = wf_title
default_extension = 'txt'
file_filter = 'Tab delimited Text Files (*.txt)'
CHANGING
file_table = lt_filetab
rc = l_count
user_action = l_action
EXCEPTIONS
file_open_dialog_failed = 1
cntl_error = 2
OTHERS = 3. "#EC NOTEXT
IF sy-subrc <> 0.
EXIT.
ENDIF.
LOOP AT lt_filetab INTO ls_filetab.
l_file = ls_filetab.
ENDLOOP.
CHECK l_action = 0.
IF l_file IS INITIAL.
EXIT.
ENDIF.
l_separator = 'X'.
wa_fieldcat-fieldname = 'test'.
wa_fieldcat-dd_roll = wf_delemt.
APPEND wa_fieldcat TO tb_fieldcat.
CALL FUNCTION 'MESSAGES_INITIALIZE'.
CLEAR wa_test_upload.
* Upload file from front-end (PC)
* File format is tab-delimited ASCII
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = l_file
has_field_separator = l_separator
TABLES
data_tab = test_upload
EXCEPTIONS
file_open_error = 1
file_read_error = 2
no_batch = 3
gui_refuse_filetransfer = 4
invalid_type = 5
no_authority = 6
unknown_error = 7
bad_data_format = 8
header_not_allowed = 9
separator_not_allowed = 10
header_too_long = 11
unknown_dp_error = 12
access_denied = 13
dp_out_of_memory = 14
disk_full = 15
dp_timeout = 16
OTHERS = 17.
IF sy-subrc <> 0.
EXIT.
ELSE.
CALL FUNCTION 'MESSAGES_INITIALIZE'.
IF test_upload IS NOT INITIAL.
DESCRIBE TABLE test_upload LINES rows_read.
CLEAR : wa_test_upload,wa_upld_data.
LOOP AT test_upload INTO wa_test_upload.
CLEAR : p_error.
rows_read = sy-tabix.
IF wa_test_upload-/bic/myprogram IS INITIAL.
p_error = 'X'.
MESSAGE s153 WITH wa_test_upload-/bic/myprogram sy-tabix.
CONTINUE.
ELSE.
TRANSLATE wa_test_upload-/bic/myprogram TO UPPER CASE.
wa_upld_text-txtmd = wa_test_upload-txtmd.
wa_upld_text-txtsh = wa_test_upload-txtmd.
wa_upld_text-langu = sy-langu.
wa_upld_data-chrt_accts = 'xyz1'.
wa_upld_data-co_area = '12'.
wa_upld_data-/bic/zxyzbcsg = 'Iy'.
wa_upld_data-objvers = 'A'.
wa_upld_data-changed = 'I'.
wa_upld_data-/bic/zass_mdl = 'rrr'.
wa_upld_data-/bic/zass_typ = 'I'.
wa_upld_data-/bic/zdriver = 'yyy'.
wa_upld_text-langu = sy-langu.
MOVE-CORRESPONDING wa_test_upload TO wa_upld_data.
MOVE-CORRESPONDING wa_test_upload TO wa_upld_text.
APPEND wa_upld_data TO ztable_data.
APPEND wa_upld_text TO ztable_text.
ENDIF.
ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ztable_data.
DELETE ADJACENT DUPLICATES FROM ztable_text.
IF ztable_data IS NOT INITIAL.
CALL METHOD cl_rsdmd_mdmt=>factory
EXPORTING
i_chabasnm = 'myprogram'
IMPORTING
e_r_mdmt = lr_mdmt
EXCEPTIONS
invalid_iobjnm = 1
OTHERS = 2.
CALL FUNCTION 'MESSAGES_INITIALIZE'.
**Lock the Infoobject to update
CALL FUNCTION 'RSDG_IOBJ_ENQUEUE'
EXPORTING
i_objnm = wf_object
i_scope = '1'
i_msgty = rs_c_error
EXCEPTIONS
foreign_lock = 1
sys_failure = 2.
IF sy-subrc = 1.
MESSAGE i107(zddd_rr) WITH wf_object sy-msgv2.
EXIT.
ELSEIF sy-subrc = 2.
MESSAGE i108(zddd_rr) WITH wf_object.
EXIT.
ENDIF.
*****Update Master Table
IF ztable_data IS NOT INITIAL.
CALL FUNCTION 'RSDMD_WRITE_ATTRIBUTES_TEXTS'
EXPORTING
i_iobjnm = 'myprogram'
i_tabclass = 'M'
*       i_t_attr             = lt_attr   " lt_attr is never declared or filled; omit it
TABLES
i_t_table = ztable_data
EXCEPTIONS
attribute_name_error = 1
iobj_not_found = 2
generate_program_error = 3
OTHERS = 4.
IF sy-subrc <> 0.
CALL FUNCTION 'MESSAGE_STORE'
EXPORTING
arbgb = 'zddd_rr'
msgty = 'E'
txtnr = '054'
msgv1 = text-033
EXCEPTIONS
OTHERS = 3.
MESSAGE e054(zddd_rr) WITH 'myprogram'.
ELSE.
CALL FUNCTION 'MESSAGE_STORE'
EXPORTING
arbgb = 'zddd_rr'
msgty = 'S'
txtnr = '053'
msgv1 = text-033
EXCEPTIONS
OTHERS = 3.
ENDIF.
*endif.
*****update Text Table
IF ztable_text IS NOT INITIAL.
CALL FUNCTION 'RSDMD_WRITE_ATTRIBUTES_TEXTS'
EXPORTING
i_iobjnm = 'myprogram'
i_tabclass = 'T'
TABLES
i_t_table = ztable_text
EXCEPTIONS
attribute_name_error = 1
iobj_not_found = 2
generate_program_error = 3
OTHERS = 4.
IF sy-subrc <> 0.
CALL FUNCTION 'MESSAGE_STORE'
EXPORTING
arbgb = 'zddd_rr'
msgty = 'E'
txtnr = '055'
msgv1 = text-033
EXCEPTIONS
OTHERS = 3.
ENDIF.
ENDIF.
ELSE.
MESSAGE s178(zddd_rr).
ENDIF.
ENDIF.
COMMIT WORK.
CALL FUNCTION 'RSD_CHKTAB_GET_FOR_CHA_BAS'
EXPORTING
i_chabasnm = 'myprogram'
IMPORTING
e_chktab = wf_tablnm
EXCEPTIONS
name_error = 1.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
****Release locks on Infoobject
CALL FUNCTION 'RSDG_IOBJ_DEQUEUE'
EXPORTING
i_objnm = 'myprogram'
i_scope = '1'.
ENDIF.
ENDIF.
PERFORM data_selection .
PERFORM update_alv_grid_display.
CALL FUNCTION 'MESSAGES_SHOW'.
ENDFORM.
Please let me know how I can skip first record of the flat file?
Regards,
S

Go through this; hope you can get some idea:
REPORT ztest no standard page heading line-size 255.
*  Declaration  *
TYPES t_itab1 TYPE alsmex_tabline.
types: begin of t_csks,
kostl like csks-kostl,
end of t_csks.
types: begin of t_cska,
kstar like cska-kstar,
end of t_cska.
data: begin of t_flatfile,
docdate like COHEADER-BLDAT,
postdate like COHEADER-BUDAT,
doctext like COHEADER-BLTXT,
costele like RK23F-KSTAR,
amount like RK23F-WTGBTR,
scostctr like RK23F-SKOSTL,
rcostctr like RK23F-EKOSTL,
rintorder like RK23F-EAUFNR,
end of t_flatfile.
data: begin of t_flatfile1,
docdate like COHEADER-BLDAT,
postdate like COHEADER-BUDAT,
doctext like COHEADER-BLTXT,
costele like RK23F-KSTAR,
amount like RK23F-WTGBTR,
scostctr like RK23F-SKOSTL,
rcostctr like RK23F-EKOSTL,
rintorder like RK23F-EAUFNR,
NUM LIKE SY-INDEX,
end of t_flatfile1.
data: itab like table of t_flatfile with header line.
data: itab2 like table of t_flatfile1 with header line.
DATA: it_itab1 TYPE STANDARD TABLE OF t_itab1 WITH HEADER LINE,
MESSTAB1 LIKE BDCMSGCOLL OCCURS 0 WITH HEADER LINE,
MESSTAB LIKE BDCMSGCOLL OCCURS 0 WITH HEADER LINE.
data: begin of bdcdata occurs 0.
include structure bdcdata.
data: end of bdcdata.
data:t_lin type i VALUE '0',
u_rec type i VALUE '0',
s_rec type i VALUE '0'.
data: it_csks type standard table of t_csks,
wa_csks type t_csks.
data: it_cska type standard table of t_cska,
wa_cska type t_cska.
*Selection Screen
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-010.
parameters: p_docdat LIKE COHEADER-BLDAT obligatory,
p_postda LIKE COHEADER-BUDAT obligatory,
p_doctxt LIKE COHEADER-BLTXT.
SELECTION-SCREEN END OF BLOCK b1.
SELECTION-SCREEN BEGIN OF BLOCK b2 WITH FRAME TITLE text-011.
parameters: p_file LIKE RLGRAP-FILENAME obligatory,
DIS_MODE LIKE CTU_PARAMS-DISMODE DEFAULT 'N'.
SELECTION-SCREEN END OF BLOCK b2.
*  A T   S E L E C T I O N   S C R E E N  *
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
PERFORM get_local_file_name USING p_file.
*Start of Selection
START-OF-SELECTION.
Perform get_Excel_data.
perform validate_data.
Perform Process_Data.
end-of-selection.
perform display_data.
*&      Form  get_local_file_name
*       text
*       -->P_P_FILE  text
FORM get_local_file_name USING P_P_FILE.
CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
CHANGING
file_name = p_file.
ENDFORM. " get_local_file_name
*&      Form  get_Excel_data
*       text
*       -->  p1  text
*       <--  p2  text
FORM get_Excel_data .
FIELD-SYMBOLS : <FS>.
DATA : V_INDEX TYPE I.
CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
EXPORTING
filename = p_file
i_begin_col = 1
i_begin_row = 2   " start at row 2 to skip the header row
i_end_col = 256
i_end_row = 9999 "65536
TABLES
intern = it_itab1
EXCEPTIONS
inconsistent_parameters = 1
upload_ole = 2
OTHERS = 3.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
Message text-013 type 'E'.
ENDIF.
IF IT_ITAB1[] IS INITIAL.
Message text-001 type 'E'.
else. "IF IT_ITAB1[] IS INITIAL.
data: itab2 like itab occurs 0 with header line.
SORT IT_ITAB1 BY ROW COL.
LOOP AT IT_ITAB1.
MOVE :IT_ITAB1-COL TO V_INDEX.
ASSIGN COMPONENT V_INDEX OF STRUCTURE itab2 TO <FS>.
MOVE : IT_ITAB1-VALUE TO <FS>.
AT END OF ROW.
MOVE-CORRESPONDING itab2 TO itab.
APPEND itab.
CLEAR:itab,itab2.
ENDAT.
endloop.
describe table itab lines t_lin.
endif. "IF IT_ITAB1[] IS INITIAL.
ENDFORM. " get_Excel_data
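For readers puzzling over the LOOP AT IT_ITAB1 ... AT END OF ROW block above: ALSM_EXCEL_TO_INTERNAL_TABLE returns one (row, column, value) entry per cell, and the loop pivots those entries back into one record per spreadsheet row. A Python sketch of the same pivot (the field width and cell values are invented):

```python
# Sketch: pivot per-cell (row, col, value) entries into per-row records,
# mirroring the ASSIGN COMPONENT / AT END OF ROW logic above.
def pivot_cells(cells, width):
    """cells: iterable of (row, col, value); returns one list per row."""
    rows = {}
    for r, c, v in cells:
        rows.setdefault(r, [""] * width)[c - 1] = v   # columns are 1-based
    return [rows[r] for r in sorted(rows)]

cells = [(1, 1, "A001"), (1, 2, "10.00"), (2, 1, "A002"), (2, 2, "20.00")]
print(pivot_cells(cells, 2))
# [['A001', '10.00'], ['A002', '20.00']]
```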
*&      Form  Process_Data
*       text
*       -->  p1  text
*       <--  p2  text
FORM Process_Data .
data:l_tabix type sy-tabix.
data:l_periv like t001-periv,
l_monat like bkpf-monat,
l_gjahr like bkpf-gjahr,
l_amt(21) type c.
data: l_ddate(10),
l_pdate(10).
WRITE p_docdat TO l_ddate.
WRITE p_postda TO l_pdate.
clear: l_periv,l_monat,l_gjahr.
select single periv from t001 into l_periv where bukrs = '5000'. "P_bukrs
if sy-subrc eq 0.
l_gjahr = p_postda+0(4).
call function 'FI_PERIOD_DETERMINE'
EXPORTING
i_budat = p_postda
i_bukrs = '5000' "p_bukrs
i_periv = l_periv
i_gjahr = l_gjahr
IMPORTING
e_monat = l_monat.
clear:l_periv.
endif.
loop at itab2.
refresh:bdcdata.
clear:bdcdata.
l_tabix = sy-tabix.
perform bdc_dynpro using 'SAPLK23F1' '1200'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
perform bdc_field using 'COHEADER-SEND_REC_REL'
'10SAP'.
perform bdc_field using 'RK23F-STATUS'
'S'.
perform bdc_field using 'COHEADER-BLDAT'
*                       itab-docdate
                        l_ddate.
perform bdc_field using 'COHEADER-BUDAT'
*                       itab-postdate
                        l_pdate.
perform bdc_field using 'COHEADER-PERIO'
l_monat. "'9'.
perform bdc_field using 'COHEADER-BLTXT'
*                       itab-doctext
                        p_doctxt.
perform bdc_field using 'RK23F-KSTAR'
itab2-costele.
WRITE itab2-amount TO l_amt.
*l_amt = itab-amount.
condense l_amt no-gaps.
perform bdc_field using 'RK23F-WTGBTR'
*                       itab-amount
                        l_amt.
perform bdc_field using 'RK23F-WAERS'
'USD'.
*perform bdc_field using 'RK23F-SGTXT'
*                       itab-doctext.
perform bdc_field using 'RK23F-SKOSTL'
itab2-scostctr.
perform bdc_field using 'BDC_CURSOR'
'RK23F-EAUFNR'.
perform bdc_field using 'RK23F-EKOSTL'
itab2-rcostctr.
perform bdc_field using 'RK23F-EAUFNR'
itab2-rintorder.
perform bdc_dynpro using 'SAPLK23F1' '1200'.
perform bdc_field using 'BDC_OKCODE'
'=POST'.
perform bdc_field using 'COHEADER-SEND_REC_REL'
'10SAP'.
perform bdc_field using 'RK23F-STATUS'
'S'.
perform bdc_field using 'COHEADER-BLDAT'
*                       itab-docdate
                        l_ddate.
perform bdc_field using 'COHEADER-BUDAT'
*                       itab-postdate
                        l_pdate.
perform bdc_field using 'COHEADER-PERIO'
*                       '9'
                        l_monat.
perform bdc_field using 'COHEADER-BLTXT'
*                       itab-doctext
                        p_doctxt.
perform bdc_field using 'BDC_CURSOR'
'RK23F-KSTAR'.
perform bdc_field using 'RK23F-WAERS'
'USD'.
CALL TRANSACTION 'KB15N' USING BDCDATA MODE DIS_MODE MESSAGES INTO MESSTAB.
If sy-subrc = 0.
s_rec = s_rec + 1.
ELSE.
u_rec = u_rec + 1.
move ITAB2-NUM to messtab1-msgv1.
concatenate itab2-costele ' | ' itab2-scostctr ' | ' itab2-rcostctr ' | ' itab2-rintorder into messtab1-msgv2.
condense messtab1-msgv2.
condense messtab1-msgv1.
append messtab1.
endif.
clear:itab2.
endloop.
ENDFORM. " Process_Data
*  BDC_DYNPRO  *
FORM BDC_DYNPRO USING PROGRAM DYNPRO.
CLEAR BDCDATA.
BDCDATA-PROGRAM = PROGRAM.
BDCDATA-DYNPRO = DYNPRO.
BDCDATA-DYNBEGIN = 'X'.
APPEND BDCDATA.
ENDFORM. "BDC_DYNPRO
*  BDC_FIELD  *
FORM BDC_FIELD USING FNAM FVAL.
IF FVAL <> ''. "NODATA.
CLEAR BDCDATA.
BDCDATA-FNAM = FNAM.
BDCDATA-FVAL = FVAL.
APPEND BDCDATA.
ENDIF.
ENDFORM. "BDC_FIELD
*&      Form  display_data
*       text
*       -->  p1  text
*       <--  p2  text
FORM display_data .
skip 2.
write:/15 text-002.
skip 2.
write:/8 text-003.
SKIP.
write:/12 text-008,
25 P_DOCDAT.
SKIP.
write:/12 text-009,
25 P_POSTDA.
SKIP.
write:/12 text-012,
25 P_DOCTXT.
SKIP.
write:/12 text-004,
25 p_file.
skip 2.
write:/8 text-005,
60 t_lin.
skip.
write:/8 text-006,
60 s_rec.
skip.
write:/8 text-007,
60 u_rec.
skip.
write:/10 'row no',
20 'Information'.
skip.
loop at messtab1.
write:/10 messtab1-msgv1,
20 messtab1-msgv2.
clear:messtab1.
endloop.
ENDFORM. " display_data
*&      Form  validate_data
*       text
*       -->  p1  text
*       <--  p2  text
FORM validate_data .
data: l_tabix1 type sy-tabix.
data: l_tabix2 type sy-tabix.
if not itab[] is initial.
select kostl from CSKS into table it_csks.
if sy-subrc eq 0.
sort it_csks by kostl.
endif.
select kstar from CSKA into table it_cska.
if sy-subrc eq 0.
sort it_cska by kstar.
endif.
loop at itab.
l_tabix1 = sy-tabix.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
INPUT = itab-scostctr
IMPORTING
OUTPUT = itab-scostctr .
read table it_csks into wa_csks with key kostl = itab-scostctr.
if sy-subrc ne 0.
u_rec = u_rec + 1.
L_TABIX2 = l_tabix1 + 1.
move l_tabix2 to messtab1-msgv1.
concatenate itab-costele ' | ' itab-scostctr ' | ' itab-rcostctr ' | ' itab-rintorder into messtab1-msgv2.
condense messtab1-msgv2.
condense messtab1-msgv1.
append messtab1.
clear:wa_csks.
CLEAR:L_TABIX2.
continue.
endif.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
INPUT = itab-rcostctr
IMPORTING
OUTPUT = itab-rcostctr .
read table it_csks into wa_csks with key kostl = itab-rcostctr.
if sy-subrc ne 0.
u_rec = u_rec + 1.
L_TABIX2 = l_tabix1 + 1.
move l_tabix2 to messtab1-msgv1.
concatenate itab-costele ' | ' itab-scostctr ' | ' itab-rcostctr ' | ' itab-rintorder into messtab1-msgv2.
condense messtab1-msgv2.
condense messtab1-msgv1.
append messtab1.
clear:wa_csks.
CLEAR:L_TABIX2.
continue.
endif.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
INPUT = itab-costele
IMPORTING
OUTPUT = itab-costele .
read table it_cska into wa_cska with key kstar = itab-costele.
if sy-subrc ne 0.
u_rec = u_rec + 1.
L_TABIX2 = l_tabix1 + 1.
move l_tabix2 to messtab1-msgv1.
concatenate itab-costele ' | ' itab-scostctr ' | ' itab-rcostctr ' | ' itab-rintorder into messtab1-msgv2.
condense messtab1-msgv2.
condense messtab1-msgv1.
append messtab1.
clear:wa_cska.
CLEAR:L_TABIX2.
continue.
endif.
move-corresponding itab to itab2.
MOVE l_tabix1 TO ITAB2-NUM.
append itab2.
clear: itab2.
clear:itab.
endloop.
else.
message 'No records in File' type 'S'.
endif.
ENDFORM. " validate_data
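A note on the CONVERSION_EXIT_ALPHA_INPUT calls in the validation form above: they left-pad numeric keys with zeros to the stored field length, so the lookup against the master-data tables matches the stored format. A Python sketch of that idea (the 10-character length and the master-data set are illustrative):

```python
# Sketch of ALPHA input conversion followed by a master-data lookup.
def alpha_input(value: str, length: int = 10) -> str:
    """Mimic ALPHA input conversion: zero-pad purely numeric keys."""
    v = value.strip()
    return v.rjust(length, "0") if v.isdigit() else v

valid_cost_centers = {"0000005000"}                 # invented master data
print(alpha_input("5000"))                          # 0000005000
print(alpha_input("5000") in valid_cost_centers)    # True
```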
Extract BW infocube data to a flat file
Hi Folks,
I have a requirement where I need to extract data from Infocubes to a flat file.
Please suggest an ABAP approach: is there a function module I can use to extract data from the cube into an internal table?
Your immediate help is required.
Thanks,
Rahul

You can simply try a workaround for your problem:
Right-click on the cube -> Display data -> menu bar -> Edit -> Download -> Save as spreadsheet (.xls), and extract the data.
If you only want .txt, save the Excel file as .csv; you can then open it in Notepad and save it as .txt.
That saves you from writing any ABAP code.
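The last manual step (open the .csv in Notepad and re-save as .txt) can also be scripted. A small Python sketch, converting comma-separated lines to tab-delimited ones in memory:

```python
# Sketch: turn comma-separated export lines into tab-delimited lines,
# handling quoted fields that contain commas.
import csv

def csv_to_tab(src_lines):
    """Convert CSV lines to tab-delimited lines."""
    out = []
    for row in csv.reader(src_lines):
        out.append("\t".join(row))
    return out

print(csv_to_tab(['a,b,"c,d"']))   # ['a\tb\tc,d']
```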
-laksh -
HT2486 how can I export the contact book data to a csv file for editing?
how can I export the contact book data to a csv file for editing?
You can edit a card right in Contacts; there is no need to export and re-import.
Select a name and, on the right side, click the Edit button.
How to save info in a meta-data of a jpg file?
Hi, I need to know how to save info in the metadata of a JPG file.
This is my code (it doesn't work). I get this exception:
javax.imageio.metadata.IIOInvalidTreeException: JPEGvariety and markerSequence nodes must be present
at com.sun.imageio.plugins.jpeg.JPEGMetadata.mergeNativeTree(JPEGMetadata.java:1088)
at com.sun.imageio.plugins.jpeg.JPEGMetadata.mergeTree(JPEGMetadata.java:1061)
at playaround.IIOMetaDataWriter.run(IIOMetaDataWriter.java:59)
at playaround.Main.main(Main.java:14)
package playaround;
import java.io.*;
import java.util.Iterator;
import java.util.Locale;
import javax.imageio.*;
import javax.imageio.metadata.IIOMetadata;
import javax.imageio.metadata.IIOMetadataNode;
import javax.imageio.plugins.jpeg.JPEGImageWriteParam;
import javax.imageio.stream.*;
import org.w3c.dom.*;
public class IIOMetaDataWriter {
public static void run(String[] args) throws IOException{
try {
File f = new File("C:/images.jpg");
ImageInputStream ios = ImageIO.createImageInputStream(f);
Iterator readers = ImageIO.getImageReaders(ios);
ImageReader reader = (ImageReader) readers.next();
reader.setInput(ImageIO.createImageInputStream(f));
ImageWriter writer = ImageIO.getImageWriter(reader);
writer.setOutput(ImageIO.createImageOutputStream(f));
JPEGImageWriteParam param = new JPEGImageWriteParam(Locale.getDefault());
IIOMetadata metaData = writer.getDefaultStreamMetadata(param);
String MetadataFormatName = metaData.getNativeMetadataFormatName();
IIOMetadataNode root = (IIOMetadataNode)metaData.getAsTree(MetadataFormatName);
IIOMetadataNode markerSequence = getChildNode(root, "markerSequence");
if (markerSequence == null) {
    // both a markerSequence and a JPEGvariety node must exist in the tree
    markerSequence = new IIOMetadataNode("markerSequence");
    root.appendChild(markerSequence);
}
IIOMetadataNode jv = getChildNode(root, "JPEGvariety");
if (jv == null) {
    jv = new IIOMetadataNode("JPEGvariety");
    root.appendChild(jv);
}
IIOMetadataNode child = getChildNode(jv, "myNode");
if (child == null) {
    child = new IIOMetadataNode("myNode");
    jv.appendChild(child);
}
child.setAttribute("myAttName", "myAttValue");
metaData.mergeTree(MetadataFormatName, root);
} catch (Throwable t) {
    t.printStackTrace();
}
}
protected static IIOMetadataNode getChildNode(Node n, String name) {
    NodeList nodes = n.getChildNodes();
    for (int i = 0; i < nodes.getLength(); i++) {
        Node child = nodes.item(i);
        if (name.equals(child.getNodeName())) {
            return (IIOMetadataNode) child;
        }
    }
    return null;
}
static void displayMetadata(Node node, int level) {
    indent(level); // emit open tag
    System.out.print("<" + node.getNodeName());
    NamedNodeMap map = node.getAttributes();
    if (map != null) { // print attribute values
        int length = map.getLength();
        for (int i = 0; i < length; i++) {
            Node attr = map.item(i);
            System.out.print(" " + attr.getNodeName() +
                "=\"" + attr.getNodeValue() + "\"");
        }
    }
    Node child = node.getFirstChild();
    if (child != null) {
        System.out.println(">"); // close current tag
        while (child != null) { // emit child tags recursively
            displayMetadata(child, level + 1);
            child = child.getNextSibling();
        }
        indent(level); // emit close tag
        System.out.println("</" + node.getNodeName() + ">");
    } else {
        System.out.println("/>");
    }
}

static void indent(int level) {
    for (int i = 0; i < level; i++) {
        System.out.print("  ");
    }
}
}

Hi,
Yes, you need to store the data in a table and fetch it when the page is opened.
A simple way is to create a table with a few columns, e.g. including a CLOB column, and then create a form based on that table.
Then modify the item types as you like, e.g. use an HTML editor for the CLOB column.
Regards,
Jari -
How can I make a query by date with several TDMS files?
Hi,
I have a project that writes and reads TDMS files every day (one file per day, named with the date and time). I can import those files into Excel (choose a file and load it). My question: how can I make a query by date across those TDMS files? I'd like a time stamp for a start date and a stop date, so I can compare the TDMS file dates and sum the values in the matching channels. For example, I save files named "02/01/2013", "03/01/2013" and "04/01/2013"; I'd like to set the start date to the "02/01/2013" file and the stop date to the "04/01/2013" file, then sum all the values in that range across my existing TDMS files. How can I do that?
Thanks all,
Val

Hello Val_Auto.
You're Brazilian, no? Me too. =^-^=
I converted the VI to your LabVIEW version (8.5); it is attached to this reply.
This VI searches all your TDMS files in a range of dates and joins them into a single TDMS file. I hope this is what you wanted.
Query TDMS is the main VI. The TDMS Search VI changes the date format coming out of the calendar control (DD/MM/YYYY) to DD-MM-YYYY, because you can't name files using "/". I chose "-", but if necessary you should change it to keep the same format as your TDMS files.
If you have any doubt about how it works, or about how to adapt the VI for your application, I am at your disposal.
Thank you for your contact; I hope this helps and that you succeed with your application.
Wesley Rocha
Application Engineer
National Instruments Brazil
Visite a nossa comunidade em PORTUGUÊS!!!
Attachments:
Query TDMS.vi 62 KB
tdms search.vi 24 KB -
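The date-range file selection Wesley describes can be sketched as follows (Python for illustration; the file names follow the DD-MM-YYYY pattern mentioned in the reply):

```python
# Sketch: pick the files whose DD-MM-YYYY names fall between a start
# and a stop date. File names are illustrative.
from datetime import date

def files_in_range(names, start, stop):
    """names like '02-01-2013.tdms'; start/stop are datetime.date."""
    picked = []
    for name in names:
        d, m, y = name.split(".")[0].split("-")
        file_date = date(int(y), int(m), int(d))
        if start <= file_date <= stop:
            picked.append(name)
    return picked

names = ["02-01-2013.tdms", "03-01-2013.tdms", "05-01-2013.tdms"]
print(files_in_range(names, date(2013, 1, 2), date(2013, 1, 4)))
# ['02-01-2013.tdms', '03-01-2013.tdms']
```

The channel values from the selected files would then be read and summed, as the joined-TDMS VI does.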
Step by Step details on how to load data from a flat file
Hi, can anyone explain how to load data from a flat file? Please give me step-by-step details. Thanks.
Hi Sonam,

It is very easy to load data from a flat file, compared with other extraction methods. Here are the steps to load transaction data from a flat file:

Step 1: Create the flat file.

Step 2: Log on to SAP BW (transaction RSA1 or RSA13) and note the flat-file source system icon (the PC icon).

Step 3: Create the required InfoObjects.
3.1: Create an InfoArea (Modeling > InfoObjects (root node) > context menu > Create InfoArea).
3.2: Create characteristic/key-figure InfoObject catalogs (select the InfoArea > context menu > Create InfoObject Catalog).
3.3: Create characteristic and key-figure InfoObjects according to your requirement, and activate them.

Step 4: Create an InfoSource for transaction data, create the transfer structure, and maintain the communication structure.
4.1: First create an application component (Modeling > InfoSources (root node) > context menu > Create Application Component).
4.2: Create the InfoSource for transaction data (select the application component > context menu > Create InfoSource); choose "Flexible Update", give the InfoSource a name, and continue.
4.4: Important: assign the DataSource (expand the application component > expand your InfoSource > context menu > Assign DataSource; as source system, browse and choose your flat-file source system, e.g. the PC icon, and continue).
4.5: Define the DataSource/transfer structure for the InfoSource: select the Transfer Structure tab and fill the InfoObject fields in the exact order of the flat-file structure.
4.6: Assign transfer rules: select the Transfer Rules tab and choose the Propose Transfer Rules icon. (If the data target is an ODS object, include 0RECORDMODE in the communication structure.) Then activate.

Step 5: Create the data target (InfoCube/ODS object).
5.1: Create the InfoCube (select your InfoArea > context menu > Create InfoCube).
5.2: Create the InfoCube structure: fill the structure pane with the required InfoObjects (select the InfoSource icon, find your InfoSource, double-click, and answer "Yes" to the InfoObject assignment). Maintain at least one time characteristic (e.g. 0CALDAY).
5.3: Define and assign dimensions for your characteristics, then activate.

Step 6: Create update rules for the InfoCube using the InfoSource (select your InfoCube > context menu > Create Update Rules; enter your InfoSource as the data source, press Enter, and activate the update rules). Now check the data flow.

Step 7: Schedule and load the data.
7.1: Create an InfoPackage (Modeling > InfoSources > expand your application component > expand your InfoSource > select the DataSource assignment icon > context menu > Create InfoPackage). Give the InfoPackage a description, select your DataSource, and continue. Then:
- External Data tab: choose Client Workstation or Application Server, and give the file name, file type and data separator.
- Processing tab: choose "PSA and then into Data Targets".
- Data Targets tab: select the data targets and tick the "Update Data Target" checkbox.
- Update tab: choose Full Update.
- Schedule tab: select "Start data load immediately" and press Start.

Step 8: Monitor the data: check the data in the PSA and in the data targets.

I hope this helps you.
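One point worth illustrating from step 4.5 above: the flat file's column order must match the transfer structure exactly. A sketch of generating such a file (Python; the InfoObject names and values are invented for illustration):

```python
# Sketch: write flat-file rows whose column order matches the transfer
# structure, whatever order the source records carry their fields in.
import csv
import io

TRANSFER_STRUCTURE = ["0CALDAY", "0MATERIAL", "0AMOUNT"]  # must match BW order

def write_flat_file(rows):
    """rows: list of dicts keyed by InfoObject name; returns CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow([row[field] for field in TRANSFER_STRUCTURE])
    return buf.getvalue()

rows = [{"0MATERIAL": "M-01", "0AMOUNT": "100", "0CALDAY": "20080101"}]
print(write_flat_file(rows).strip())   # 20080101,M-01,100
```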
How to load data from a flat file which is there in the application server
HI All,
How do I load data from a flat file which is on the application server?

Hi,
Firstly you will need to place the file(s) in the AL11 path. Then in your infopackage in "Extraction" tab you need to select "Application Server" option. Then you need to specify the path as well as the exact file you want to load by using the browsing button.
If your file name keeps changing on a daily basis i.e. name_ddmmyyyy.csv, then in the Extraction tab you have the option to write an ABAP routine that generates the file name. Here you will need to append sy-datum to "name" and then append ".csv" to generate complete filename.
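The filename routine described here just appends the date to the base name; a Python sketch of the same logic (the base name and the ddmmyyyy format follow the example above):

```python
# Sketch: build name_ddmmyyyy.csv from a date, as the ABAP routine
# would from sy-datum.
from datetime import date

def daily_filename(base: str, d: date) -> str:
    return f"{base}_{d.strftime('%d%m%Y')}.csv"

print(daily_filename("name", date(2008, 1, 31)))   # name_31012008.csv
```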
Please let me know if this is helpful or if you need any more inputs.
Thanks & Regards,
Nishant Tatkar. -
How do I get the last changed date & time of a file in app server?
Hi Experts,
How do I get the last changed date & time of a file in app server?
Thanks!
- Anthony

Hi,
that's what I use...
CALL 'C_DIR_READ_FINISH'. " just to be sure
CALL 'C_DIR_READ_START' ID 'DIR' FIELD p_path.
IF sy-subrc <> 0.
EXIT. "Error
ENDIF.
DO.
CLEAR l_srvfil.
CALL 'C_DIR_READ_NEXT'
ID 'TYPE' FIELD l_srvfil-type
ID 'NAME' FIELD l_srvfil-file
ID 'LEN' FIELD l_srvfil-size
ID 'OWNER' FIELD l_srvfil-owner
ID 'MTIME' FIELD l_mtime
ID 'MODE' FIELD l_srvfil-attri.
* l_srvfil-dir = p_path .
IF sy-subrc = 1.
EXIT.
ENDIF." sy-subrc <> 0.
PERFORM p_to_date_time_tz
USING l_mtime
CHANGING l_srvfil-mod_time
l_srvfil-mod_date.
TRANSLATE l_srvfil-type TO UPPER CASE. "#EC TRANSLANG
PERFORM translate_attribute CHANGING l_srvfil-attri.
CHECK:
NOT l_srvfil-file = '.',
l_srvfil-type = 'D' OR
l_srvfil-type = 'F' .
APPEND l_srvfil TO lt_srvfil.
ENDDO.
CHECK NOT lt_srvfil IS INITIAL.
pt_srvfil = lt_srvfil.
FORM p_to_date_time_tz USING p_ptime TYPE p
CHANGING p_time TYPE syuzeit
p_date TYPE sydatum.
DATA:
l_abaptstamp TYPE timestamp,
l_time TYPE int4,
l_opcode TYPE x VALUE 3,
l_abstamp TYPE abstamp.
l_time = p_ptime.
CALL 'RstrDateConv'
ID 'OPCODE' FIELD l_opcode
ID 'TIMESTAMP' FIELD l_time
ID 'ABAPSTAMP' FIELD l_abstamp.
l_abaptstamp = l_abstamp.
CONVERT TIME STAMP l_abaptstamp TIME ZONE sy-zonlo INTO DATE p_date
TIME p_time.
ENDFORM. " p_to_date_time_tz
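The C_DIR_READ_NEXT / RstrDateConv pair above retrieves the file's Unix modification time and converts it to a local date and time. For comparison, a sketch of the same result outside ABAP (the example path is a placeholder):

```python
# Sketch: read a file's last-modified timestamp and convert it to a
# local datetime, as the RstrDateConv call does for MTIME.
import os
from datetime import datetime

def last_changed(path: str) -> datetime:
    """Return the file's last-modified timestamp as a local datetime."""
    return datetime.fromtimestamp(os.stat(path).st_mtime)

# Example (made-up path):
# print(last_changed("/usr/sap/trans/data/somefile.dat"))
```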
Regards,
Clemens -
Uploading Data from a Flat File
Hi
I am trying to Upload data from a Flat File to the MDS. I have a few questions on the Process.
a) XI would be the interface, and one end would be a file adapter with the flat-file format. On the other end, which interface should I use: ABABusinessPartnerIn or MDMBusinessPartnerIn? I do not understand the difference between them and where each should be used.
b) At the moment I want to map the data to the standard object type, Business Partner BUS1006. This also has a staging area already defined in the system. Can I view the data which is imported into the staging area? How so?
c) The structure (or data) that I wish to upload has few fields, and does not map to the BP structure easily. In such a scenario, does it make sense to:
(i) create a new object type, or
(ii) create a new Business Partner type with only the appropriate fields?
What I need to know is whether either of these options is feasible, and what the pros and cons are in terms of effort, skill set and interaction with SAP Development.
Kindly do reply if you have any answers to these questions.
Regards,
Gaurav

Hi Markus,
Thanks for your inputs.
I have tried uploading some data from a flat file to the Business Partner object type BUS1006. I initially got some ABAP parsing errors on the MDM side, but after correcting those, I find my message triggers a short dump: the method SET_OBJECTKEYS does not find any keys in the BP structure that has been created.
Q1 - How do I get around this problem? Is it necessary to specify keys in the PartyID node of the interface ABABusinessPartnerIn, or is it something else?
Q2 - How is this general process supposed to work? I would assume that for staging I would get incomplete master data, or data from flat files, which need not necessarily contain keys. The aim is to use the matching strategies in the CI to identify duplicates and consolidate them.
Thanks in advance for your reply.
Regards,
Gaurav -
BAPI to upload data from a flat file to VA01
Hi guys,
I have a requirement wherein I need to upload data from a flat file to VA01. Please tell me how to go about this.
Thanks and regards,
Frank.

Hi,
I posted this code previously as well:
*  Include YCL_CREATE_SALES_DOCU
*  Form  salesdocu
*  This subroutine is used to create a sales order
*      -->P_HEADER            Document header data
*      -->P_HEADERX           Checkbox for header data
*      -->P_ITEM              Item data
*      -->P_ITEMX             Item data checkboxes
*      -->P_LT_SCHEDULES_IN   Schedule line data
*      -->P_LT_SCHEDULES_INX  Checkbox schedule line data
*      -->P_PARTNER           Document partner
*      <--P_W_VBELN           Sales document number
DATA:
lfs_return like line of t_return.
FORM create_sales_document changing P_HEADER like fs_header
P_HEADERX like fs_headerx
Pt_ITEM like t_item[]
Pt_ITEMX like t_itemx[]
P_LT_SCHEDULES_IN like t_schedules_in[]
P_LT_SCHEDULES_INX like t_schedules_inx[]
Pt_PARTNER like t_partner[]
P_w_vbeln like w_vbeln.
* This perform fills the data required for sales order creation
perform sales_fill_data changing p_header
p_headerx
pt_item
pt_itemx
p_lt_schedules_in
p_lt_schedules_inx
pt_partner.
* Function module call to create the sales and distribution document
perform sales_order_creation using p_header
p_headerx
pt_item
pt_itemx
p_lt_schedules_in
p_lt_schedules_inx
pt_partner.
perform return_check using p_w_vbeln .
ENDFORM. " salesdocu
*  Form  commit_work
*  To execute external commit
FORM commit_work .
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
EXPORTING
wait = c_x.
ENDFORM. " Commit_work
* Include ycl_sales_order_header: fills header data and item data
INCLUDE ycl_sales_order_header.
*&---------------------------------------------------------------------*
*&      Form  return_check
*&---------------------------------------------------------------------*
*  Validates the sales order creation
*----------------------------------------------------------------------*
FORM return_check using pr_vbeln type vbeln.
if pr_vbeln is initial.
LOOP AT t_return into lfs_return .
WRITE / lfs_return-message.
clear lfs_return.
ENDLOOP. " Loop at return
else.
perform commit_work. " External Commit
Refresh t_return.
fs_disp-text = text-003.
fs_disp-number = pr_vbeln.
append fs_disp to it_disp.
if p_del eq c_x or p_torder eq c_x or
p_pgi eq c_x or p_bill eq c_x.
perform delivery_creation. " Delivery order creation
endif. " If p_del eq 'X'......
endif. " If p_w_vbeln is initial
ENDFORM. " Return_check
*&---------------------------------------------------------------------*
*&      Form  sales_order_creation
*&---------------------------------------------------------------------*
*  -->  P_P_HEADER            Document header data
*  -->  P_P_HEADERX           Checkboxes for header data
*  -->  P_PT_ITEM             Item data
*  -->  P_PT_ITEMX            Item data checkboxes
*  -->  P_P_LT_SCHEDULES_IN   Schedule line data
*  -->  P_P_LT_SCHEDULES_INX  Checkboxes for schedule line data
*  -->  P_PT_PARTNER          Document partner
*----------------------------------------------------------------------*
FORM sales_order_creation USING P_P_HEADER like fs_header
P_P_HEADERX like fs_headerx
P_PT_ITEM like t_item[]
P_PT_ITEMX like t_itemx[]
P_P_LT_SCHEDULES_IN like t_schedules_in[]
P_P_LT_SCHEDULES_INX like t_schedules_inx[]
P_PT_PARTNER like t_partner[].
CALL FUNCTION 'BAPI_SALESDOCU_CREATEFROMDATA1'
EXPORTING
sales_header_in = p_p_header
sales_header_inx = p_p_headerx
IMPORTING
salesdocument_ex = w_vbeln
TABLES
return = t_return
sales_items_in = p_pt_item
sales_items_inx = p_pt_itemx
sales_schedules_in = p_p_lt_schedules_in
sales_schedules_inx = p_p_lt_schedules_inx
sales_partners = p_pt_partner.
ENDFORM. " sales_order_creation
Please reward points if this was useful. -
Data loading from flat file to cube using bw3.5
Hi Experts,
Kindly give me the detailed steps (with screens) for loading data from a flat file to a cube using BW 3.5. Please help.

Hi,
Procedure
You are in the Data Warehousing Workbench in the DataSource tree.
1. Select the application components in which you want to create the DataSource and choose Create DataSource.
2. On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
The DataSource maintenance screen appears.
3. Go to the General tab page.
a. Enter descriptions for the DataSource (short, medium, long).
b. As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
c. Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
4. Go to the Extraction tab page.
a. Define the delta process for the DataSource.
b. Specify whether you want the DataSource to support direct access to data.
c. Real-time data acquisition is not supported for data transfer from files.
d. Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
Choose Properties if you want to display the general adapter properties.
e. Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
f. Depending on the adapter and the file to be loaded, make further settings.
■ For binary files:
Specify the character record settings for the data that you want to transfer.
■ Text-type files:
Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
Specify the character record settings for the data that you want to transfer.
For ASCII files:
If you are loading data from an ASCII file, the data is requested with a fixed data record length.
For CSV files:
If you are loading data from an Excel CSV file, specify the data separator and the escape character.
Specify the separator that your file uses to divide the fields in the Data Separator field.
If the data separator character is a part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
Suppose you chose the ; (semicolon) character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
If the escape characters do not enclose the value but are used within the value, the system interprets the escape characters as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45".
In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
g. Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
h. Make the settings for currency conversion, as required.
i. Make any further settings that are dependent on your selection, as required.
5. Go to the Proposal tab page.
This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
a. Specify the number of data records that you want to load and choose Upload Sample Data.
The data is displayed in the upper area of the tab page in the format of your file.
The system displays the proposal for the field list in the lower area of the tab page.
b. In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
6. Go to the Fields tab page.
Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
a. To define a field, choose Insert Row and specify a field name.
b. Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
c. Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
d. Change the data type of the field if required.
e. Specify the key fields of the DataSource.
These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
f. Specify whether lowercase is supported.
g. Specify whether the source provides the data in the internal or external format.
h. If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
i. If required, specify a conversion routine that converts data from an external format into an internal format.
j. Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
k. Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
l. Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
7. Check, save and activate the DataSource.
8. Go to the Preview tab page.
If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
This function allows you to check whether the data formats and data are correct.
For More Info: http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm -
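The separator and escape-character behaviour described in step 4f above is the standard CSV quoting convention. As a minimal illustration (this is plain Python, not SAP code; the field values are made up), a value containing the ; separator survives parsing when it is enclosed in the " escape character:

```python
import csv
import io

# A line using ';' as the data separator and '"' as the escape
# character; the second field contains the separator itself.
raw = 'PLANT01;"12;45";USD\n'

reader = csv.reader(io.StringIO(raw), delimiter=';', quotechar='"')
row = next(reader)
print(row)  # → ['PLANT01', '12;45', 'USD']
```

The same convention is what spreadsheet tools apply when they save a sheet as CSV, which is why checking the file in a plain text editor (as step 4f suggests) shows the quoting explicitly.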
Hi all,
I need to export a table data to a flat file.
But the problem is that the data is huge about 200million rows occupying around 60GB of space
If I use SQL*Loader in Toad, it is taking huge time to export
After few months, I need to import the same data again into the table
So please help me which is the efficient and less time taking method to do this
I am very new to this field.
Can some one help me with this?
My oracle database version is 10.2
Thanks in advance.

OK, so first of all I would ask the following questions:
1. Why must you export the data and then re-import it a few months later?
2. Have you read through the documentation for SQLLDR thoroughly ? I know it is like stereo instructions but it has valuable information that will help you.
3. Does the table the data is being re-imported into have anything attached to it, e.g. triggers, indexes, or anything else the DB must do on each record? If so, re-read the SQL*Loader documentation: you can turn all of that off during the import and re-index, etc. at your leisure.
I ask these questions because:
1. I would find a way for whatever happens to this data to be accomplished while it was in the DB.
2. Pumping data over the wire is going to be slow when you are talking about that kind of volume.
3. If you insist that the data must be dumped, massaged, and re-imported, do it on the DB server. Disk I/O is an order of magnitude faster than over-the-wire transfer. -
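If the export does go ahead, one practical concern at 200 million rows is producing files of a manageable size. A hedged Python sketch of splitting the output into fixed-size chunk files as it is written (the chunk size, file-name pattern, and helper name are made up for illustration; in real use `rows` would be a database cursor, e.g. from cx_Oracle, rather than a list):

```python
import csv
import itertools
import os
import tempfile

def dump_in_chunks(rows, chunk_size, out_dir, prefix="export"):
    """Write an iterable of row tuples to numbered CSV files,
    chunk_size rows per file; return the paths written."""
    paths = []
    it = iter(rows)
    for n in itertools.count():
        batch = list(itertools.islice(it, chunk_size))
        if not batch:
            break
        path = os.path.join(out_dir, f"{prefix}_{n:04d}.csv")
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(batch)
        paths.append(path)
    return paths

# A tiny fake result set demonstrates the splitting: 10 rows in
# chunks of 4 produce files of 4, 4 and 2 rows.
fake_rows = [(i, f"name{i}") for i in range(10)]
files = dump_in_chunks(fake_rows, 4, tempfile.mkdtemp())
print(len(files))  # → 3
```

Chunked files also make the later re-import restartable: a failed load can resume from the last chunk instead of the beginning.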
Uploading the data from a flat file into ztable
Hi,
I have a requirement where I have to upload the data from 2 flat files into 2 Z tables (ZRB_HDR, ZRB_ITM). From the 1st flat file, only data for a few fields has to be uploaded into the Z table ZRB_HDR. From the 2nd flat file, data for all the fields has to be uploaded into the Z table ZRB_ITM. How can I do this?
Regards,
Hema

Hi,
Declare two internal tables with the structure of your tables.
Your flat files should be .txt files.
Now use the GUI_UPLOAD function module to upload your flat file into the internal tables.
CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    filename            = 'c:\file1.txt'
    has_field_separator = 'X'
  TABLES
    data_tab            = itab1
  EXCEPTIONS
    OTHERS              = 1.
Use this function twice, once for each table.
Then loop over each internal table and use the INSERT command on the corresponding Z table.
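For reference, has_field_separator = 'X' tells GUI_UPLOAD that the fields in the .txt file are separated by tab characters. A stand-alone Python sketch of the same parsing (purely illustrative, not part of the ABAP solution; the function name and sample values are made up):

```python
# has_field_separator = 'X' means tab-separated fields; this
# helper splits such a flat file into one record per line.
def parse_flat_file(text):
    return [line.split("\t") for line in text.splitlines() if line]

sample = "1000\tHeader A\n2000\tHeader B\n"
records = parse_flat_file(sample)
print(records)  # → [['1000', 'Header A'], ['2000', 'Header B']]
```

Each inner list corresponds to one row of the internal table that GUI_UPLOAD fills, field by field in file order.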
Unload CLOB data to a flat file
Hi,
Does anybody know how to unload CLOB data to a flat file? Can you please give an example? Is there any utility in Oracle to unload data to a flat file? Do we have to use the concatenation operator to unload the data to a flat file?
Any help is appreciated.
regards,
gopu

I don't know if there's an easy way, but you might want to try using the packages UTL_FILE and DBMS_LOB.
Hope that helps.
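The usual pattern with those packages is a loop that reads the CLOB in fixed-size pieces with DBMS_LOB.READ and writes each piece out with UTL_FILE.PUT. This Python sketch mirrors that chunked read/write loop (it is only an illustration of the loop shape: StringIO objects stand in for the real LOB locator and file handle, and the 32767 default matches a common PL/SQL buffer size, not an Oracle requirement):

```python
import io

def unload_lob(lob, out, chunk_size=32767):
    """Copy a large character object to a flat file in fixed-size
    chunks, mirroring a DBMS_LOB.READ / UTL_FILE.PUT loop."""
    total = 0
    while True:
        piece = lob.read(chunk_size)
        if not piece:          # end of LOB reached
            break
        out.write(piece)
        total += len(piece)
    return total

clob = io.StringIO("x" * 100000)   # stand-in for a CLOB locator
target = io.StringIO()             # stand-in for the flat file
print(unload_lob(clob, target))    # → 100000
```

Reading in bounded chunks is the point: it keeps memory use constant no matter how large the CLOB is, which is exactly why DBMS_LOB exposes a piecewise READ rather than returning the whole value at once.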