Writing n records to Unix in n files
I need to write a file to Unix. My requirement is: if I have more than 65,000 records in my final internal table
<fs_table>, then I have to divide the records into multiples of 65,000 and write them to Unix
in separate files. I have written this code for the first 65,000 records.
DESCRIBE TABLE <fs_table> LINES w_line.
IF w_line <= 65000.
* Create Unix file
OPEN DATASET p_unix FOR OUTPUT IN TEXT MODE MESSAGE v_message.
IF NOT sy-subrc IS INITIAL.
MESSAGE e398(00) WITH text-008 p_unix v_message space.
ENDIF.
** Write header to unix file
LOOP AT gt_fieldcat1 INTO st_fieldcat1.
CONCATENATE st_rec st_fieldcat1-seltext_l
INTO st_rec
SEPARATED BY '|'.
ENDLOOP.
CONDENSE st_rec.
TRANSFER st_rec TO p_unix.
** Write data to unix file
LOOP AT <fs_table> ASSIGNING <fs_struc>.
CLEAR: st_rec.
DO.
ASSIGN COMPONENT sy-index OF STRUCTURE <fs_struc> TO
<field>.
IF sy-subrc NE 0.
EXIT.
ENDIF.
v_field = <field>.
IF v_field IS INITIAL.
CONCATENATE st_rec v_field ' |'
INTO st_rec .
ELSE.
CONCATENATE st_rec v_field '|'
INTO st_rec.
ENDIF.
ENDDO.
TRANSFER st_rec TO p_unix.
ENDLOOP.
* Close Unix file
CLOSE DATASET p_unix.
else?? (What should the ELSE branch do when there are more than 65,000 records?)
Hi Reva,
Hope this logic works for your requirement
*_ Initialize variable
lv_new_file = 'X'.
lv_new_start = 1.
clear: lv_new_file_surfix.
*_ Loop 1: to create multiple files with max 65,000 records per file
while lv_new_file = 'X'.
*_ Identify unique file naming convention
add 1 to lv_new_file_surfix.
concatenate '..\new_file' lv_new_file_surfix into p_unix.
*_ Open connection and write header information
open dataset using p_unix.
transfer header..
*_ Write data
clear lv_counter.
*_ Loop 2: to pass data from <fs_itab> into file
loop at <fs_itab> from lv_new_start.
*_ If the record count has already hit 65,000, exit loop 2 and create a new file
add 1 to lv_counter.
if lv_counter > 65000.
lv_new_start = sy-tabix.
exit.
endif.
move data to unix.
transfer data to unix.
*_ End the looping process of 1 and 2
at last.
clear lv_new_file.
endat.
endloop.
*_ close connection
close dataset.
endwhile.
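The same chunking idea, sketched in Python rather than ABAP. The base file name, the header row, and the pipe-separated record format are illustrative assumptions, not part of the original program:

```python
def write_in_chunks(records, base_name, chunk_size=65000):
    """Write records to numbered files, at most chunk_size records per file."""
    file_names = []
    for file_no, start in enumerate(range(0, len(records), chunk_size), start=1):
        name = f"{base_name}{file_no}.txt"           # unique suffix per file
        with open(name, "w") as out:
            out.write("COL1|COL2\n")                 # header row (placeholder)
            for rec in records[start:start + chunk_size]:
                out.write("|".join(str(v) for v in rec) + "\n")
        file_names.append(name)
    return file_names
```

With 130,001 records this produces three files: two with 65,000 records each and one holding the final record.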
Similar Messages
-
Hi Friends,
OS = Windows XP 3
Database = Oracle 11g R2 32 bit
Processor= intel p4 2.86 Ghz
Ram = 2 gb
Virtual memory = 4gb
I was able to install Oracle 11g successfully, but during installation, at the time of database creation, I got the following error many times and ignored it each time... at 55% my installation finally hung and nothing happened after that:
ORA-28056: Writing audit records to Windows Event Log failed. At 55% my installation hung, so I ended the installation and tried to create the database afterward with DBCA, but the same thing happened.
Please, someone, help me out, as I need to install on the same machine.
Thanks and Regards
AAP wrote:
Thanks. Now I am able to create a database, but with one error.
When I created the database using DBCA, at the last stage I got this error:
Database Configuration Assistant : Warning
Enterprise Manager configuration failed due to the following error: Listener is not up or database service is not registered with it. Start the listener, register the database service, and run EM Configuration Assistant again.
But when I checked, the listener was up.
Now what is the problem? I am able to connect and work through SQL*Plus,
but I did not get the link to EM, and when I try to create a new connection in SQL Developer it gives an error (Status: failure - Test failed: the Network Adapter could not establish the connection).
Thanks & Regards
Creation of the dbcontrol requires a connection via the listener. When configuring the dbcontrol as part of database creation, it appears that the dbcontrol creation step runs before the dynamic registration of the database with the listener is complete. Now that the database itself is complete and enough time (really, just a minute or two) has passed to allow the instance to register, use DBCA or EMCA to create the dbcontrol.
Are you able to get a sqlplus connection via the listener (sqlplus scott/tiger@orcl)? That needs to be the first order of business. -
Dump data from server (unix) into flat file on client (PC) on regular schedule
Hi folks,
I wish to dump the contents of a table (Oracle on Unix) into a file residing in a local directory (PC client). I know how to use UTL_FILE, but init.ora resides on Unix; therefore it will create the file in a Unix directory and not on the client machine.
Any ideas?
Hi Bro,
if you want to write a file on the server, use this code:
procedure write_file(
filename varchar2,
mesg varchar2
)
is
file_handle utl_file.file_type;
begin
begin
file_handle := utl_file.fopen( '<FILE_PATH_HERE>', filename, 'A');
exception
when utl_file.invalid_path then
dbms_output.put_line('invalid path');
when utl_file.invalid_operation then
file_handle := utl_file.fopen( '<FILE_PATH_HERE>', filename, 'W');
when others then
raise;
end;
utl_file.put( file_handle, mesg);
utl_file.fclose( file_handle );
exception
when utl_file.invalid_path then
dbms_output.put_line('invalid path');
when utl_file.invalid_operation then
dbms_output.put_line('invalid operation');
when others then
raise;
end write_file;
==================================================
And if you want it on the client, you can use the one given below.
declare
file_handle text_io.file_type;
l_file_name varchar2(200) := 'C:\Folder1\Filename.txt';
l_data varchar2(2000); -- data to write, filled by the caller
begin
file_handle := text_io.fopen(l_file_name, 'w'); -- mode should be 'a' instead of 'w' if you want to append
text_io.put_line(file_handle, l_data);
text_io.fclose(file_handle);
end;
For any help, please write me.
Khurram. -
Extract records to Desktop in flat file
How to extract records to Desktop in a flat file?
I am not able to do it from the Syndicator; I tried many options. Can someone help?
I want to extract the records from MDM to the Desktop in Excel/flat-file format, whether from the Data Manager or the Syndicator.
Immediate requirement.
Hi Shifali,
I want to extract the records from MDM to the Desktop in Excel/flat-file format, whether from the Data Manager or the Syndicator.
- The MDM Syndicator is the tool to export data from MDM.
- Data in the MDM repository can be viewed and managed in the MDM Data Manager.
- Whatever data is present in the Data Manager can be exported using the MDM Syndicator, using your search criteria.
How to Extract records to Desktop in flat file.
I am not able to do it from the Syndicator; I tried many options. Can someone help?
- Master data in an MDM repository can be syndicated using the MDM Syndicator in 2 main formats: Flat and XML.
- For local syndication to your desktop in flat format you need to follow the steps below:
1) Create destination items manually in the Syndicator under the Destination Items tab.
2) Map these destination items to the source fields, which are your MDM repository fields.
3) Use a search criterion if you want to filter records.
4) Click the Export button in the Syndicator.
5) It will ask for the output file name and format.
6) Give a name and select the format from the dropdown in the window (it can be text, Excel, Access, anything).
7) Your output file is syndicated to your local desktop.
Follow the below link for Flat syndication:
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/60ebecdf-4ebe-2a10-cf9f-830906c73866 (Flat Syndication)
Follow the below link for XML syndication:
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e04fbbb3-6fad-2a10-3699-fbb40e51ad79 (XML Syndication)
This will give you the clear distinction between the two.
Hope It Helped
Thanks & Regards
Simona Pinto -
Getting list of files from Unix directory including files from sub-directories
Hi All,
I am trying to use Fm 'SUBST_GET_FILE_LIST' and
'RZL_READ_DIR_LOCAL' for getting a list of all files in a Unix directory including files from sub-directories.
In the first case I am getting an Exception called 'Access Error'
and when I use the 2nd FM, I am not getting any output.
Is there a special way to use these FMs.
Kindly help.
Regds,
Shweta
[url http://java.sun.com/developer/JDCTechTips/2003/tt0122.html#1]READING FILES FROM JAVA ARCHIVES (JARS)[/url]
-
Will the information be recorded in the alert.log file? ----- No.168
Will the information about the loss of a temporary file be recorded in the alert.log file?
Yes, because when your database starts it needs to "mount" a tablespace with temporary files (the case of the "TEMP" tablespace). But don't worry about the loss of this tablespace, because it doesn't contain anything when the database starts.
-
Convert the Database records to a standard XML file format?
Hi,
I want to convert the database records to a standard XML file
format which includes the schema name, table name, field name,
and field value. I am using Oracle 8.1.7. Is there any option?
Please help me. Thanks in advance.
You could put the files somewhere and I can export them as QuickTime. Or you could find anyone you know who has Director. Or do the 30-day trial download.
Another approach would be to play the DCR in a browser window, and do a screen recording. But that’s unlikely to give the perfect frame rate that you would get by exporting from Director. -
How to hard code a records at the end of file
Hello experts,
I am doing an IDoc-to-File scenario. I need to append 4 records at the end of the file each time it is created in XI (these four records need to be hard-coded in the output file).
For example, with IDoc to File, the output file looks like:
1 name number sal
2 name number sal
3 name number sal
But my expected file is
1 name number sal
2 name number sal
3 name number sal
1 abc 123 2345676
2 bdc 234 11111111
3 aaa 123 11111111
4 fffff 567 33333333
Every time these 4 records are the same. How can I hard-code the last 4 records in XI?
Could you please help me with this? Thanks in advance.
Regards,
KP
Hi,
Hope you are doing File Content Conversion in your receiver adapter.
Let's assume that you have a structure like below:
-Record
-RecordSet
X
Y
Z
Add one more Recordset like for Trailer records
-Record
-RecordSet
X
Y
Z
-TrailerRecordSet
Trailer
In your mapping, hard-code the Trailer node with the four lines (you can use the '\n' character to represent new lines).
In your receiver communication channel, add the TrailerRecordSet also:
RecordSetStructure: RecordSet,TrailerRecordSet
TrailerRecordSet.fieldSeparator - '0'
TrailerRecordSet.endSeparator - '0'
RecordSet Parameters....
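The same effect, appending four fixed trailer lines after the payload, can be sketched outside the adapter in plain Python; the file path is a made-up placeholder, while the trailer values are the ones from the example above:

```python
TRAILER = [
    "1 abc 123 2345676",
    "2 bdc 234 11111111",
    "3 aaa 123 11111111",
    "4 fffff 567 33333333",
]

def append_trailer(path, trailer=TRAILER):
    """Append the fixed trailer records to an already-written output file."""
    with open(path, "a") as f:
        for line in trailer:
            f.write(line + "\n")
```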
Regards,
Sudharshan -
Updated record should come in txt file
Hi Friends,
My requirement is like this: for any change made in the MARA, MARD, MBEW, MAKT, VBAK, VBAP, VBRK, and VBRP tables, the newly created data should come in a .txt file on the application server.
I have already developed a program for that. It downloads data in 3-hour slots, running in the background; whatever changes were made during those hours, it downloads.
Now my requirement has changed: the data should come instantly in the .txt file on the app server, e.g. when a newly created record is saved in a database table, at the same time that record should come in the .txt file in the proper format.
is it possible? please let me know.
Thanks in advance,
Parag
Hi Parag,
To obtain changes, you can get the details from the tables CDHDR and CDPOS.
You also had questions about performance and so on, so here are some details.
- When you flag a data element for change documents (checked), it is ONLY a marker that allows registration of this field's changes into CDHDR and CDPOS. The actual control is done at table level in the technical settings (transaction SE11 with the table name, then the "Technical Settings" push button or Ctrl+Shift+F9). There you will find the flag "Log data changes".
Within the CDHDR file and CDPOS file a field OBJECTCLAS is used. Only for existing OBJECTCLAS values the changes are logged.
- Now obviously this is the trick for standard SAP (as Subramanian has already pointed out, you can find "OBJECTCLAS" values with transaction SCDO). If you want to know how to create your own "OBJECTCLAS" values, with working logging on your own designed fields, follow Subramanian's suggestion and read the documentation.
Now to your questions:
You gave some tables you need to track changes (and now also for initial creation) like MARA, MARD, MAKT and others.
To get changes for these tables use the following "OBJECTCLAS" values:
- MATERIAL (tables MARA, MARC, MARD, MBEW, MFHM, MLGN, MLGT, MPGD, MPOP and MVKE). By the way, this object will be replaced by MATERIAL_N (available from release 4.6x).
- VERKBELEG (Tables VBAK, VBAP, VBEP, VBKD, VBLB, VBPA, VBPA2 and VBUK).
To collect changes (suggested by Andreas) you could use function module CHANGEDOCUMENT_READ. This is very useful if archiving is also active for the objects you need to track and your changes are scattered through time, but for your problem it is better to approach the log data directly.
1. First select the main change documents from the CDHDR table for a given "OBJECTCLAS" and "OBJECTID". Here you can use additional filtering on date (field UDATE) and time (field UTIME). Even filtering on a specific transaction is possible (field TCODE).
This gives you a number of change documents (field CHANGENR).
2a. Secondly, select the specific field changes from table CDPOS using the found fields from CDHDR, and additionally fill TABNAME with the specific table and, if required, FNAME with the specific field name.
2b. Since in your case the values will not be known (you need to track changes), you have to be very careful in your selections. If you track the object MATERIAL or MATERIAL_N, you best loop over the MARA table and, for each MATNR, fill the OBJECTID field of CDHDR with this MATNR value.
3. In order to find NEWLY created items you need to check the CHANGE_IND flag. When 'I' it is a new insert; when 'U' it is an update. Note this rule applies ONLY to key fields, since SAP first creates the key record (CHANGE_IND = 'I') and then the other fields (CHANGE_IND = 'U').
Finally the warning given by Andreas (runtime increases - you MUST select with OBJECTCLAS and OBJECTID) is very important. Not supplying OBJECTID will have a very heavy impact on the runtime.
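A sketch of steps 1-3 in Python over mock rows (the real selection would be ABAP Open SQL; the table and field names follow the post, while the sample data in the usage below is invented):

```python
def find_changes(cdhdr, cdpos, objectclas, objectid, tabname):
    """Step 1: pick the change documents for one object from CDHDR;
    step 2: pick their field changes from CDPOS for one table;
    step 3: split key inserts (new records) from updates via CHANGE_IND."""
    changenrs = {h["CHANGENR"] for h in cdhdr
                 if h["OBJECTCLAS"] == objectclas and h["OBJECTID"] == objectid}
    rows = [p for p in cdpos
            if p["CHANGENR"] in changenrs and p["TABNAME"] == tabname]
    inserts = [p for p in rows if p["CHANGE_IND"] == "I"]
    updates = [p for p in rows if p["CHANGE_IND"] == "U"]
    return inserts, updates
```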
Hope this gives you some clues on how to approach your problem.
Regards,
Rob. -
Does IXOS record the name of the file uploaded in a R3 table?
Does IXOS record the name of the file uploaded in a R3 table?
Hi Christiana,
When you perform a file upload to IXOS, the filename is not recorded. The ArchiveLink tables (TOA*) only keep track of the document type of the file.
Hope this helps.
thanks,
Jess -
Block size in tt for writing data to transaction log and checkpoint files
Hello,
what block size is TimesTen using when its writing data to transaction log and checkpoint files? Does it use some fixed block size during filesystem writes?Although in theory logging can write 2 KB blocks in almost all circumstances it will write 4 KB or larger so yes a filesystem with a 4 KB block size is fine for both checkpointing and logging.
Chris -
Makepkg: Record PKGBUILD sources in .PKGINFO files for maintenance
To me, it seems like a good idea to record all the source files listed in PKGBUILDs in their accompanying .PKGINFO files; then, should the user have configured the $SRCDEST variable in his /etc/makepkg.conf, this would allow old source files to be (optionally) automatically removed when `pacman -Sc' is run or when the package is upgraded or uninstalled. Such a feature would save (me) a lot of time and effort manually sorting through and cleaning them.
Do you "concur"? ( )
Of course, for this feature to be really useful, you would have to compile from source often.This script will parse all the PKGBUILD's in your $ABSROOT directory, collect all their sources into an array, and interactively remove each package NOT in the array from your $SRCDEST, although recording each pkg's source files into its .PKGINFO file would be SO much easier, faster, simpler, and safer (as no unwanted code -- such as little tid bits outside the build() method of PKGBUILD's -- is executed while the sources are being collected).
#!/bin/bash
. /etc/makepkg.conf
. /etc/abs.conf
if [[ "${SRCDEST}" != "" && "${ABSROOT}" != "" && -d "${SRCDEST}" && -d "${ABSROOT}" ]]; then
# Holds the column width of the current terminal window
COLS=$(tput cols)
# Create an empty row of the width of the current terminal window
#+ which will be used to erase the current row.
for sp in $(seq $COLS); do
empty_row="${empty_row} "
done
# Array to hold the sources
sources=()
# Array to hold the files to remove
remove_files=()
echo "Collecting sources..."
for PKGBUILD in $(find "${ABSROOT}" -type f -name PKGBUILD); do
echo -ne "${empty_row}\r${PKGBUILD:0:$COLS}\r"
. "${PKGBUILD}" &> /dev/null # Silence is golden
sources=(${sources[@]} ${source[@]##*/})
done
# Sort and prune the files
sources=($(for src_file in ${sources[@]}; do echo "${src_file}"; done | sort | uniq))
echo -e "${empty_row}\rExamining ${SRCDEST}..."
for src_file in $(find "${SRCDEST}" -type f | sort); do
# Show the status
echo -ne "${empty_row}\r${src_file:0:$COLS}\r"
# Copy the basename of the current source file for comparisons
current=${src_file##*/}
i=0
j=${#sources[@]}
k=$(( (i + j) / 2 ))
# Perform a binary search for the current file
for (( c = 0; c < ${#sources[@]}; c++ )); do
let "k = (i + j) / 2"
if [[ "${sources[k]}" < "${current}" ]]; then
let "i = k + 1"
elif [[ "${sources[k]}" > "${current}" ]]; then
let "j = k - 1"
else
break
fi
done
# If the file at ${sources[k]} isn't the one we're looking for,
#+ check the element immediately before and after it.
if [[ "${sources[k]}" < "${current}" ]]; then
# Bash will let me slide when I try to print an element beyond its indices ...
let "k += 1"
elif [[ "${sources[k]}" > "${current}" && $k > 0 ]]; then
# ... but complains when I try to print an element at an index < 0
let "k -= 1"
fi
# If a match is not found ...
if [[ "${sources[k]}" == "${current}" ]]; then
# Since both arrays are sorted, I can remove all the elements
#+ in ${sources[@]} up to index k.
sources=(${sources[@]:k + 1})
# Proceed to the next iteration
continue
fi
# Else, add the file to the list of those to be removed
remove_files=(${remove_files[@]} ${src_file})
done
echo -e "${empty_row}\rFound ${#remove_files[@]} files to remove:"
if (( ${#remove_files[@]} )); then
for index in $(seq ${#remove_files[@]}); do
echo " ${index}) ${remove_files[index - 1]}"
done
echo -n | read # Clear the buffer (I had some issues)
echo -n "Would you like to remove all these? [Y|n|c]"
read ans # or `read -n 1 ans' if you prefer
case "$ans" in
""|[Yy]|[Yy][Ee][Ss])
for f2r in ${remove_files[@]}; do
rm "$f2r" || echo "cannot remove $f2r"
done
;;
[Cc]|[Cc][Hh][Oo][Ss][Ee])
for f2r in ${remove_files[@]}; do
echo -n "${f2r}? [Y|n] "
echo -n | read # Clear the buffer, again
read ans
if [[ "$ans" == "" || "$ans" == [Yy] || "$ans" == [Yy][Ee][Ss] ]]; then
rm "$f2r" || echo "cannot remove $f2r"
fi
done
esac
fi
elif [[ "${SRCDEST}" == "" || ! -d "${SRCDEST}" ]]; then
echo "Your \$SRCDEST variable is invalid" 1>&2
echo "Be sure it's set correctly in your \`/etc/makepkg.conf'" 1>&2
exit 1
else
echo "Your \$ABSROOT variable is invalid" 1>&2
echo "Be sure you have \`abs' installed and that \`/etc/abs.conf' exists" 1>&2
exit 1
fi
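The core idea of the script, a set difference between the sources referenced by PKGBUILDs and the files sitting in $SRCDEST, can be sketched far more simply once the two lists are collected; this Python version is illustrative only and replaces the bash binary search with a set lookup:

```python
def stale_sources(pkgbuild_sources, srcdest_files):
    """Return the files in SRCDEST that no PKGBUILD references, sorted."""
    wanted = set(pkgbuild_sources)   # O(1) membership test per file
    return sorted(f for f in srcdest_files if f not in wanted)
```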
(06/02/2009) If we depended on many little scripts to handle our package management, what would then be the purpose of package managers?
(06/02/2009) Minor edit -> changed `echo' to `echo -n' on line 90 of my script (superficial modification).
Last edited by deltaecho (2009-06-02 21:42:54) -
Functions to upload UNIX tab-delimited file
Please tell me a list of function modules to upload a Unix tab-delimited file into a database table.
HI,
data : itab type standard table of zcbu with header line.
ld_file = p_infile.
OPEN DATASET ld_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc NE 0.
ELSE.
DO.
CLEAR: wa_string, wa_uploadtxt.
READ DATASET ld_file INTO wa_string.
IF sy-subrc NE 0.
EXIT.
ELSE.
SPLIT wa_string AT cl_abap_char_utilities=>horizontal_tab
INTO wa_uploadtxt-name1
wa_uploadtxt-name2
wa_uploadtxt-age.
MOVE-CORRESPONDING wa_uploadtxt TO wa_upload.
APPEND wa_upload TO it_record.
ENDIF.
ENDDO.
CLOSE DATASET ld_file.
ENDIF.
loop at it_record.
itab-field1 = it_record-field1.
itab-field2 = it_record-field2.
append itab.
endloop.
*-- Now update the table
modify ZCBU from table itab. -
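The same read, split-at-tab, append flow as a minimal Python sketch; the three field names and the file layout are assumptions mirroring the ABAP above:

```python
def load_tab_delimited(path):
    """Read a tab-delimited file into a list of (name1, name2, age) rows."""
    rows = []
    with open(path) as f:
        for line in f:
            name1, name2, age = line.rstrip("\n").split("\t")
            rows.append((name1, name2, age))
    return rows
```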
Format of records in the MTA msglog file?
Hi, All.
Where can I read about the format of records in the MTA msglog file?
I need this to parse the log file.
Serg
Great!
Thank you!
Serg
> Hi Serg.,
>
> You might find what you're looking for in "Details of GroupWise Message
> Log files", document ID = 10087320.
>
> Cheers,
>
> Andy
>
> Serg wrote:
>
>> Hi, All.
>> Where can I read about the format of records in the MTA msglog file?
>
>> I need this to parse the log file.
>
>> Serg -
Last week I found records of purchases for 3 files on iTunes, and money left my credit card. I did not make such purchases. What the **** is going on? Is iTunes not safe?
Contact iTunes support at the link below.
https://ssl.apple.com/emea/support/itunes/contact.html