Transferring data to a flat file with a record length greater than 255 bytes?
Is there a way to do this? At the end of the month, my dataset will reach a record length of anywhere between 271 and 335 bytes. Even though I have the transfer field set up with a length of 512, I only get 255 characters' worth of data when I pull the flat file in from the server.
Has anyone discovered a way to handle this? I cannot break the record up into blocks of 255; the TRANSFER has to be able to handle a length greater than 255.
Many Thanks!
Tavares L. Phillips
OK - according to OSS note 626010:
Short text "TRANSFER f TO dataset" ignores LENGTH addition
Responsible SAP AG
Component BC-ABA-LA
Syntax, Compiler, Runtime
Long text
Symptom
In rare cases, the "TRANSFER f TO dataset" statement ignores the LENGTH
addition.
Other terms
DATASET, FILE
Reason and Prerequisites
This is caused by a kernel error.
Solution
The error is corrected for SAP_BASIS 6.20 using kernel patch 848.
Valid releases
Software Component: SAP_BASIS (SAP Basis component)
Release: 610 to 620
It's an old note but...?
Rob
Similar Messages
-
Flat file with fixed lengths to XI 3.0 using a Central File Adapter - Error
Hi
According to the following link
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
In Adapter Monitor I got the following error,
In sender Adapter,
Last message processing started 23:47:35 2008-10-25, Error: Conversion of complete file content to XML format failed around position 0 with java.lang.Exception: ERROR converting document line no. 1 according to structure 'Substr':java.lang.Exception: Consistency error: field(s) missing - specify 'lastFieldsOptional' parameter to allow this
last retry interval started 23:47:35 2008-10-25
length 15,000 secs
Can someone help me out?
Thanks
Ram

From the blog you referenced:
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
Go to step 4, "Additional parameters", and add as the last entry:
<recordset structure>.lastFieldsOptional Yes
e.g.
Substr.lastFieldsOptional Yes -
Send a flat file with fixed lengths to XI 3.0 using a Central File Adapter?
Hello,
I'm wondering if someone has experience setting up conversion for different record structures. The example shown,
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter,
(in a great way) only pictures one kind of structure.
How should it be done if the file contains
10Mat1
20100PCS
The first record structure has columns
ID(2),Material(10)
The second record structure has columns
ID(2), Quantity(3), Unit of measure(3)
Brgds
Kalle
Message was edited by: Karl Bergstrom

The configuration would be as follows:
Content Conversion Parameters:
Document Name: <your message type name>
Document Namespace: <your message type namespace>
Document Offset: <leave empty>
Recordset Name: <any name>
Recordset Namespace: <leave empty>
Recordset Structure: row1,*,row2,*
Recordset Sequence: any
Recordsets per Message: *
Key Field Name: ID
Key Field Type: String
Parameters for Recordset Structures:
row1.fieldNames ID,Material
row1.fieldFixedLengths 2,10
row1.keyFieldValue 10
row2.fieldNames ID,Quantity,UOM
row2.fieldFixedLengths 2,3,3
row2.keyFieldValue 20
Instead of row1 and row2 you can choose any name.
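The way the adapter applies these parameters can be sketched in a short, language-neutral way. This is a Python sketch of the key-field dispatch logic only, not the adapter's actual code; the widths and key values come from the parameters above.

```python
# Dispatch table mirroring the Recordset Structure parameters above:
# keyFieldValue -> list of (fieldName, fieldFixedLength) pairs.
ROW_TYPES = {
    "10": [("ID", 2), ("Material", 10)],              # row1.fieldFixedLengths 2,10
    "20": [("ID", 2), ("Quantity", 3), ("UOM", 3)],   # row2.fieldFixedLengths 2,3,3
}

def parse_line(line):
    key = line[:2]                # Key Field Name: ID, the first 2 characters
    record, pos = {}, 0
    for name, width in ROW_TYPES[key]:
        record[name] = line[pos:pos + width].rstrip()
        pos += width
    return record
```

Each incoming line is routed to the matching row description by its leading ID, then sliced by the fixed lengths.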
Regards
Stefan -
Interface output file: tab delimited vs. flat file with fixed length
hey guys,
any idea on the difference between the two file types: flat file with fixed length vs. tab delimited file?
thanks

Tab Delimited:
Two fields are separated by a TAB.
e.g. SANJAY SINGH
The first field is First Name and the second is Surname.
The Nth field will come after N - 1 tabs.
Fixed Length:
Every field has a fixed starting position and length.
e.g. SANJAY SINGH
Here the first field starts at position 1 with length 10, and the second field starts at position 11 with length 10.
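Both layouts can be sketched side by side. This is a language-neutral Python sketch; the padding in the fixed-length example is hypothetical (10 characters per field).

```python
# The same two fields, stored both ways.
tab_delimited = "SANJAY\tSINGH"          # fields end at each TAB
fixed_length  = "SANJAY    SINGH     "   # field 1 = cols 1-10, field 2 = cols 11-20

# Tab delimited: split on the separator; field widths may vary.
first, last = tab_delimited.split("\t")

# Fixed length: slice by position; width never varies, values are padded.
first_f = fixed_length[0:10].rstrip()
last_f  = fixed_length[10:20].rstrip()

assert (first, last) == (first_f, last_f) == ("SANJAY", "SINGH")
```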
Fixed Length -> The length of each field is fixed, while in tab delimited the length of a field is not fixed, but we know it ends when the separator (Tab) is encountered. -
Uploading excel file into internal table with field length more than 255
I am trying to upload the data from an excel file through function module 'TEXT_CONVERT_XLS_TO_SAP'.
I have tested by changing the field type to string and to char2000.
But it is accepting only 255 chars from the cell content.
How to get the total content of the field if it is more than 255 characters?

hi,
you can use any of the following procedures:
For uploading data from excel to internal table refer standard report RC1TCG3Z in se38 :
or
You can use the FM 'ALSM_EXCEL_TO_INTERNAL_TABLE' itself. Please check if you have done it this way. But this FM can be a little time-consuming if the Excel file has large data, so you can use the FM 'GUI_UPLOAD'.
* Assumed declarations (not shown in the original post); wa_file_wicopa/
* gt_file_wicopa are a program-specific structure with fields serial,
* blart, bldat, budat.
DATA: lt_data  TYPE STANDARD TABLE OF alsmex_tabline,
      lwa_data TYPE alsmex_tabline,
      l_b1     TYPE i VALUE 1,
      l_c1     TYPE i VALUE 1,
      l_b2     TYPE i VALUE 256,
      l_c2     TYPE i VALUE 65536.

CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
  EXPORTING
    filename                = p_file1
    i_begin_col             = l_b1
    i_begin_row             = l_c1
    i_end_col               = l_b2
    i_end_row               = l_c2
  TABLES
    intern                  = lt_data
  EXCEPTIONS
    inconsistent_parameters = 1
    upload_ole              = 2
    OTHERS                  = 3.
IF sy-subrc <> 0.
  MESSAGE e018 DISPLAY LIKE 'i'. "message class of the calling program
ENDIF.

*---Removing the heading row from the file
IF NOT lt_data[] IS INITIAL.
  DELETE lt_data WHERE row = '0001'.
*---Inserting the data from the file into the internal table
  LOOP AT lt_data INTO lwa_data.
    CASE lwa_data-col.
      WHEN 1.
        wa_file_wicopa-serial = lwa_data-value.
      WHEN 2.
        wa_file_wicopa-blart = lwa_data-value.
      WHEN 3.
        wa_file_wicopa-bldat = lwa_data-value.
      WHEN 4.
        wa_file_wicopa-budat = lwa_data-value.
    ENDCASE.
    AT END OF row.
      APPEND wa_file_wicopa TO gt_file_wicopa.
      CLEAR wa_file_wicopa.
    ENDAT.
    CLEAR lwa_data.
  ENDLOOP.
ENDIF.
or
DATA: it_test TYPE STANDARD TABLE OF alsmex_tabline WITH HEADER LINE.
DATA: v_start_col TYPE i VALUE 1,
      v_start_row TYPE i VALUE 1,
      v_end_col   TYPE i VALUE 256,
      v_end_row   TYPE i VALUE 65536,
      v_text      TYPE repti.

* Function module to upload values from Excel into the internal table
CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
  EXPORTING
    filename                = p_file
    i_begin_col             = v_start_col
    i_begin_row             = v_start_row
    i_end_col               = v_end_col
    i_end_row               = v_end_row
  TABLES
    intern                  = it_test
  EXCEPTIONS
    inconsistent_parameters = 1
    upload_ole              = 2
    OTHERS                  = 3.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
Or
You can use FM 'TEXT_CONVERT_XLS_TO_SAP'.

DATA: i_raw TYPE truxs_t_text_data.

CALL FUNCTION 'TEXT_CONVERT_XLS_TO_SAP'
  EXPORTING
    i_field_seperator    = 'X'
    i_tab_raw_data       = i_raw
    i_filename           = p_path
  TABLES
    i_tab_converted_data = itab
  EXCEPTIONS
    conversion_failed    = 1
    OTHERS               = 2.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.

Hope it helps.
regards
rahul -
Email attachment (Excel CSV file) greater than 255 bytes
I have read the postings on this topic - Thomas Jung - you appear to be the expert on this topic. My concern is breaking up the file into chunks and then being sure SAP knows how to re-assemble them when transferred. Could anyone give me some sample code on this and the FM's required (I know the FM is SO_NEW_DOCUMENT_ATT_SEND_API1 to send it and it works great except for the 255 limit). I am under the gun to get this resolved ASAP. I know SAP can handle it - I go into SAP Office and have no trouble e-mailing large attachments to external e-mail accounts - actually I have been told to run debug when I do this and see how SAP handles it (a long debug session I am sure). Any help would be most appreciated and get mucho reward points. Thanks!!
I just wrote a huge, long post on this but got a Java Server error when I hit Post Message. Arg!
Let me try to re-summarize. You shouldn't be concerned about how SAP will re-assemble the SOLI structure. If, when you convert your data to CSV format, you properly insert newline characters, these will be the places Excel uses to parse the file. You see, there is an assumption that the end of a line in an ABAP internal table always means a new line in a downloaded file. This is probably because all the SAP-supplied download functions do this for you by default; what they are doing is adding this newline character for you.
Now how you get to the end point of what you want depends upon a few things. What release are you on? Do you already have your data in CSV (with newlines), or do you need to get it there from an internal table? Is your data in a string?
If I knew the answers to these questions I might be able to help out. I have a code sample for 620 and higher that dynamically converts your internal table to a tab-delimited string (also good for download to Excel, or easy to modify to CSV). Once in the string, it is quite easy to use the SAP function module SCMS_STRING_TO_FTEXT to get you to the SOLI main structure.
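The point about newlines can be sketched language-neutrally. This Python sketch uses made-up CSV data, not SAP code: chunking the text into 255-character SOLI-style rows is lossless as long as the newlines are already embedded in the text.

```python
# Build a CSV string that already contains its own line breaks.
csv_text = "\r\n".join(
    ",".join(f"r{r}c{c}" for c in range(40)) for r in range(20)
)

# Cut it into 255-character pieces, as the SOLI attachment table requires;
# the cut points fall wherever they fall, even mid-value.
chunks = [csv_text[i:i + 255] for i in range(0, len(csv_text), 255)]
assert all(len(c) <= 255 for c in chunks)

# Receiver-side reassembly is plain concatenation: the embedded \r\n,
# not the chunk boundaries, determine where Excel sees line breaks.
assert "".join(chunks) == csv_text
```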
Let me know if you need more details. -
How to handle flat file with variable delimiters in the file sender adapter
Hi friends,
I have some flat files on the FTP server and hope to poll them into XI, but before processing in XI I hope to do some content conversion in the file sender adapter. According to the general solution, I just need to specify the field names, field separator, end separator, etc. But the question is:
The fields in the test data may have a different number of delimiters (,), for example:
ORD01,,,Z4XS,6100001746,,,,,2,1
OBJ01,,,,,,,,,,4,3
Some fields only have 1 ',' as the delimiter, but some of them have multiple ','.
How can I handle this in the content conversion?
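As a quick language-neutral sketch (Python) of how such lines tokenize: every ',' is a separator, and consecutive commas simply produce empty fields rather than a variable delimiter.

```python
# The two sample lines from above, verbatim.
line1 = "ORD01,,,Z4XS,6100001746,,,,,2,1"
line2 = "OBJ01,,,,,,,,,,4,3"

f1 = line1.split(",")
f2 = line2.split(",")

# Consecutive commas show up as empty strings, not as a "bigger" delimiter.
assert f1[0] == "ORD01" and f1[1] == "" and f1[3] == "Z4XS"
assert f2[0] == "OBJ01" and f2[-2:] == ["4", "3"]
```

So each record type has a fixed number of positions; the empty fields just need to be declared in the conversion.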
Regards,
Bean

Hi Bing,
Please refer to the following blogs on file content conversion; they will give you an idea:
/people/venkat.donela/blog/2005/03/02/introduction-to-simplefile-xi-filescenario-and-complete-walk-through-for-starterspart1
/people/venkat.donela/blog/2005/03/03/introduction-to-simple-file-xi-filescenario-and-complete-walk-through-for-starterspart2
/people/arpit.seth/blog/2005/06/02/file-receiver-with-content-conversion
/people/anish.abraham2/blog/2005/06/08/content-conversion-patternrandom-content-in-input-file
/people/shabarish.vijayakumar/blog/2005/08/17/nab-the-tab-file-adapter
/people/venkat.donela/blog/2005/06/08/how-to-send-a-flat-file-with-various-field-lengths-and-variable-substructures-to-xi-30
/people/jeyakumar.muthu2/blog/2005/11/29/file-content-conversion-for-unequal-number-of-columns
/people/shabarish.vijayakumar/blog/2006/02/27/content-conversion-the-key-field-problem
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
http://help.sap.com/saphelp_nw04/helpdata/en/d2/bab440c97f3716e10000000a155106/content.htm
Regards,
Vinod. -
How to generate blank spaces at end of the record in a flat file with fixed
Hi,
I am generating a flat file with fixed length.
In my ABAP program I am able to see the spaces at the end of the records in debug, but when downloaded to the application server I am not able to see those spaces.
How can i generate blank spaces at the end of the record in a flat file?
Please update
Thank you

How are you downloading the file? And how are you looking at the file on the application server?
Can you provide snippets of your code?
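In the meantime, a common cause is that the records are not padded before being written. A language-neutral Python sketch of the fix, with a hypothetical 20-byte record layout; in ABAP the analogous step is transferring the record with an explicit length so the trailing blanks are part of the written data.

```python
import io

# Pad every record to the full fixed length BEFORE writing, so the trailing
# blanks are physically in the output (many download helpers trim them
# otherwise).
RECLEN = 20
records = ["AB123", "CD4"]

out = io.StringIO()                       # stands in for the output file
for rec in records:
    out.write(rec.ljust(RECLEN) + "\n")   # ljust keeps the trailing spaces

lines = out.getvalue().split("\n")[:-1]
assert all(len(line) == RECLEN for line in lines)  # blanks survived
```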
Cheers
John -
Flat file with chinese characters
Hi all,
I am working on a solution to map a file with this structure (not xml):
//.comment 1
0~keyh2..hn~
0~it1it2..itn~key
0~it1it2..itn~key
//.comment 2
0~keyh2..hn~
0~it1it2..itn~key
0~it1it2..itn~key
0~it1it2..itn~key
This is my conversion setup
recordset.structure = comment,1,header,1,item,*
recordset.sequence = variable
keyFieldName = key
comment.fieldSeparator = .
comment.fieldStructure = key.comment
comment.keyFieldValue = //
header.fieldSeparator = ~~
header.beginSeparator = 0~~
header.endSeparator = ~~
header.fieldStructure = 0~keyh2..hn~
header.keyFieldValue = 0
item.fieldSeparator = ~~
item.beginSeparator = 0~~
item.fieldStructure = 0~it1it2..itn~key
item.keyFieldValue = 1
The problem now is that this file comes from a Chinese system and is provided with Chinese characters (looks like 2 bytes per letter). When I set the character encoding to ISO-2022, the adapter throws the exception:
java.io.UnsupportedEncodingException
when I try to process it without passing any encoding, the exception is:
more elements in file csv structure than field names specified
Is there anyone who can help me with this?
br
Dawid

Hi,
I think something is wrong with the file content conversion parameters.
You can avoid comment 1 and comment 2 by using the parameter Document Offset. Follow this link for that:
<a href="http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm">http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm</a>.
I think you didn't specify the field names in the File Content Conversion parameters.
Follow these two weblogs for the File Content Conversion parameters:
<a href="/people/venkat.donela/blog/2005/06/08/how-to-send-a-flat-file-with-various-field-lengths-and-variable-substructures-to-xi-30">/people/venkat.donela/blog/2005/06/08/how-to-send-a-flat-file-with-various-field-lengths-and-variable-substructures-to-xi-30</a>
<a href="/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter">/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter</a>
Hope it helps.
Regards,
JE -
Data loading from flat file to cube using bw3.5
Hi Experts,
Kindly give me the detailed steps with screens about Data loading from flat file to cube using bw3.5
Please!

Hi,
Procedure
You are in the Data Warehousing Workbench in the DataSource tree.
1. Select the application components in which you want to create the DataSource and choose Create DataSource.
2. On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
The DataSource maintenance screen appears.
3. Go to the General tab page.
a. Enter descriptions for the DataSource (short, medium, long).
b. As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
c. Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
4. Go to the Extraction tab page.
a. Define the delta process for the DataSource.
b. Specify whether you want the DataSource to support direct access to data.
c. Real-time data acquisition is not supported for data transfer from files.
d. Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
Choose Properties if you want to display the general adapter properties.
e. Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
f. Depending on the adapter and the file to be loaded, make further settings.
■ For binary files:
Specify the character record settings for the data that you want to transfer.
■ Text-type files:
Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
Specify the character record settings for the data that you want to transfer.
For ASCII files:
If you are loading data from an ASCII file, the data is requested with a fixed data record length.
For CSV files:
If you are loading data from an Excel CSV file, specify the data separator and the escape character.
Specify the separator that your file uses to divide the fields in the Data Separator field.
If the data separator character is part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
You chose the ; character as the data separator. However, your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45".
In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
g. Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
h. Make the settings for currency conversion, as required.
i. Make any further settings that are dependent on your selection, as required.
5. Go to the Proposal tab page.
This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
a. Specify the number of data records that you want to load and choose Upload Sample Data.
The data is displayed in the upper area of the tab page in the format of your file.
The system displays the proposal for the field list in the lower area of the tab page.
b. In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
6. Go to the Fields tab page.
Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
a. To define a field, choose Insert Row and specify a field name.
b. Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
c. Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
d. Change the data type of the field if required.
e. Specify the key fields of the DataSource.
These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
f. Specify whether lowercase is supported.
g. Specify whether the source provides the data in the internal or external format.
h. If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
i. If required, specify a conversion routine that converts data from an external format into an internal format.
j. Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
k. Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
l. Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
7. Check, save and activate the DataSource.
8. Go to the Preview tab page.
If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
This function allows you to check whether the data formats and data are correct.
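The separator/escape-character rule described in step 4f can be sketched with a standard CSV parser. This Python sketch uses made-up values; BI's "escape character" behaves like a quote character enclosing the whole value here.

```python
import csv
import io

# A value containing the ';' separator is enclosed in the escape character
# ("), so 12;45 survives as one field; unenclosed values pass through as-is.
raw = io.StringIO('"12;45";100\n7;200\n')
rows = list(csv.reader(raw, delimiter=";", quotechar='"'))

assert rows == [["12;45", "100"], ["7", "200"]]
```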
For More Info: http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm -
Error while uploading data from a flat file to the hierarchy
Hi guys,
after I upload data from a flat file to the hierarchy, I get an error message "Please select a valid info object". I am loading data using the PSA and have activated all external chars, but still get the problem... some help on this please.
regards
Sri

There is no relation between the InfoObject name in the flat file and the InfoObject name on the BW side.
Please check the objects in BW, their lengths and types, and check your flat file to see whether you have the same types there.
Now check the sequence of the objects in the transfer rules and activate them.
There you go. -
How to create a flat file with fixed-length records
I need help to export an Oracle table to a flat file with fixed length and without column separators.
The fixed length is the most important requirement.
My table have 50 columns with varchar, date and number .
Date and number columns may be empty, null, or have values.
Thanks a lot for any help.
[email protected]

Hi,
You can use this trick:
SQL> desc t
 Name                 Null?    Type
 NAME                          VARCHAR2(20)
 SEX                           VARCHAR2(1)

SQL> SELECT LENGTH(LPAD(NAME,20,' ')||LPAD(SEX,1,' ')), LPAD(NAME,20,' ')||LPAD(SEX,1,' ') FROM T;

LENGTH(LPAD(NAME,20,'')||LPAD(SEX,1,'')) LPAD(NAME,20,'')||LPA
---------------------------------------- ---------------------
                                      21                    aF
                                      21                    BM
                                      21                    CF
                                      21                    DM
4 rows selected.

SQL> SELECT * FROM t;

NAME                 S
-------------------- -
a                    F
B                    M
C                    F
D                    M
4 rows selected.

Regards -
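The LPAD trick above can be sketched outside SQL as well. This Python sketch uses the same hypothetical VARCHAR2(20)/VARCHAR2(1) layout: pad each column to its fixed width, concatenate, and every line comes out the same length with no separator needed.

```python
# Sample rows matching the t table above (name, sex).
rows = [("a", "F"), ("B", "M")]

# LPAD pads on the left, i.e. right-justifies within the fixed width.
lines = [name.rjust(20) + sex.rjust(1) for name, sex in rows]

assert all(len(line) == 21 for line in lines)  # fixed record length 20 + 1
assert lines[0].endswith("aF")
```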
What is the best way to load and convert data from a flat file?
Hi,
I want to load data from a flat file, convert dates, numbers and some fields with custom logic (e.g. 0,1 into N,Y) to the correct format.
The rows where all to_number, to_date and custom conversions succeed should go into table STG_OK. If some conversion fails (due to an illegal format in the flat file), those rows (where the conversion raises some exception) should go into table STG_ERR.
What is the best and easiest way to achieve this?
Thanks,
Carsten.

Hi,
thanks for your answers so far!
I gave them a thought and came up with two different alternatives:
Alternative 1
I load the data from the flat file into a staging table using sqlldr. I convert the data to the target format using sqlldr expressions.
The columns of the staging table have the target format (date, number).
The rows that cannot be loaded go into a bad file. I manually load the data from the bad file (without any conversion) into the error table.
Alternative 2
The columns of the staging table are all of type varchar2 regardless of the target format.
I define data rules for all columns that require a later conversion.
I load the data from the flat file into the staging table using external table or sqlldr without any data conversion.
The rows that cannot be loaded go automatically into the error table.
When I read the data from the staging table, I can safely convert it since it is already checked by the rules.
What I dislike in alternative 1 is that I manually have to create a second file and a second mapping (ok, I can automate this using OMB*Plus).
Further, I would prefer using expressions in the mapping for converting the data.
What I dislike in alternative 2 is that I have to create a data rule and a conversion expression and then keep the data rule and the conversion expression in sync (in case of changes of the file format).
I also would prefer to have the data in the staging table in the target format. Well, I might load it into a second staging table with columns having the target format. But that's another mapping and a lot of i/o.
As far as I know I need the data quality option for using data rules, is that true?
Is there another alternative without any of these drawbacks?
Otherwise I think I will go for alternative 1.
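The OK/error routing described in both alternatives can be sketched generically. This Python sketch keeps the STG_OK/STG_ERR names and the 0,1 -> N,Y rule from the post; the column names and sample rows are made up.

```python
from datetime import datetime

def convert(row):
    """Apply all conversions; any failure raises an exception."""
    return {
        "amount": float(row["amount"]),                         # to_number
        "booked": datetime.strptime(row["booked"], "%Y-%m-%d"), # to_date
        "flag": {"0": "N", "1": "Y"}[row["flag"]],              # custom 0/1 -> N/Y
    }

stg_ok, stg_err = [], []
for row in [{"amount": "12.5", "booked": "2008-01-31", "flag": "1"},
            {"amount": "x",    "booked": "2008-01-31", "flag": "0"}]:
    try:
        stg_ok.append(convert(row))        # all conversions succeeded
    except (ValueError, KeyError):
        stg_err.append(row)                # keep the raw row for correction

assert len(stg_ok) == 1 and len(stg_err) == 1
```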
Thanks,
Carsten. -
Export HRMS data to a flat file
Hi All!
Are there any ways of exporting employee-related data to a flat file without using a client app (PeopleCode or Integration Broker), that is, simply generating a CSV feed from the UI?

You can schedule a query and specify the output format as text. Note that when you select View Log/Trace in Process Monitor, you will see a file with a .csv extension. However, it will open by default in Excel, and even if you select Save instead of Open it will try to change the extension to .xls. You will have to change it back to .csv.
-
Hi all,
I need to export a table's data to a flat file.
But the problem is that the data is huge: about 200 million rows, occupying around 60 GB of space.
If I use SQL*Loader in Toad, it takes a huge amount of time to export.
After a few months, I need to import the same data back into the table.
So please help me: which is the most efficient and least time-consuming method to do this?
I am very new to this field.
Can someone help me with this?
My Oracle database version is 10.2.
Thanks in advance

OK, so first of all I would ask the following questions:
1. Why must you export the data and then re-import it a few months later?
2. Have you read through the documentation for SQLLDR thoroughly? I know it is like stereo instructions, but it has valuable information that will help you.
3. Does the table the data is being re-imported into have anything attached to it, e.g. triggers, indices or anything that the DB must do on each record? If so then re-read the sqlldr documentation, as you can turn all of that off during the import and re-index, etc. at your leisure.
I ask these questions because:
1. I would find a way for whatever happens to this data to be accomplished while it was in the DB.
2. Pumping data over the wire is going to be slow when you are talking about that kind of volume.
3. If you insist that the data must be dumped, massaged, and re-imported, do it on the DB server. Disk IO is an order of magnitude faster than over-the-wire transfer.