Importing Data enclosed in quotes
Hi all,
I'm trying to import a load of data into my application via the import utility. The data is in a comma-separated text file. Many of the data values are lines of text which can themselves contain commas, so each data value is enclosed in quotes to preserve those commas and keep the column ordering intact when the data is imported.
For example:
column1, column2, column3
"value1", "value2", "value3" -- row 1
"value4", "value5", "value6" -- row 2
If I import this data into my app, each data value is still enclosed by the quotes, which I don't want. So my question is: is there any way to strip out these quotes as the data is being imported, to leave just value1 etc. in my table rather than "value1"?
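I don't know what your import utility can do, but if you can pre-process the file first, any CSV-aware parser strips the enclosing quotes for you while preserving embedded commas. A minimal Python sketch (the sample values are from this post):

```python
import csv
import io

# A CSV-aware reader treats "..." as field delimiters, not as data,
# so the quotes are stripped and embedded commas are preserved.
# skipinitialspace handles the space after each comma.
data = '"value1", "value2", "value3"\n"value4", "value5", "value6"\n'
rows = list(csv.reader(io.StringIO(data), skipinitialspace=True))
print(rows)  # [['value1', 'value2', 'value3'], ['value4', 'value5', 'value6']]
```

The same idea applies to any language's CSV library; the point is to let a real parser handle the quoting rather than splitting on commas yourself.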
Dear user512101,

Did you ever find a solution to this problem? Anyone?

I am facing the exact same issue on a regular basis. So far, I have been "cheating" by importing data through the ODBC connection, using an MS Access client that is still limping along (not for long, says my DBA).

If I don't, a typical column would look something like this:

"Exceptional biological diversity, WPC ecoregion reference for 71i, Old Hickory Lock 5 Wildlife Refuge "
"Chickasaw State Forest, Wildlife Management Area and State Park"
"Federal endangered Birdwing Pearlymussel, Fine-rayed Pigtoe, Shiny Pigtoe, Cumberland Monkeyface, Cracking Pearlymussel, Boulder Darter, and state threatened Ashy Darter."
etc. etc.
Vojin
Similar Messages
-
When a text field in a .csv file enclosed in quotes contains a comma, the quotes are ignored and the field is treated as 2 fields by the import procedure,
i.e.: 'text1, text2' is imported as 2 separate fields, 'text1' and 'text2'. I am using ver. 1.5.1.
When using the import data utility in the GUI (selected when you left-click on a table name), if a quoted field contains a comma, the utility splits the field into multiple fields:
example:
Error starting at line 5 in command:
INSERT INTO DISTRICT (DIST_NUMBER, ADDRESS1, ADDRESS2, ADDRESS3, ADDRESS4, CITY, STATE, ZIP_CODE)
VALUES ('20', '''NASHVILLE #020''', '''3308 BRILEY PARK BLVD.', ' SOUTH''', '''NASHVILLE BUSINESS CENTER 1''', '''''', '''NASHVILLE''', '''TN''')
Error report:
SQL Error: ORA-12899: value too large for column "GECBBA6"."DISTRICT"."STATE" (actual: 11, maximum: 3)
12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
*Cause: An attempt was made to insert or update a column with a value
which is too wide for the width of the destination column.
The name of the column is given, along with the actual width
of the value, and the maximum allowed width of the column.
Note that widths are reported in characters if character length
semantics are in effect for the column, otherwise widths are
reported in bytes.
*Action: Examine the SQL statement for correctness. Check source
and destination column data types.
Either make the destination column wider, or use a subset
of the source column (i.e. use substring).
original input:
'20','NASHVILLE #020','3308 BRILEY PARK BLVD., SOUTH','NASHVILLE BUSINESS CENTER 1','','NASHVILLE','TN','37207'
as you can see, '3308 BRILEY PARK BLVD., SOUTH' is one field, yet the import utility views it as two fields: '3308 BRILEY PARK BLVD.' and 'SOUTH' -
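One workaround, if the utility cannot be told to respect the quotes, is to pre-process the file with a CSV-aware parser and re-emit it with a delimiter that never occurs in the data (the pipe character below is an assumption; check your data first). A Python sketch using the row from this post:

```python
import csv
import io

src = "'20','NASHVILLE #020','3308 BRILEY PARK BLVD., SOUTH','NASHVILLE BUSINESS CENTER 1','','NASHVILLE','TN','37207'\n"

# Parse with the single quote as the quote character, then rewrite
# pipe-delimited so a naive comma-splitting importer cannot break
# the quoted field at the embedded comma.
reader = csv.reader(io.StringIO(src), quotechar="'")
out = io.StringIO()
writer = csv.writer(out, delimiter="|", quotechar="'")
for row in reader:
    writer.writerow(row)
print(out.getvalue())
```

The rewritten row keeps '3308 BRILEY PARK BLVD., SOUTH' as a single field because the comma is no longer the separator.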
Hi All,
When I am importing data into Excel, leading zeros are truncated.
E.g.: I have a column orgid in a table. The data is like 00067509, 056429...
When importing into Excel, the leading zeros are truncated and the orgid data shows in Excel as 67509, 56429...
Please help me get the zeros to show in Excel.

You did not post info on how you create the data to be made available to Excel; it is important that you also describe how you create the data. I will assume that you create the text file on Oracle, and I'll try to give an example of a workaround that might work for you. We create a .csv file that can be opened in Excel, with each column enclosed in double quotes and separated by commas. For the column whose values had leading zeroes, we added an equals sign before enclosing them in double quotes (e.g. ="000123456"); this way, when the file is opened in Excel, it preserves the leading zeroes.
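A minimal sketch of that workaround in Python (the orgid values are taken from this thread; the helper name is made up):

```python
def excel_text(value):
    # Writing the value as ="000123456" makes Excel treat it as a
    # formula that evaluates to literal text, so leading zeros survive.
    return '="{}"'.format(value)

orgids = ["00067509", "056429"]
csv_lines = [excel_text(v) for v in orgids]
print("\n".join(csv_lines))
# ="00067509"
# ="056429"
```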
. -
Dynamic SQL and Data with Single Quotes in it.
Hi There,
I have a problem in that I am using dynamic SQL, and one of the columns contains single quotes (') as part of the data. This confuses the resultant dynamic SQL: the single quote that is part of the data is taken to mean end of string, when in fact it's part of the data. This leaves a dangling single quote that was meant to enclose the string. Here is my dynamic SQL and the result of the parsed SQL that I have captured:
****Dynamic SQL*****
l_sql:='select NOTE_TEMPLATE_ID '||
'FROM TMP_NOTE_TEMPLATE_VALUES '||
'where TRIM(LEGACY_NOTE_CODE)='''||trim(fp_note_code)||''' '||
'and TRIM(DISPLAY_VALUE)='''||trim(fp_note_text)||''' ';
execute immediate l_sql INTO l_note_template_id;
Because the column DISPLAY_VALUE contains data with single quotes, the resultant SQL is:
******PARSED SQL************
select NOTE_TEMPLATE_ID
FROM TMP_NOTE_TEMPLATE_VALUES
where TRIM(LEGACY_NOTE_CODE)='INQ' and TRIM(DISPLAY_VALUE)='Cont'd'
And the problem lies with the single quote between the characters t and d in the data field for DISPLAY_ITEM. How can I handle this?
Many thanks.

I have been reliably informed that if one doesn't enclose char/varchar2 data items in quotes, the right indices may not be used.

I have been into Oracle for the past 4 years and this is the first time I am hearing this.
Your reliable source is just wrong. Bind variables are variables that store your value and which are used in SQL. They are the proper way to use values in your SQL. By default, all variables in PL/SQL are bind variables.
When you can do some thing in just straight SQL just do it. Dynamic SQL does not make any sense to me here.
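To illustrate the escaping problem itself: inside a SQL string literal, a single quote is escaped by doubling it. A hedged Python sketch of a literal-building helper (the helper name is made up; bind variables remain the right fix):

```python
def sql_literal(value):
    # Inside a SQL string literal, a single quote is escaped
    # by doubling it: Cont'd -> 'Cont''d'
    return "'" + value.replace("'", "''") + "'"

where = "TRIM(DISPLAY_VALUE)=" + sql_literal("Cont'd")
print(where)  # TRIM(DISPLAY_VALUE)='Cont''d'
```

In the original PL/SQL this doubling is what the REPLACE(value, '''', '''''') idiom does, but binding the value instead avoids the issue entirely.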
Thanks,
Karthick. -
IMPORT DATA TO USER DEFINED TABLESPACE
I HAVE FACED A PROBLEM REGARDING IMPORTING DATA INTO A USER-DEFINED TABLESPACE. TO DO THIS, I FIRST CREATED A TABLESPACE, SAY USERS, ASSOCIATED WITH A DATAFILE WHOSE NAME AND PATH IS 'D:\ORACLE\ORADATA\ORCL\USERS01.DBF'; ITS SIZE IS 200M. IN THE SAME PATH THERE IS A TEMPORARY TABLESPACE ASSOCIATED
WITH A DATAFILE NAMED 'D:\ORACLE\ORADATA\ORCL\TEMP01.DBF'; ITS SIZE IS 80M.
THEN THE ASBESCO USER WAS CREATED IN THE USERS TABLESPACE WITH THIS SQL COMMAND:
CREATE USER ASBESCO IDENTIFIED BY ASBESCO
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
QUOTA UNLIMITED ON USERS
QUOTA UNLIMITED ON TEMP;
GRANT PRIVILEGES TO THE ASBESCO USER WITH THIS SQL:
GRANT CONNECT,RESOURCE,DBA TO ASBESCO;
MY EXPORT FILE NAME IS ASBESCO.DMP WHICH IS EXPORTED FROM ANOTHER MACHINE. I WANT TO IMPORT THIS INTO MY MACHINE'S SPECIFIED TABLESPACE. ITS SIZE IS APPROX 26 MB.
NOW I IMPORT THE ASBESCO.DMP FILE WITH THIS COMMAND:
IMP SYSTEM/<SYSTEM PASSWORD> FILE=ASBESCO.DMP FROMUSER=ASBESCO TOUSER=ASBESCO
AFTER A SUCCESSFUL IMPORT OPERATION, I SEE THAT ALL OBJECTS OF THE ASBESCO USER WERE CREATED IN THE SYSTEM TABLESPACE, NOT THE USERS TABLESPACE. WHY?
IT IS TO BE NOTED THAT I HAD NOT MENTIONED THE DEFAULT TABLESPACE 'USERS' WHEN I CREATED MY USER OBJECTS.

Please change to lower case letters.
If the user has the UNLIMITED TABLESPACE privilege,
revoke it.
If the user has privileges on a tablespace,
revoke them.
Create a new tablespace and assign it to the user as the default tablespace,
then import. -
Importing data through javascript
I am attempting to populate fields that are generated through a static pdf, but that all contain the same type of data.
All of this data is selectable and can be copied out of the source, so I figured that it would be possible to pull it through a javascript command.
Currently when you view the pdf file in notepad this appears:
(Notes: ) Tj
/F14 10.56 Tf
1 0 0 1 228 668.64 Tm
-0.107 Tc 0.052 Tw (Jim's Taxi quote 123) Tj
1 0 0 1 22.08 702.24 Tm
-0.122 Tc 0.067 Tw (Jim Pinto) Tj
1 0 0 1 22.08 690.24 Tm
-0.071 Tc 0.017 Tw (Jim's Taxi.) Tj
1 0 0 1 22.08 678.24 Tm
-0.077 Tc 0.022 Tw (123 main st) Tj
1 0 0 1 22.08 666.24 Tm
-0.085 Tc 0.031 Tw (Houston) Tj
1 0 0 1 22.08 654.24 Tm
-0.103 Tc 0.048 Tw (TX) Tj
/F15 12 Tf
1 0 0 1 411.36 154.56 Tm
-0.091 Tc 0 Tw
I already have a button attaching the file to the document. I would also like for that button to read the values in the notes section of the pdf. Every item I need ALWAYS appears after the Tw operator and occurs in parentheses, i.e. Tw (*), while the numbers change.
So the command needs to read the file, find the (Notes: ) section, copy the first value (Jim's Taxi quote 123) and enter it into a field, find the next value (Jim Pinto) and copy it into the name field, etc. In this way, by simply selecting the file, regardless of the pdf's origins, it would populate the predetermined data fields (i.e. the first field is ALWAYS the customer's company name).
Can anyone here provide a little insight on the way to go about doing this? I am getting a bit stuck.

Hi Gaurav,
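The extraction step described above can be sketched with a regular expression over the raw content stream. Note this is a toy: it ignores PDF stream compression and parenthesis escaping, so a real solution would need a PDF parsing library. Sample lines are from this post:

```python
import re

stream = """(Notes: ) Tj
/F14 10.56 Tf
1 0 0 1 228 668.64 Tm
-0.107 Tc 0.052 Tw (Jim's Taxi quote 123) Tj
1 0 0 1 22.08 702.24 Tm
-0.122 Tc 0.067 Tw (Jim Pinto) Tj"""

# Each text run in the content stream is written as "(text) Tj";
# capture everything between the parentheses.
values = re.findall(r"\(([^)]*)\)\s*Tj", stream)
print(values)  # ['Notes: ', "Jim's Taxi quote 123", 'Jim Pinto']
```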
Thanks for your reply. I know about the export button, but my requirement is to import data into another web-based application through HTTP POST.
Thanks,
Arvind -
Data with "," overflows to the next field when importing data with DTW
Dear all,
When I tried to import data with DTW, for data that includes ",", everything after the "," jumps into the next field after the import.
E.g. Column A(memo): bank, United state
Column B(project code): blank
Result in SBO: Memo field : bank
Project code: United state
I have tried 2 ways.
1. Save the file as csv, adding double quotes (") at the beginning and end of the data. However, it didn't work. It didn't work with single quotes either.
2. Save the file as txt, without amending the data at all.
This worked; however, in SBO a double quote is automatically added, and the value becomes "bank, United state".
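If DTW accepts a tab-delimited file, writing the source with a tab delimiter and no quoting sidesteps the embedded-comma problem entirely. A minimal Python sketch (the column values are from this thread; whether your DTW template accepts tabs is an assumption to verify):

```python
import csv
import io

rows = [["bank, United state", ""]]  # memo, project code

out = io.StringIO()
# Tab-delimited with no quoting: the embedded comma is plain data,
# so it cannot be mistaken for a field separator, and no quote
# characters end up stored in SBO.
writer = csv.writer(out, delimiter="\t", quoting=csv.QUOTE_NONE)
writer.writerows(rows)
print(out.getvalue())
```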
Please kindly advise how to handle such case.
Thank you.
Regards,
Julie

Hi,
Check this if it is useful :
Upload of BP Data using DTW
Rgds,
Jitin -
Essbase data error importing data back to cube.
Hi
We have some text measures in the cube. We loaded data using Smart View. Once the data was loaded, we exported the data from the cube, and when I import the data back into the same cube I get the following error message below.
Parallel data load enabled: [1] block prepare threads, [1] block write threads.
Member [Sample text Measure] is a text measure. Only text values can be loaded to the Smart List in a text measure. [1004] Records Completed
Unexpected Essbase error 1003073.
I have done some analysis on this error message but could not find any solution.
Please suggest where it is going wrong.
Thanks in advance
Kris
Edited by: user9363364 on Apr 19, 2010 5:19 PM
Edited by: user9363364 on Apr 20, 2010 3:38 PM
Edited by: user9363364 on Apr 21, 2010 1:55 PM

Hi,
I don't know if I'm directing you correctly, as I have only loaded data but never imported exported text data. However, from the 11.1.1.3 DBAG, the below is found on p. 193:
*"100-10" "New York" "Cust Index" #Txt:"Highly Satisfied"*
This suggests that, perhaps, we have to prefix *#Txt:* before the double-quoted text value.
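If that is the cause, a pre-processing pass over the export file could add the prefix before reloading. A hedged sketch (the function name is made up, and the exact export format is an assumption based on the DBAG line quoted above):

```python
def tag_text_value(member, dims, text):
    # Per the DBAG example, text-measure values are loaded as
    # #Txt:"..." rather than a bare quoted string.
    cells = " ".join('"{}"'.format(d) for d in dims)
    return '"{}" {} #Txt:"{}"'.format(member, cells, text)

line = tag_text_value("100-10", ["New York", "Cust Index"], "Highly Satisfied")
print(line)  # "100-10" "New York" "Cust Index" #Txt:"Highly Satisfied"
```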
Wanna give a try? -
How to update link and import data of relocated incx file into inca file?
Subject: how to update link and import data of relocated incx file into inca file?
The incx file was originally part of the inca file and it has been relocated.
-------------------

Hello All,

I am working on InDesign CS2 and InCopy CS2.
From InDesign I am creating an assignment file as well as InCopy files (.inca and .incx files created through exporting).
Now InDesign hardcodes the path of the incx files in the inca file. So if I put the incx files in a different folder, then after opening the inca file in InCopy I get an alert stating that "The document doesn't consist of any incopy story", and all the linked stories flag a red question mark icon.
So I tried to recreate and update the links.
Below is my code for that:

//code start*****************************
//creating kDataLinkHelperBoss
InterfacePtr<IDataLinkHelper> dataLinkHelper(static_cast<IDataLinkHelper*>
(CreateObject2<IDataLinkHelper>(kDataLinkHelperBoss)));

/**
The newFileToBeLinkedPath is the path of the incx file which is relocated.
And it was previously part of the inca file.
e.g. earlier it was c:\\test.incx, now it is d:\\test.incx
*/
IDFile newIDFileToBeLinked(newFileToBeLinkedPath);

//create the data link
IDataLink * dlk = dataLinkHelper->CreateDataLink(newIDFileToBeLinked);

NameInfo name;
PMString type;
uint32 fileType;

dlk->GetNameInfo(&name,&type,&fileType);

//relink the story
InterfacePtr<ICommand> relinkCmd(CmdUtils::CreateCommand(kRestoreLinkCmdBoss));

InterfacePtr<IRestoreLinkCmdData> relinkCmdData(relinkCmd, IID_IRESTORELINKCMDDATA);

relinkCmdData->Set(database, dataLinkUID, &name, &type, fileType, IDataLink::kLinkNormal);

ErrorCode err = CmdUtils::ProcessCommand(relinkCmd);

//Update the link now
InterfacePtr<IUpdateLink> updateLink(dataLinkHelper, UseDefaultIID());
UID newLinkUID;
err = updateLink->DoUpdateLink(dlk, &newLinkUID, kFullUI);
//code end*********************

I am able to create the proper link, but the data in the incx file is not getting imported into the linked story. However, if I modify the newly linked story from the inca file, the incx file does get updated (all its previous content is deleted).
I tried using Utils<IInCopyWorkflow>()->ImportStory(), but it imports the incx file in XML format.

What is the solution to this, then? Kindly help me, as I have been terribly stuck for the last few days.

Thanks and Regards,
Yopangjo
-
Object reference not set to an instance of an object error with Import data
Hi Experts,
We are using BPC 7.5M with SQL Server 2008 in a multi-server environment. I am getting the error "Object reference not set to an instance of an object." while running the Import data package. Earlier we used to get this error occasionally (once a month), but it would go away if we rebooted the application server. This time I have rebooted the application server multiple times, but I am still getting the same error.
Please Advice.
Thanks & Regards,
Rohit

Hi Rohit,
please see SAP note 1615837; maybe this helps you.
Best regards
Roberto Vidotti -
How to import data from a text file into a table
Hello,
I need help with importing data from a .csv file with comma delimiter into a table.
I've been struggling to figure out how to use the "Import from Files" wizard in Oracle 10g web-based Enterprise Manager.
I have not been able to find a simple instruction on how to use the Wizard.
I have looked at the Oracle Database Utilities - Overview of Oracle Data Pump and the Help on the "Import: Files" page.
Neither one gave me enough instruction to be able to do the import successfully.
Using the "Import from file" wizard, I created a directory object using the Create Directory Object button. I copied the file from which I needed to import the data into the operating-system directory I had defined in the Create Directory Object page. I chose "Entire files" for the import type.
Step 1 of 4 is the "Import: Re-Mapping" page; I have no idea what I need to do on this page. All I know is that I am not trying to import data that was in one schema into a different schema, I am not importing data that was in one tablespace into a different tablespace, and I am not re-mapping datafiles either. I am importing data from a csv file.
For step 2 of 4, the "Import: Options" page, I selected the same directory object I had created.
For step 3 of 4, I entered a job name and a description and selected the Start Immediately option.
What I noticed going through the wizard: it never asked into which table I want to import the data.
I submitted the job and I got an ORA-31619 invalid dump file error.
I was sure that the wizard was going to fail when it never asked me into which table I want to import the data.
I tried to use the "imp" utility in command-line window.
After I entered (imp), I was prompted for the username, the password, and then the buffer size; as soon as I entered the minimum buffer size, I got the following error and the import was terminated:
C:\>imp
Import: Release 10.1.0.2.0 - Production on Fri Jul 9 12:56:11 2004
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Username: user1
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Import file: EXPDAT.DMP > c:\securParms\securParms.csv
Enter insert buffer size (minimum is 8192) 30720> 8192
IMP-00037: Character set marker unknown
IMP-00000: Import terminated unsuccessfully
Please show me the easiest way to import a text file into a table. How complex could it be to do a simple import into a table using a text file?
We are testing our application against both an Oracle database and a MSSQLServer 2000 database.
I was able to import the data into a table in MSSQLServer database and I can say that anybody with no experience could easily do an export/import in MSSQLServer 2000.
I would appreciate it if someone could show me how to import from a file into a table!
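For what it's worth, imp reads only Oracle export dumps, not CSV files (hence the IMP-00037 error); SQL*Loader or external tables are Oracle's tools for CSV. As a language-level illustration of the general pattern (CSV rows fed to parameterized inserts), here is a Python sketch using sqlite3 purely as a stand-in database; the table and column names are made up, loosely inspired by securParms.csv:

```python
import csv
import io
import sqlite3

csv_data = "parm_name,parm_value\ntimeout,30\nretries,5\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE secur_parms (parm_name TEXT, parm_value TEXT)")

reader = csv.reader(io.StringIO(csv_data))
header = next(reader)  # skip the header row
# executemany binds each CSV row to the placeholders, so quoting
# and embedded commas are handled by the csv module, not by us.
conn.executemany("INSERT INTO secur_parms VALUES (?, ?)", reader)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM secur_parms").fetchone()[0]
print(count)  # 2
```

The same pattern works against Oracle with a DB-API driver, swapping the placeholder style as the driver requires.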
Thanks,
Mitra

>
I can say that anybody with
no experience could easily do an export/import in
MSSQLServer 2000.
Anybody with no experience should not mess up my Oracle Databases ! -
Performance issue with FDM when importing data
In the FDM Web console, a performance issue has been detected when importing data (.txt)
In less than 10 seconds, the ".txt" and ".log" files are created in the INBOX folder (the ".txt" file) and in OUTBOX\Logs (the ".log" file).
At that moment, the system shows the message "Processing, please wait" for 10 minutes. Eventually the information is displayed; however, if we want to see the second page, we have to wait more than 20 seconds.
It seems a performance issue when system tries to show the imported data in the web page.
It has also been noted that when a user tries to import a txt file directly by clicking on the "Select File From Inbox" tab, the user has to wait another 10 minutes before the information is displayed on the web page.
Thx in advance!
Cheers
Matteo

Hi Matteo,
How much data is being imported/displayed when users are interacting with the system?
There is a report that may help you to analyse this but unfortunately I cannot remember what it is called and don't have access to a system to check. I do remember that it breaks down the import process into stages showing how long it takes to process each mapping step and the overall time.
I suspect that what you are seeing is normal behaviour but that isn't to say that performance improvements are not possible.
The copying of files is the first part of the import process before FDM then starts the import so that will be quick. The processing is then the time taken to import the records, process the mapping and write to the tables. If users are clicking 'Select file from Inbox' then they are re-importing so it will take just as long as it would for you to import it, they are not just asking to retrieve previously imported data.
Hope this helps
Stuart -
Error when running import data package, BPC 7.5 Microsoft
When I try to import data into one of my applications, the import fails. This happens with external data, but also with data I extracted from the application itself (input via schedule and then exported). I get the error below:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TOTAL STEPS 2
1. Convert Data: completed in 0 sec.
2. Load and Process: Failed in 1 sec.
3. Import: completed in 1 sec.
[Selection]
FILE=\GRP_CONSOL\ZHYP_DATA\DataManager\DataFiles
2011_test_schedule.txt
TRANSFORMATION=\GRP_CONSOL\ZHYP_DATA\DATAMANAGER\TRANSFORMATIONFILES\SYSTEM FILES\IMPORT.XLS
CLEARDATA= Yes
RUNLOGIC= Yes
CHECKLCK= Yes
[Messages]
Convert Data
Success
Record Count : 12
Accept Count : 12
Reject Count : 0
Skip Count : 0
Incorrect syntax near '.'.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What can cause this error?
Marco

Hi Marco,
See note [https://service.sap.com/sap/support/notes/1328907].
This is issue is related to the Replace and Clear option and not having Work Status set up correctly.
When using the 'Replace & Clear' option with Import packages, the Work Status setting should not only be enabled; the 3 dimension types 'C' (i.e. Category), 'E' (i.e. Entity) and 'T' (i.e. Time) should also be set to 'Yes' or 'Owner'.
Thanks,
John -
Need to add information/data to the Quote Print output for a quote
I am trying to modify the print quote output generated after selecting a quote on the quote form. I need to know how to add information to this output.
I am not sure where the output is selected from. How do I modify the select to include information from other tables in Oracle? I am using the ASOPRINT.xsl routine. Any help would be greatly appreciated.
Charles Harding

Hi,
From the xsl file I could figure out that the data for the quote lines is generated from the line <xsl:call-template name="line.table">. I would like to have some conditions added to it so that I can have particular type of items displayed on the PDF output. I am unable to figure out where I can add these conditions. Could someone help me in this? Any help would be appreciated.
Thank You,
Sowjanya -
Beginner trying to import data from GL_interface to GL
Hello, I'm a beginner with Oracle GL and I'm not able to do an import from GL_Interface. I have put my data into GL_interface, but I think that something is wrong with it, because when I try to import the data
with Journals -> Import -> Run, Oracle tells me that GL_interface is empty!
I think that maybe my insert is not correct and that's why Oracle doesn't want to use it... can someone help me?
----> I have put the data that way:
insert into gl_interface (status,set_of_books_id,accounting_date, currency_code,date_created,
created_by, actual_flag, user_je_category_name, user_je_source_name, segment1,segment2,segment3,
entered_dr, entered_cr,transaction_date,reference1 )
values ('NEW', '1609', sysdate, 'FRF', sysdate,1008009, 'A', 'xx jab payroll', 'xx jab payroll', '01','002','4004',111.11,0,
sysdate,'xx_finance_jab_demo');
insert into gl_interface (status,set_of_books_id,accounting_date, currency_code,date_created,
created_by, actual_flag, user_je_category_name, user_je_source_name, segment1,segment2,segment3,
entered_dr, entered_cr,transaction_date,reference1 )
values ('NEW', '1609', sysdate, 'FRF', sysdate,1008009, 'A', 'xx jab payroll', 'xx jab payroll', '01','002','1005',0,111.11,
sysdate,'xx_finance_jab_demo');
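Incidentally, comparing these INSERT statements with the import log further down suggests a sanity check worth automating: the rows are inserted with set_of_books_id 1609 and currency FRF, while the log reports sob_id = 124 and currency EUR, and the import's generated WHERE clause filters on set_of_books_id = 124, which would make the rows invisible to Journal Import. A hedged Python sketch of such a check (values copied from this thread):

```python
# Values from the INSERT statements in this thread.
inserted = {"set_of_books_id": 1609, "currency_code": "FRF"}
# Values reported by the Journal Import log (gllsob output).
expected = {"set_of_books_id": 124, "currency_code": "EUR"}

# Flag every field where the interface row disagrees with what
# the import run is actually selecting for.
mismatches = {k: (inserted[k], expected[k])
              for k in expected if inserted[k] != expected[k]}
for field, (got, want) in sorted(mismatches.items()):
    print("{}: interface row has {!r}, import expects {!r}".format(field, got, want))
```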
------------> Oracle sends me this message:
General Ledger: Version : 11.5.0 - Development
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
GLLEZL module: Journal Import
Current system time is 14-MAR-2007 15:39:25
Running in Debug Mode
gllsob() 14-MAR-2007 15:39:25sob_id = 124
sob_name = Vision France
coa_id = 50569
num_segments = 6
delim = '.'
segments =
SEGMENT1
SEGMENT2
SEGMENT3
SEGMENT4
SEGMENT5
SEGMENT6
index segment is SEGMENT2
balancing segment is SEGMENT1
currency = EUR
sus_flag = Y
ic_flag = Y
latest_opened_encumbrance_year = 2006
pd_type = Month
<< gllsob() 14-MAR-2007 15:39:25
gllsys() 14-MAR-2007 15:39:25fnd_user_id = 1008009
fnd_user_name = JAB-DEVELOPPEUR
fnd_login_id = 2675718
con_request_id = 2918896
sus_on = 0
from_date =
to_date =
create_summary = 0
archive = 0
num_rec = 1000
num_flex = 2500
run_id = 55578
<< gllsys() 14-MAR-2007 15:39:25
SHRD0108: Retrieved 51 records from fnd_currencies
gllcsa() 14-MAR-2007 15:39:25<< gllcsa() 14-MAR-2007 15:39:25
gllcnt() 14-MAR-2007 15:39:25SHRD0118: Updated 1 record(s) in table: gl_interface_control
source name = xx jab payroll
group id = -1
LEZL0001: Found 1 sources to process.
glluch() 14-MAR-2007 15:39:25<< glluch() 14-MAR-2007 15:39:25
gl_import_hook_pkg.pre_module_hook() 14-MAR-2007 15:39:26<< gl_import_hook_pkg.pre_module_hook() 14-MAR-2007 15:39:26
glusbe() 14-MAR-2007 15:39:26<< glusbe() 14-MAR-2007 15:39:26
<< gllcnt() 14-MAR-2007 15:39:26
gllpst() 14-MAR-2007 15:39:26SHRD0108: Retrieved 110 records from gl_period_statuses
<< gllpst() 14-MAR-2007 15:39:27
glldat() 14-MAR-2007 15:39:27Successfully built decode fragment for period_name and period_year
gllbud() 14-MAR-2007 15:39:27SHRD0108: Retrieved 10 records from the budget tables
<< gllbud() 14-MAR-2007 15:39:27
gllenc() 14-MAR-2007 15:39:27SHRD0108: Retrieved 15 records from gl_encumbrance_types
<< gllenc() 14-MAR-2007 15:39:27
glldlc() 14-MAR-2007 15:39:27<< glldlc() 14-MAR-2007 15:39:27
gllcvr() 14-MAR-2007 15:39:27SHRD0108: Retrieved 6 records from gl_daily_conversion_types
<< gllcvr() 14-MAR-2007 15:39:27
gllfss() 14-MAR-2007 15:39:27LEZL0005: Successfully finished building dynamic SQL statement.
<< gllfss() 14-MAR-2007 15:39:27
gllcje() 14-MAR-2007 15:39:27main_stmt:
select int.rowid
decode(int.SEGMENT1|| int.SEGMENT2|| int.SEGMENT3|| int.SEGMENT4|| int.SEGMENT5|| int.SEGMENT6
, '', replace(ccid_cc.SEGMENT2,'.','
') || '.' || replace(ccid_cc.SEGMENT1,'.','
') || '.' || replace(ccid_cc.SEGMENT3,'.','
') || '.' || replace(ccid_cc.SEGMENT4,'.','
') || '.' || replace(ccid_cc.SEGMENT5,'.','
') || '.' || replace(ccid_cc.SEGMENT6,'.','
, replace(int.SEGMENT2,'.','
') || '.' || replace(int.SEGMENT1,'.','
') || '.' || replace(int.SEGMENT3,'.','
') || '.' || replace(int.SEGMENT4,'.','
') || '.' || replace(int.SEGMENT5,'.','
') || '.' || replace(int.SEGMENT6,'.','
') ) flexfield , nvl(flex_cc.code_combination_id,
nvl(int.code_combination_id, -4))
, decode(int.SEGMENT1|| int.SEGMENT2|| int.SEGMENT3|| int.SEGMENT4|| int.SEGMENT5|| int.SEGMENT6
, '', decode(ccid_cc.code_combination_id,
null, decode(int.code_combination_id, null, -4, -5),
decode(sign(nvl(ccid_cc.start_date_active, int.accounting_date-1)
- int.accounting_date),
1, -1,
decode(sign(nvl(ccid_cc.end_date_active, int.accounting_date +1)
- int.accounting_date),
-1, -1, 0)) +
decode(ccid_cc.enabled_flag,
'N', -10, 0) +
decode(ccid_cc.summary_flag, 'Y', -100,
decode(int.actual_flag,
'B', decode(ccid_cc.detail_budgeting_allowed_flag,
'N', -100, 0),
decode(ccid_cc.detail_posting_allowed_flag,
'N', -100, 0)))),
decode(flex_cc.code_combination_id,
null, -4,
decode(sign(nvl(flex_cc.start_date_active, int.accounting_date-1)
- int.accounting_date),
1, -1,
decode(sign(nvl(flex_cc.end_date_active, int.accounting_date +1)
- int.accounting_date),
-1, -1, 0)) +
decode(flex_cc.enabled_flag,
'N', -10, 0) +
decode(flex_cc.summary_flag, 'Y', -100,
decode(int.actual_flag,
'B', decode(flex_cc.detail_budgeting_allowed_flag,
'N', -100, 0),
decode(flex_cc.detail_posting_allowed_flag,
'N', -100, 0)))))
, int.user_je_category_name
, int.user_je_category_name
, 'UNKNOWN' period_name
, decode(actual_flag, 'B'
, decode(period_name, NULL, '-1' ,period_name), nvl(period_name, '0')) period_name2
, currency_code
, decode(actual_flag
, 'A', actual_flag
, 'B', decode(budget_version_id
, 1210, actual_flag
, 1211, actual_flag
, 1212, actual_flag
, 1331, actual_flag
, 1657, actual_flag
, 1658, actual_flag
, NULL, '1', '6')
, 'E', decode(encumbrance_type_id
, 1000, actual_flag
, 1001, actual_flag
, 1022, actual_flag
, 1023, actual_flag
, 1024, actual_flag
, 1048, actual_flag
, 1049, actual_flag
, 1050, actual_flag
, 1025, actual_flag
, 999, actual_flag
, 1045, actual_flag
, 1046, actual_flag
, 1047, actual_flag
, 1068, actual_flag
, 1088, actual_flag
, NULL, '3', '4'), '5') actual_flag
, '0' exception_rate
, decode(currency_code
, 'EUR', 1
, 'STAT', 1
, decode(actual_flag, 'E', -8, 'B', 1
, decode(user_currency_conversion_type
, 'User', decode(currency_conversion_rate, NULL, -1, currency_conversion_rate)
,'Corporate',decode(currency_conversion_date,NULL,-2,-6)
,'Spot',decode(currency_conversion_date,NULL,-2,-6)
,'Reporting',decode(currency_conversion_date,NULL,-2,-6)
,'HRUK',decode(currency_conversion_date,NULL,-2,-6)
,'DALY',decode(currency_conversion_date,NULL,-2,-6)
,'HLI',decode(currency_conversion_date,NULL,-2,-6)
, NULL, decode(currency_conversion_rate,NULL,
decode(decode(nvl(to_char(entered_dr),'X'),'X',1,2),decode(nvl(to_char(accounted_dr),'X'),'X',1,2),
decode(decode(nvl(to_char(entered_cr),'X'),'X',1,2),decode(nvl(to_char(accounted_cr),'X'),'X',1,2),-20,-3),-3),-9),-9))) currency_conversion_rate
, to_number(to_char(nvl(int.currency_conversion_date, int.accounting_date), 'J'))
, decode(int.actual_flag
, 'A', decode(int.currency_code
, 'EUR', 'User'
, 'STAT', 'User'
, nvl(int.user_currency_conversion_type, 'User'))
, 'B', 'User', 'E', 'User'
, nvl(int.user_currency_conversion_type, 'User')) user_currency_conversion_type
, ltrim(rtrim(substrb(rtrim(substrb(int.reference1, 1, 50)) || ' ' || int.user_je_source_name || ' 2918896: ' || int.actual_flag || ' ' || int.group_id, 1, 100)))
, rtrim(substrb(nvl(rtrim(int.reference2), 'Journal Import ' || int.user_je_source_name || ' 2918896:'), 1, 240))
, ltrim(rtrim(substrb(rtrim(rtrim(substrb(int.reference4, 1, 25)) || ' ' || int.user_je_category_name || ' ' || int.currency_code || decode(int.actual_flag, 'E', ' ' || int.encumbrance_type_id, 'B', ' ' || int.budget_version_id, '') || ' ' || int.user_currency_conversion_type || ' ' || decode(int.user_currency_conversion_type, NULL, '', 'User', to_char(int.currency_conversion_rate), to_char(int.currency_conversion_date))) || ' ' || substrb(int.reference8, 1, 15) || int.originating_bal_seg_value, 1, 100)))
, rtrim(nvl(rtrim(int.reference5), 'Journal Import 2918896:'))
, rtrim(substrb(nvl(rtrim(int.reference6), 'Journal Import Created'), 1, 80))
, rtrim(decode(upper(substrb(nvl(rtrim(int.reference7), 'N'), 1, 1)),'Y','Y', 'N'))
, decode(upper(substrb(int.reference7, 1, 1)), 'Y', decode(rtrim(reference8), NULL, '-1', rtrim(substrb(reference8, 1, 15))), NULL)
, rtrim(upper(substrb(int.reference9, 1, 1)))
, rtrim(nvl(rtrim(int.reference10), nvl(to_char(int.subledger_doc_sequence_value), 'Journal Import Created')))
, int.entered_dr
, int.entered_cr
, to_number(to_char(int.accounting_date,'J'))
, to_char(int.accounting_date, 'YYYY/MM/DD')
, int.user_je_source_name
, nvl(int.encumbrance_type_id, -1)
, nvl(int.budget_version_id, -1)
, NULL
, int.stat_amount
, decode(int.actual_flag
, 'E', decode(int.currency_code, 'STAT', '1', '0'), '0')
, decode(int.actual_flag
, 'A', decode(int.budget_version_id
, NULL, decode(int.encumbrance_type_id, NULL, '0', '1')
, decode(int.encumbrance_type_id, NULL, '2', '3'))
, 'B', decode(int.encumbrance_type_id
, NULL, '0', '4')
, 'E', decode(int.budget_version_id
, NULL, '0', '5'), '0')
, int.accounted_dr
, int.accounted_cr
, nvl(int.group_id, -1)
, nvl(int.average_journal_flag, 'N')
, int.originating_bal_seg_value
from GL_INTERFACE int,
gl_code_combinations flex_cc,
gl_code_combinations ccid_cc
where int.set_of_books_id = 124
and int.status != 'PROCESSED'
and (int.user_je_source_name,nvl(int.group_id,-1)) in (('xx jab payroll', -1))
and flex_cc.SEGMENT1(+) = int.SEGMENT1
and flex_cc.SEGMENT2(+) = int.SEGMENT2
and flex_cc.SEGMENT3(+) = int.SEGMENT3
and flex_cc.SEGMENT4(+) = int.SEGMENT4
and flex_cc.SEGMENT5(+) = int.SEGMENT5
and flex_cc.SEGMENT6(+) = int.SEGMENT6
and flex_cc.chart_of_accounts_id(+) = 50569
and flex_cc.template_id(+) is NULL
and ccid_cc.code_combination_id(+) = int.code_combination_id
and ccid_cc.chart_of_accounts_id(+) = 50569
and ccid_cc.template_id(+) is NULL
order by decode(int.SEGMENT1|| int.SEGMENT2|| int.SEGMENT3|| int.SEGMENT4|| int.SEGMENT5|| int.SEGMENT6
, rpad(ccid_cc.SEGMENT2,30) || '.' || rpad(ccid_cc.SEGMENT1,30) || '.' || rpad(ccid_cc.SEGMENT3,30) || '.' || rpad(ccid_cc.SEGMENT4,30) || '.' || rpad(ccid_cc.SEGMENT5,30) || '.' || rpad(ccid_cc.SEGMENT6,30)
, rpad(int.SEGMENT2,30) || '.' || rpad(int.SEGMENT1,30) || '.' || rpad(int.SEGMENT3,30) || '.' || rpad(int.SEGMENT4,30) || '.' || rpad(int.SEGMENT5,30) || '.' || rpad(int.SEGMENT6,30)
) , int.entered_dr, int.accounted_dr, int.entered_cr, int.accounted_cr, int.accounting_date
control->len_mainsql = 16402
length of main_stmt = 7428
upd_stmt.arr:
update GL_INTERFACE
set status = :status
, status_description = :description
, je_batch_id = :batch_id
, je_header_id = :header_id
, je_line_num = :line_num
, code_combination_id = decode(:ccid, '-1', code_combination_id, :ccid)
, accounted_dr = :acc_dr
, accounted_cr = :acc_cr
, descr_flex_error_message = :descr_description
, request_id = to_number(:req_id)
where rowid = :row_id
upd_stmt.len: 394
ins_stmt.arr:
insert into gl_je_lines
( je_header_id, je_line_num, last_update_date, creation_date, last_updated_by, created_by , set_of_books_id, code_combination_id ,period_name, effective_date , status , entered_dr , entered_cr , accounted_dr , accounted_cr , reference_1 , reference_2
, reference_3 , reference_4 , reference_5 , reference_6 , reference_7 , reference_8 , reference_9 , reference_10 , description
, stat_amount , attribute1 , attribute2 , attribute3 , attribute4 , attribute5 , attribute6 ,attribute7 , attribute8
, attribute9 , attribute10 , attribute11 , attribute12 , attribute13 , attribute14, attribute15, attribute16, attribute17
, attribute18 , attribute19 , attribute20 , context , context2 , context3 , invoice_amount , invoice_date , invoice_identifier
, tax_code , no1 , ussgl_transaction_code , gl_sl_link_id , gl_sl_link_table , subledger_doc_sequence_id , subledger_doc_sequence_value
, jgzz_recon_ref , ignore_rate_flag)
SELECT
:je_header_id , :je_line_num , sysdate , sysdate , 1008009 , 1008009 , 124 , :ccid , :period_name
, decode(substr(:account_date, 1, 1), '-', trunc(sysdate), to_date(:account_date, 'YYYY/MM/DD'))
, 'U' , :entered_dr , :entered_cr , :accounted_dr , :accounted_cr
, reference21, reference22, reference23, reference24, reference25, reference26, reference27, reference28, reference29
, reference30, :description, :stat_amt, '' , '', '', '', '', '', '', '', '' , '', '', '', '', '', '', '', '', '', '', ''
, '', '', '', '', '', '', '', '', '', gl_sl_link_id
, gl_sl_link_table
, subledger_doc_sequence_id
, subledger_doc_sequence_value
, jgzz_recon_ref
, null
FROM GL_INTERFACE
where rowid = :row_id
ins_stmt.len: 1818
<< glluch() 14-MAR-2007 15:39:27
LEZL0008: Found no interface records to process.
LEZL0009: Check SET_OF_BOOKS_ID, GROUP_ID, and USER_JE_SOURCE_NAME of interface records.
If no GROUP_ID is specified, then only data with no GROUP_ID will be retrieved. Note that most data
from the Oracle subledgers has a GROUP_ID, and will not be retrieved if no GROUP_ID is specified.
SHRD0119: Deleted 1 record(s) from gl_interface_control.
Start of log messages from FND_FILE
End of log messages from FND_FILE
Executing request completion options...
Finished executing request completion options.
No data was found in the GL_INTERFACE table.
Concurrent request completed
Current system time is 14-MAR-2007 15:39:27
---------------------------------------------------------------------------
As the error message says, you need to specify a GROUP_ID.
As per the documentation:
GROUP_ID: Enter a unique group number to distinguish import data within a
source. You can run Journal Import in parallel for the same source if you specify a
unique group number for each request.
For example, if you load both Payables and Receivables data, assign each set a different GROUP_ID so the two can be imported separately.
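A minimal sketch of how you might tag the interface rows (table and column names come from the log above; the source names and group numbers here are purely illustrative):

```sql
-- Tag each subledger's rows with its own GROUP_ID before running
-- Journal Import (group numbers 101/102 are illustrative).
UPDATE gl_interface
   SET group_id = 101
 WHERE user_je_source_name = 'Payables'
   AND group_id IS NULL;

UPDATE gl_interface
   SET group_id = 102
 WHERE user_je_source_name = 'Receivables'
   AND group_id IS NULL;

COMMIT;
```

Then submit a separate Journal Import request for each GROUP_ID. Remember the note from the log (LEZL0009): if you submit the request with no GROUP_ID, only rows whose GROUP_ID is NULL are picked up, which is why tagged subledger data appears to "disappear".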
HTH