Problem with SQL*Loader loading long description with carriage return

I'm trying to load new items into mtl_system_items_interface via a concurrent
program that runs SQL*Loader. The load is failing because SQL*Loader cannot find a
delimiter - I'm guessing it's having problems with the long_description.
Here's my ctl file:
LOAD
INFILE 'create_prober_items.csv'
INTO TABLE MTL_SYSTEM_ITEMS_INTERFACE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(PROCESS_FLAG "TRIM(:PROCESS_FLAG)",
SET_PROCESS_ID "TRIM(:SET_PROCESS_ID)",
TRANSACTION_TYPE "TRIM(:TRANSACTION_TYPE)",
ORGANIZATION_ID "TRIM(:ORGANIZATION_ID)",
TEMPLATE_ID "TRIM(:TEMPLATE_ID)",
SEGMENT1 "TRIM(:SEGMENT1)",
SEGMENT2 "TRIM(:SEGMENT2)",
DESCRIPTION "TRIM(:DESCRIPTION)",
LONG_DESCRIPTION "TRIM(:LONG_DESCRIPTION)")
Here's a sample record from the csv file:
1,1,CREATE,0,546,03,B00-100289,PROBEHEAD PH100 COMPLETE/ VACUUM/COAX ,"- Linear
X axis, Y,Z pivots
- Movement range: X: 8mm, Y: 6mm, Z: 25mm
- Probe tip pressure adjustable contact
- Vacuum adapter
- With shielded arm
- Incl. separate miniature female HF plug
The long_description has to appear as:
- something
- something
It can't appear as:
-something-something
Here are the errors:
Record 1: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column
LONG_DESCRIPTION.
Logical record ended - second enclosure character not present
Record 2: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column
ORGANIZATION_ID.
Column not found before end of logical record (use TRAILING NULLCOLS)
I've asked for help on the Metalink forum and was advised to add TRAILING NULLCOLS to the ctl, so the FIELDS line now looks like:
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
I don't think this was right because now I'm getting:
Record 1: Rejected - Error on table "INV"."MTL_SYSTEM_ITEMS_INTERFACE", column LONG_DESCRIPTION.
Logical record ended - second enclosure character not present
Thanks for any help that may be offered.
-Tracy

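The follow-up control file below joins the wrapped physical lines back into one logical record with CONTINUEIF, then re-inserts a line break in front of each hyphen: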
LOAD
INFILE 'create_prober_items.csv'
-- keep reading physical lines until one ends with the closing quote
CONTINUEIF LAST <> '"'
INTO TABLE MTL_SYSTEM_ITEMS_INTERFACE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
(PROCESS_FLAG "TRIM(:PROCESS_FLAG)",
SET_PROCESS_ID "TRIM(:SET_PROCESS_ID)",
TRANSACTION_TYPE "TRIM(:TRANSACTION_TYPE)",
ORGANIZATION_ID "TRIM(:ORGANIZATION_ID)",
TEMPLATE_ID "TRIM(:TEMPLATE_ID)",
SEGMENT1 "TRIM(:SEGMENT1)",
SEGMENT2 "TRIM(:SEGMENT2)",
DESCRIPTION "TRIM(:DESCRIPTION)",
-- the newlines are gone once the lines are joined, so put one back before every '-'
LONG_DESCRIPTION "REPLACE(TRIM(:LONG_DESCRIPTION), '-', CHR(10) || '-')")
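
An alternative sketch (not from the thread): if the extract can be regenerated with an explicit end-of-record marker, SQL*Loader's INFILE "str" option makes embedded newlines a non-issue, because records no longer end at a newline. This assumes each record is terminated by a '|' immediately before the line break:
LOAD
-- X'7C0A' = '|' followed by a linefeed, used as the record terminator
INFILE 'create_prober_items.csv' "str X'7C0A'"
INTO TABLE MTL_SYSTEM_ITEMS_INTERFACE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
(PROCESS_FLAG "TRIM(:PROCESS_FLAG)",
SET_PROCESS_ID "TRIM(:SET_PROCESS_ID)",
TRANSACTION_TYPE "TRIM(:TRANSACTION_TYPE)",
ORGANIZATION_ID "TRIM(:ORGANIZATION_ID)",
TEMPLATE_ID "TRIM(:TEMPLATE_ID)",
SEGMENT1 "TRIM(:SEGMENT1)",
SEGMENT2 "TRIM(:SEGMENT2)",
DESCRIPTION "TRIM(:DESCRIPTION)",
-- CHAR(4000): the default field buffer is 255 bytes, too short for a long description
LONG_DESCRIPTION CHAR(4000) "TRIM(:LONG_DESCRIPTION)")
With this layout the quoted field may contain real line breaks, and no CONTINUEIF or REPLACE gymnastics are needed.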

Similar Messages

  • Problem with "carriage Return" or "Line Feed" in a table

    Hello,
    I need help with the function ZEICHEN() - that is its German name; I believe it is CHAR() in English.
    In Pages version 4.0.1 (746) I created a table and used this function to make a carriage return in a cell.
    Here is an example: { ="Hello" & ZEICHEN(10) & "World" }, so the cell shows:
    | Hello |
    | World |
    But in Pages version 4.0.3 (766) ZEICHEN() no longer allows the number 10, or any number below 32 (I have already read the manual and understand this restriction).
    Does anybody have an idea how to produce a carriage return or line feed in a formula, in a cell, in a table, in Pages?
    Thanks
    Detlev Kormann

    kdetlev wrote:
    Your workaround will not be possible in my table, because there is no empty cell in the table, but I think it is a good help anyway.
    *In fact I forgot that you are using a table in Pages.*
    Remaining on my first idea, In Numbers we may achieve the same goal using an auxiliary table with a single cell containing the needed line break.
    If this table is named "LineBreak",
    the formula will be:
    ="Hello"&LineBreak :: $A$1&"World"
    *In Pages, here is my workaround:*
    Enter the cell
    type a§b
    don't type the character § but ctrl + return.
    The cell will contain
    a
    b
    with the arrow, move before the a
    type =B2&"
    delete the original a
    move to the right
    delete the original b
    type "&C7
    Yvan KOENIG (VALLAURIS, France) mercredi 7 octobre 2009 17:20:04

  • Problem using SQL Loader with ODI

    Hi,
    I am having problems using SQL*Loader with ODI. I am trying to fill an Oracle table with data from a txt file. At first I used the "File to SQL" LKM, but due to the size of the source txt file (700 MB), I decided to use the "File to Oracle (SQLLDR)" LKM.
    The error that appears in myFile.txt.log is: "SQL*Loader-101: Invalid argument for username/password"
    I think the problem could be in the definition of the data server (physical architecture in Topology), because I have left Host, user and password blank.
    Is this the problem? What host and user should I use? With "File to SQL" it works fine leaving these blank, but it takes too much time.
    Thanks in advance

    I tried to use your code, but I couldn't make it work (I don't know Jython). I think the problem could be with the use of quotes.
    Here is what I wrote:
    import os
    retVal = os.system(r'sqlldr control=E:\Public\TXTODI\PROFITA2/Profita2Final.txt.ctl log=E:\Public\TXTODI\PROFITA2/Profita2Final.txt.log userid=MYUSER/myPassword @ mySID')
    if retVal == 1 or retVal > 2:
        raise 'SQLLDR failed. Please check the for details '
    And the error message is:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 5, in ?
    SQLLDR failed. Please check the for details
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.h.y(h.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
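    A side note (not confirmed in the thread): SQL*Loader-101 usually means the userid argument itself is malformed, and the spaces around the '@' in "userid=MYUSER/myPassword @ mySID" make "@" and "mySID" arrive as separate arguments. Writing it as userid=MYUSER/myPassword@mySID, with no spaces, is the expected form.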

  • How to load data with carriage return through DRM action script ?

    Hello,
    We are using DRM to manage Essbase metadata. These metadata contain a field for member formulas.
    Currently it is a string data type property in DRM, so we can't use carriage returns and our formulas are really hard to read.
    But DRM supports other property data types: memo or formatted memo, where we can use carriage returns.
    Then, in the export file, we have changed the record delimiter to a character other than CRLF.
    Our issue: we regularly use action scripts to load new metadata => how can we load data properties containing carriage returns using an action script? There is no option to change the record delimiter.
    Thanks!

    Hello Sandeep,
    here is what I want to do through an action script: load a formula that spans more than one line.
    Here, I write my formula using 4 lines, but the action script cannot load it, since one line = one record:
    ChangeProp|Version_name|Hier_name|Node_name|Formula|@round(
    qty*price
    *m05*fy13

  • Schema with carriage return

    I get a batch file with details of individual jobs as the output schema. The XML file looks alright; however, when opened in Notepad it does not come with carriage returns included.
    I know that we can include a child delimiter if it's an input flat file converted to an XML schema, but I have no idea about output files.
    Is there any way we can include the carriage return line feed in the schema, so that the file is readable when opened in Notepad?

    Assuming you're talking about the output XML rather than the schema: if you open the XML in Visual Studio, press Ctrl+K, Ctrl+D and the XML will be formatted with indentation (hold the Ctrl key down while pressing first K and then D).
    If you want a simple command line utility to format XML for you, the following should do the trick (pass in the name of the XML file):
    using System.Text;
    using System.Xml;
    class XmlFormatter
    {
        static void Main(string[] args)
        {
            XmlDocument document = new XmlDocument();
            document.Load(args[0]);
            XmlTextWriter writer = new XmlTextWriter(args[0], Encoding.UTF8);
            writer.Formatting = Formatting.Indented;
            document.Save(writer);
            writer.Flush();
            writer.Close();
        }
    }
    NOTE: this key sequence also formats source code.
    David Downing... If this answers your question, please Mark as the Answer. If this post is helpful, please vote as helpful.

  • Receiver File channel for XML files: with carriage return

    Hi all,
    we are using a receiver FILE channel to generate an XML file that is sent to an external partner.
    The XML file looks good in a parser (Internet Explorer). But in fact there are no carriage returns / line feeds between the XML tags of the XML payload in the file.
    Our partner now requires the XML file in a more vertical structure, which means: a separate line for every tag (like it is displayed in a parser).
    Does anybody know a more general way to convert to a vertical XML structure (i.e. with carriage return line feed)?
    There is one entry in the SDN dealing with this topic, but it suggests using a UDF, which I think is a very specific approach.
    I don't think it is a good idea to change/enhance the message mapping just because of a general formatting change.
    Is it better to use an XSLT mapping as a second step in the interface mapping, or a Java adapter module to convert?
    Any experiences? Suggestions? Examples?
    Thank you very much
    best regards
    Hans
    examples:
    original by XI receiver FILE adapter
    <?xml version="1.0" encoding="UTF-8"?>
    <MT_batchStatus><type>BS</type><header><message><messageSender>SENDER</messageSender><messageDate>20090723143720</messageDate> ... and so on
    required:
    <?xml version="1.0" encoding="UTF-8"?>
    <MT_batchStatus>
    <type>BS</type>
    <header>
    <message>
    <messageSender>SENDER</messageSender>
    <messageDate>20090723143720</messageDate>
    ... and so on

    Hans Georg Walter wrote:
    > Is it better to use an XSLT mapping as a second step in the interface mapping or a JAVA adapter module to convert ?
    > any experiences? suggestions? examples?
    In such a case, the best is to write a generic XSLT or Java mapping that will attempt to do the pretty printing/formatting of the XML.
    The advantage of a generic one is that you can reuse the same class/jar for many other scenarios.
    So the flow in your interface mapping will be:
    1. your specific source-to-target mapping
    2. the generic formatting class

  • Problems using SQL*Loader with Oracle SQL Developer

    I have been using TOAD and able to import large files (millions of rows of data) in various formats into a table in an Oracle database. My company recently decided not to renew any more TOAD licenses and to go with Oracle SQL Developer. The Oracle database is on a corporate server and I access it via the Oracle client locally on my machine. Oracle SQL Developer and TOAD are local on my desktop and connect through TNSnames on the Windows XP platform. I have no issues using SQL*Loader via the import wizard in TOAD to import the data in these large files into an Oracle table and produce a log file. Loading the same files via SQL*Loader in SQL Developer freezes up my machine, and I cannot get it to produce a log file. Please help!

    I am using SQL Developer version 3.0.04. Yes, I have tried it with a smaller file, with no success. What is odd is that the log file is not even created. What is created is a .bat file, a control file and a .sh file, but no log file. The steps that I take:
    1. Right-click on the table I want to import to, or go to Actions
    2. Import Data
    3. Find file to import
    4. Data Preview - All fields entered according to file
    5. Import Method - SQL Loader utility
    6. Column Definitions - Mapped
    7. Options - Directory of files set
    8. Finish
    With the above steps I was not able to import 255 rows of data. No log file was produced so I don't know why it is failing.
    thanks.
    Edited by: user3261987 on Apr 16, 2012 1:23 PM

  • Problem using SQL-LOADER and Unique Identifiers

    I'm trying to load a fixed-length records file containing people names and phone numbers. Data is specified as follows
    Toni Tomas66666666669999999999
    Jose Luis 33333333330000000000
    Notice that a maximum of 2 numbers can follow a person name, and 0000000000 means "no number specified".
    I want to assign a unique identifier to people (instead of using the NAME field as a Primary Key) using an Oracle Sequence. I did that, but I don't know
    how to assign the same id to each number.
    Considering the 2 previous lines, desired result should be:
    PEOPLE
    ======
    1     Toni Tomas
    2     Jose Luis
    TEL_NUMBERS
    ===========
    1     6666666666
    1     9999999999
    2     3333333333
    In order to achieve that, my Control File looks like this
    LOAD DATA
    INFILE phonenumbers.txt
    INTO TABLE people
    (personID "mySequenceName.nextval", -- an Oracle sequence
     name POSITION(1:10) CHAR
    )
    INTO TABLE tel_numbers
    WHEN phonenumber != '0000000000'
    (personID !!!DON'T KNOW HOW TO REFERENCE THE SAME ID!!!!
     phonenumber POSITION(11:20) CHAR
    )
    INTO TABLE tel_numbers
    WHEN phonenumber != '0000000000'
    (personID !!!DON'T KNOW HOW TO REFERENCE THE SAME ID!!!!
     phonenumber POSITION(21:30) CHAR
    )
    I tried lots of things, but none of them works:
    a) referencing the ID using something like :"people.personID" (or similar approaches)
    b) using a BEFORE INSERT trigger getting the CURRVAL value of the sequence. This solution
    does not work because it seems that all the people are loaded before any telephone number. Hence,
    all phone numbers are associated, wrongly, with the last person in the data file.
    Does anyone know how can I solve this issue?
    Help would be appreciated. Thank you.

    Hi V Garcia.
    The information within the file is correct. Each line represents a COMPLETE record (part of the line is parent information and the rest is children data). As you can see in my first message, you can have more than one detail for a given master (i.e. two phone numbers):
    Toni Tomas66666666669999999999
    (10 chars for the name, 10 for each phone number; thus, 2 child records to be created)
    With the solution given by Sreekanth Reddy Bandi (use of CURRVAL within the SQL*Loader control file), not all the details get linked to the parent record in the DB tables. It seems SQL*Loader gets confused with such an amount of information.
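    A possible way around this (a sketch, not from the thread; people_stg is a hypothetical staging table, and the join below assumes names are unique within the file) is to load each line into a single staging table and split it with plain SQL afterwards, so the sequence is evaluated exactly once per person:
    LOAD DATA
    INFILE phonenumbers.txt
    INTO TABLE people_stg
    (name   POSITION(1:10)  CHAR,
     phone1 POSITION(11:20) CHAR,
     phone2 POSITION(21:30) CHAR)
    Then:
    -- one id per person, generated once
    INSERT INTO people (personID, name)
    SELECT mySequenceName.nextval, name FROM people_stg;
    -- unpivot the two phone columns and join back on the name
    INSERT INTO tel_numbers (personID, phonenumber)
    SELECT p.personID, s.phone
    FROM people p
    JOIN (SELECT name, phone1 AS phone FROM people_stg
          UNION ALL
          SELECT name, phone2 AS phone FROM people_stg) s
      ON s.name = p.name
    WHERE s.phone <> '0000000000';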

  • Problem in SQL Loader

    Hi Experts,
    I'm using SQL*Loader for loading data from an XML file into the DB; my control file was something like this:
    load data
    infile 'D:\data.xml' "str '</dataNode>'"
    replace
    into table MY_TABLE
    where dataNode is the record separator. This was working fine. What I'm trying to do now is write all the parameters to a PARFILE, then pass only the PARFILE to SQL*Loader, like the following:
    sqlldr PARFILE=myParaFile.par
    where myParaFile looks like that :
    userid=xxx/xxx
    control=xxx.ctl
    log=xxx.log
    data=D:\data.xml
    The problem now is that I have removed the INFILE clause from the control file and have put the "data" parameter instead in the PARFILE. The question now is: where shall I write "str '</dataNode>'" to tell SQL*Loader that the input data is in stream format and to use </dataNode> as the record separator?
    I really appreciate your help.

    My XML File:
    <dataNode>
    <ProductID>1</ProductID>
    <Type>Phone</Type>
    </dataNode>
    <dataNode>
    <ProductID>2</ProductID>
    <Type>Sim</Type>
    </dataNode>
    My Control File :
    load data
    infile 'D:\data.xml' "str '</dataNode>'"
    replace
    into table MY_TABLE
    (
    dummy filler terminated by "<dataNode>",
    ProductID enclosed by "<ProductID>" and "</ProductID>",
    Type enclosed by "<Type>" and "</Type>"
    )
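    For what it's worth (an observation, not confirmed in this thread): the "str" option is part of the INFILE clause's file-processing string, and there is no separate command-line or PARFILE parameter for it. So one approach is to keep the INFILE line, together with its "str '</dataNode>'" option, in the control file, and simply drop data= from the PARFILE, which then carries only:
    userid=xxx/xxx
    control=xxx.ctl
    log=xxx.log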

  • Problem specifying SQL Loader Log file destination using EM

    Good evening,
    I am following the example given in the 2 Day DBA document chapter 8 section 16.
    In step 5 of 7, EM does not allow me to specify the destination of the SQL Loader log file to be on a mapped network drive.
    The question: Does SQL Loader have a limitation that I am not aware of, that prevents placing the log file on a network share or am I getting this error because of something else I am inadvertently doing wrong ?
    Note: I have placed the DDL, load file data and steps I follow in EM at the bottom of this post to facilitate reproducing the problem *(drive Z is a mapped drive)*.
    Thank you for your help,
    John.
    DDL (generated using SQL developer, you may want to change the space allocated to be less)
    CREATE TABLE "NICK"."PURCHASE_ORDERS"
      (
        "PO_NUMBER"      NUMBER NOT NULL ENABLE,
        "PO_DESCRIPTION" VARCHAR2(200 BYTE),
        "PO_DATE" DATE NOT NULL ENABLE,
        "PO_VENDOR" NUMBER NOT NULL ENABLE,
        "PO_DATE_RECEIVED" DATE,
        PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
      )
      SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
      (
        INITIAL 67108864
      )
      TABLESPACE "USERS" ;
    Load.dat file contents
    1, Office Equipment, 25-MAY-2006, 1201, 13-JUN-2006
    2, Computer System, 18-JUN-2006, 1201, 27-JUN-2006
    3, Travel Expense, 26-JUN-2006, 1340, 11-JUL-2006
    Steps I am carrying out in EM
    log in, select data movement -> Load Data from User Files
    Automatically generate control file
    (enter host credentials that work on your machine)
    continue
    Step 1 of 7 ->
      Data file is located on your browser machine
      "Z:\Documentation\Oracle\2DayDBA\Scripts\Load.dat"
       click next
    step 2 of 7 ->
      Table Name
      nick.purchase_orders
      click next
    step 3 of 7 ->
      click next
    step 4 of 7 ->
      click next
    step 5 of 7 ->
      Generate log file where logging information is to be stored
      Z:\Documentation\Oracle\2DayDBA\Scripts\Load.LOG
      Validation Error
      Examine and correct the following errors, then retry the operation:
      LogFile - The directory does not exist.

    Hi John,
    But I didn't find any error when I did the same as you.
    My Oracle version is 10.2.0.1, on Windows XP. See what I did; it worked for me:
    1.I created one table in scott schema :
    SCOTT@orcl> CREATE TABLE "PURCHASE_ORDERS"
      2  (
      3      "PO_NUMBER"      NUMBER NOT NULL ENABLE,
      4      "PO_DESCRIPTION" VARCHAR2(200 BYTE),
      5      "PO_DATE" DATE NOT NULL ENABLE,
      6      "PO_VENDOR" NUMBER NOT NULL ENABLE,
      7      "PO_DATE_RECEIVED" DATE,
      8      PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
      9  )
    10  TABLESPACE "USERS";
    Table created.
    I logged into EM: Maintenance --> Data Movement --> Load Data from User Files --> My Host Credentials
    Here I got 3 text boxes in total:
    1.Server Data File : C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
    2.Data File is Located on Your Browser Machine : z:\load.dat <--- here z:\ is another machine's shared doc folder; I selected this option (as the option button) and created the same load.dat as you mentioned.
    3.Temporary File Location : C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\ <--- I didn't enter anything.
    Step 2 of 7 Table Name : scott.PURCHASE_ORDERS
    Step 3 of 7 I just clicked Next
    Step 4 of 7 I just clicked Next
    Step 5 of 7 I just clicked Next
    Step 6 of 7 I just clicked Next
    Step 7 of 7 Here it is Control File Contents:
    LOAD DATA
    APPEND
    INTO TABLE scott.PURCHASE_ORDERS
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (
    PO_NUMBER INTEGER EXTERNAL,
    PO_DESCRIPTION CHAR,
    PO_DATE DATE,
    PO_VENDOR INTEGER EXTERNAL,
    PO_DATE_RECEIVED DATE
    )
    And i just clicked on submit job.
    Now i got all 3 rows in purchase_orders :
    SCOTT@orcl> select count(*) from purchase_orders;
      COUNT(*)
    ----------
             3
    So, there is no bug; it worked. Please retry if you get any error/issue.
    HTH
    Girish Sharma

  • A problem about sql*load

    How do I load LOB data into the database?
    I have already looked at the Oracle8i SQL*Loader documentation; in general, the LOB fields in the samples are less than 4000 bytes.
    How do I load data larger than 4000 bytes into a LOB?
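    For the record, a minimal sketch of the usual approach (assuming a table DOCS with an ID column and a CLOB column DOC; the names are illustrative): keep only a file name in the data file and let SQL*Loader read each LOB from its own secondary file, which sidesteps the inline size limit.
    LOAD DATA
    INFILE 'docs.txt'
    INTO TABLE DOCS
    FIELDS TERMINATED BY ','
    (ID        CHAR,
     DOC_FNAME FILLER CHAR,
     -- the LOB content is read from the file named in DOC_FNAME
     DOC       LOBFILE(DOC_FNAME) TERMINATED BY EOF)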

    I already knew the solution to that problem, but I have met another one.
    I checked my SQL Server 7 database again and found that the large columns are of ntext datatype. I changed ntext to text and migrated the data again. In SQL*Plus, I found the data of the CLOB columns had changed to '????.?-?'. I know this is caused by NLS_LANG; my NLS_LANG=simplified chinese_china.zhs16gbk. How do I set my NLS_LANG to get the right data?

  • Peculiar problem in SQL*Loader

    Hi,
    I have a SQL*Loader control file which is used to load data into multiple tables.
    The format of the data file is something like:
    #10#......... <<header record>>
    #20#...... <<body record>>
    #20# <<body record>>
    #EOF# <<marks the end of the file>>
    The control file is as follows:
    LOAD DATA
    INFILE "C:\WINDOWS\system32\multi1.txt"
    APPEND
    INTO TABLE BROADCAST_HEADER
    WHEN (1:4)='#10#'
    (tag_number "broadcast_header.nextval",
    processing_flag constant 'N'
    ,A POSITION (5:24) Char
    ,B POSITION (26:35) Char
    ,C POSITION (37:46) Char
    ,D POSITION (48:62) INTEGER EXTERNAL
    ,E POSITION (64:78) Char
    )
    INTO TABLE BROADCAST_BODY
    WHEN (1:4)='#20#'
    (P "broadcast_body.nextval",
    Q "broadcast_header.currval",
    R POSITION (5:19) Char
    ,S POSITION (21:45) Char
    ,T POSITION (47:71) Char "decode(:T,null,'NULL',:T)"
    ,U POSITION (73:78) Char
    ,V POSITION (80:85) Char
    )
    INTO TABLE BROADCAST_DRIVER
    WHEN (1:5)='#EOF#'
    (RECORD_NUMBER "broadcast_header.currval")
    So while loading a file in the format mentioned above, all header and body records get inserted properly, but no data is loaded into the broadcast_driver table, giving the following error:
    Record 21: Discarded - all columns null.
    It gives this error on the line containing #EOF#.
    So please help me out.
    Regards,
    Sandeep Saxena
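
    A hedged note (not from the thread): SQL*Loader discards a record as "all columns null" when no column of an INTO TABLE clause is populated from the data file itself, and RECORD_NUMBER above is filled purely by an SQL expression. If that is the cause, a commonly suggested workaround is to map at least one field from the record, for example (EOF_TAG is a hypothetical spare column on BROADCAST_DRIVER):
    INTO TABLE BROADCAST_DRIVER
    WHEN (1:5)='#EOF#'
    (EOF_TAG POSITION(1:5) CHAR,
    RECORD_NUMBER "broadcast_header.currval")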


  • Problem with sdo_relate returning unexpected results

    I am having a problem with an oracle spatial query not returning what I feel is an appropriate result.
    I have a bounding box, and 6 points from a table should fall inside it. There are several hundred points in this table in total. I perform the following query, and Oracle returns only 4 points within the search area - well, 5, but we will get to that later.
    SQL> Select feature_id
    From city_points A
    where (MDSYS.SDO_RELATE(A.GEOM,
           mdsys.sdo_geometry(2003, 8307, NULL,
             mdsys.sdo_elem_info_array(1,1003,1),
             mdsys.sdo_ordinate_array(-101.8417,-52.8083, -23.8417,-52.8083, -23.8417,-13.8083, -101.8417,-13.8083, -101.8417,-52.8083)),
           'mask=ANYINTERACT querytype=join') = 'TRUE');
    I used a different application to perform the same type of query. The difference is that the application does the spatial query, not oracle. Further the application gets all of the points and performs this query on the client. What the application returns is correct both visually and spatially.
    Two of the points not returned in the oracle spatial query are 300km inside the bounding box. This far exceeds the .5 m tolerance used in our decimal degrees data set (SRID 8307).
    I have experienced this problem on 9.2.0.1. I then patched that instance to 9.2.0.7 and duplicated the problem. I then exported the data and imported into a 10g release 1 database and again duplicated the problem. I have tried to re-index the data to no avail.
    I have also tried different querytypes also yielding less than expected results.
    The data looks like this:
    243
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-58.45, -34.6, NULL),NULL, NULL)
    254
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-56.18333, -34.883334, NULL), NULL, NULL)
    377
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-70.666671, -33.449999, NULL), NULL, NULL)
    385
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-68.149999, -16.5, NULL), NULL, NULL)
    388
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-47.916667, -15.783333, NULL), NULL, NULL)
    427
    SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-57.640066, -25.270295, NULL), NULL, NULL)
    The number is the id field, the rest is the geometry.
    The Oracle Spatial query above only returns id #s 359, 377, 243, 254, 427.
    That is five returned records, and the extra value (359) is outside the bounding area, so it should not have been returned at all. It is all too strange.
    I have seen this with different geometry types (points lines and area) as well.
    If anyone has suggestions, I would appreciate your comments.
    Thanks,
    John

    John,
    What you are seeing is the behavior you should expect in geodetic space.
    When you have a very long line connecting from longitude -101 to -23, that line
    does not follow the same latitude value.
    Since these points are in the southern hemisphere, the line connecting them
    will curve downward (this is the great circle line).
    If you really want a line connecting with constant latitude, you should
    use the MBR type for the window geometry, which densifies the
    lines along constant latitude before passing it into relate:
    SDO_GEOMETRY(2003, 8307, NULL,
    sdo_elem_info_array(1,1003,3),
    sdo_ordinate_array(-101.8417,-52.8083, -23.8417,-13.8083))
    siva

  • Problems with a return

    I ordered a phone and added a line for my son on Dec 20. I had problems with my order online, and chatted with an associate who helped me. At one point I had to log out, log back in and re-start my order.
    Three days later I received two phones and checked my account which said I had added two lines. I called Verizon and was told I needed to take the extra phone to the store, and that the extra line would not be charged as long as I was within 14 days.
    Because of Christmas and travel, I could not take the phone back until today. I went to the store, where I was told I could not return it there, and to go home and print a label from home.
    I logged in and tried to print the label, but my pop up blocker prevented it. I tried to reprint it, and could not get the blocker off, so I called Customer Service.
    Customer Service told me to go BACK to the Verizon store and have them print the label for me.
    I am extremely frustrated. I am concerned that the phone will not get back to Verizon and I will be charged. I only want ONE phone and ONE line for my son.
    Has anyone else dealt with this???

    Well, that's odd. If you call Customer Service, they can send you a label via mail, but for something more immediate you may need to disable your pop-up blocker in your internet options. If you google it along with which browser you have, there may be a video tutorial. Once your blocker is disabled, go to http://www.verizonwireless.com/printlabel/ and sign into your My Verizon account; as long as you haven't already printed one off, you should be able to print it.

  • Problem with subqueries returning TIMESTAMP's

    I've got problems with some subqueries returning TIMESTAMP values.
    I've built a test case for easy reproduction, with this query:
    SELECT iil.iil_lot_num ,
         (SELECT max ( lot_data_validade )
         FROM lot
         WHERE lot.lot_mat_cod = iil.iil_mat_cod AND
               lot.lot_num = iil.iil_lot_num ) as validade
    FROM iil
    WHERE ( iil.iil_inv_serie = 109 ) and
          ( iil.iil_inv_num = 16 ) and
          ( iil.iil_mat_cod = 111 )
    When I run it, it gives the error:
    General error;-9999 POS(1) System error: invalid_indexorder
    knldiag also gives:
    2009-07-28 15:03:18  9943 ERR 51080 SYSERROR -9999 invalid_indexorder
    I'm running on 7.6.06.03.
    Previous versions were giving another message:
    SQLCODE:  -9999      System error: Otherwise unknown errorcode
    If I change max(lot_data_validade) to max(year(lot_data_validade)), it works. The problem is specifically with fetching that timestamp value from the subquery.
    Backup for reproduction of the problem is on:
    http://www.tecnova.com.br/maxdb/backup_problem
    (I've isolated only the tables involved in the problem, so the backup is tiny -> 4.3 MB)

    Hi Elke,
    I had the same idea at first, but was largely disappointed to find that this 'corrupt index' is in fact a temporary result set:
    --> max() query
    OWNER         TABLENAME           COLUMN_OR_INDEX     STRATEGY                                  PAGECOUNT
    DBA           IIL                                     RANGE CONDITION FOR KEY                           61
                                      IIL_INV_SERIE            (USED KEY COLUMN)                             
                                      IIL_INV_NUM              (USED KEY COLUMN)                             
                                      IIL_MAT_COD              (USED KEY COLUMN)                             
    INTERNAL      TEMPORARY RESULT                        EQUAL CONDITION FOR KEY                            1
                                      IIL_INV_SERIE            (USED KEY COLUMN)                             
    DBA           LOT                                     NO STRATEGY NOW (ONLY AT EXECUTION TIME)           
    INTERNAL      TEMPORARY RESULT                        TABLE SCAN                                         1
                                                               RESULT IS COPIED   , COSTVALUE IS       > 2 E10
    The cost expectation is really awesome here...
    --> max(year()) query
    OWNER     TABLENAME         COLUMN_OR_INDEX  STRATEGY                                  PAGECOUNT
    DBA       IIL                                RANGE CONDITION FOR KEY                           61
                                IIL_INV_SERIE         (USED KEY COLUMN)                   
                                IIL_INV_NUM           (USED KEY COLUMN)                   
                                IIL_MAT_COD           (USED KEY COLUMN)                   
    INTERNAL  TEMPORARY RESULT                   EQUAL CONDITION FOR KEY                            1
                                IIL_INV_SERIE         (USED KEY COLUMN)                   
    DBA       LOT                                NO STRATEGY NOW (ONLY AT EXECUTION TIME) 
                                                      RESULT IS COPIED   , COSTVALUE IS   
    This time the optimizer ran out of words for the expected effort of this query execution...
    The problem really seems to be the subquery select, as we can strip the example down to
    select (select max(lot_data_validade) maxval
    from lot
    where lot.lot_mat_cod=111) as maxval
    from dual
    and still see the error...
    regards,
    Lars
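
    Until the underlying bug is fixed, a workaround sketch (not from the thread; it assumes the result should stay one row per iil_lot_num under the same filters): rewrite the scalar subquery as an outer join with GROUP BY, so the TIMESTAMP is no longer fetched through the subquery path:
    SELECT iil.iil_lot_num,
           max(lot.lot_data_validade) AS validade
    FROM iil
    LEFT OUTER JOIN lot
      ON lot.lot_mat_cod = iil.iil_mat_cod AND
         lot.lot_num = iil.iil_lot_num
    WHERE ( iil.iil_inv_serie = 109 ) and
          ( iil.iil_inv_num = 16 ) and
          ( iil.iil_mat_cod = 111 )
    GROUP BY iil.iil_lot_num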
