Unable to Query Table Created Via Load/Unload Utility

I just created a table by doing a data load of a tab-delimited file via the Load/Unload Utility. The table was created successfully, and I can view it in the Object Browser, but I cannot run SQL queries against it, either via SQL*Plus or via the SQL Commands page. What's going on?

I got no error message at all from Firefox when I did the data load, so it took me
a while to suspect that browser compatibility was at fault. Moreover, you said in your first post in this thread that you saw the table in the Object Browser!
Sounds like an APEX bug to me...
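One thing worth checking before settling on a bug (an assumption on my part, not something established in this thread): the load utility may have created the table with a quoted, mixed-case name, in which case an unquoted query fails even though the Object Browser shows the table. A quick check from SQL*Plus, with MYTABLE / "MyTable" as placeholders for your table name:

    -- Does the table exist, and is its stored name all uppercase?
    SELECT table_name FROM user_tables WHERE UPPER(table_name) = 'MYTABLE';
    -- If the stored name is mixed case, it must be quoted exactly:
    SELECT COUNT(*) FROM "MyTable";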

Similar Messages

  • Unable to Query Table

    Hi Folks,
    I'm stuck in a weird situation. A user complains that fetching data from table A is timing out (as per application standards). I decided to collect fresh statistics (dbms_stats) on the index used by the query, which has been running for more than 12 hours; ideally it should not. It is a partitioned table with close to 10 million records.
    1) I checked dba_waiters - no rows
    2) I checked v$locked_object/v$lock - no rows
    3) I'm unable to run an explain plan for any criteria on that table
    4) I'm unable to open the table's data tab in Toad / SQL Developer.
    5) I'm unable to query a single partition; no error, but it keeps on running.
    SQL> SELECT * FROM SSAM_TRANS_FACT PARTITION (P20101010S01);

    Again, anything in alert.log? Any trace files? What happens if you issue:
    SQL> select 1 from ssam_trans_fact where rownum = 1;
    What happens when you issue:
    SQL> select 1 from ssam_trans_fact partition(P20101010S01) where rownum = 1;
    SY.
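    If those also hang, here is a sketch of what I would check next (standard dynamic performance views; my suggestion, not part of the original reply). :hung_sid is a placeholder for the stuck session's SID:
    -- What is the session waiting on right now?
    SELECT sid, event, wait_class, seconds_in_wait
    FROM v$session
    WHERE sid = :hung_sid;
    -- Any long operation (full scan, sort, ...) still in progress?
    SELECT sid, opname, target, sofar, totalwork, time_remaining
    FROM v$session_longops
    WHERE sofar < totalwork;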

  • Tables created via SQL...

    Is there a way to show information stored in a table created via SQL in the SBO Print Forms?

    Hello!
    yes, this will work as such, but it might not give you the correct values, as there is no link between the table and the layout! Usually it simply pulls the first value in the table.
    Or you might be lucky that it works now, but not after an upgrade.
    Be careful when adding fields with "ALT": if you encounter problems at any stage, you will not receive any support!
    When adding fields via "ALT", check whether you can create manual links between the table and the layout via "next segment" and "link to".
    Kind regards

  • FRM-40505: Unable to query (table does not exist)

    Hey,
    I have a form with two blocks, a master and a detail block. On the form, three fields allow querying. If a record is found in the master, it works fine. If nothing is found, it shows "query caused no records", which is also fine. But afterwards, no matter what I type in to query, it always shows FRM-40505: Oracle error, unable to perform query. When I click "display error", it shows:
    SELECT ROWID,LOC,INV_NUMBER,RECNO,FACTOR,C_DESC,ADJ_TYPE,CODE,AMT FROM OE_INVOICE_ADJ
    WHERE (LOC=:1) and (INV_NUMBER=:2) order by recno
    FRM-09402: Table or view does not exist.
    Here oe_invoice_adj is the detail table, and it does exist. If I stop the form and run it again, it's fine. It only happens after "query caused no records".
    Any ideas? thanks a lot in advance.

    Jay, have you created public synonyms for the tables, so that the users don't have to refer to them as scott.xtable...,
    and have the users been given the SELECT privilege on those tables?
    mb
    I actually created the tables and views for my application by logging in as scott/tiger. Now the users I have created with due privileges can log in, but they are unable to perform any query or DML. The error message is FRM-40505 'UNABLE TO PERFORM QUERY', which says the table or view does not exist. How do I fix this problem?
    Please get me a solution
    Thanks
    Jay
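    For the synonym-and-grant suggestion above, a minimal sketch of what that looks like when connected as the owning schema (the table name is from this thread; the grantee APPUSER is a placeholder, and CREATE PUBLIC SYNONYM itself requires that system privilege):
    -- Run as SCOTT, the schema that owns the objects
    GRANT SELECT ON oe_invoice_adj TO appuser;
    CREATE PUBLIC SYNONYM oe_invoice_adj FOR scott.oe_invoice_adj;
    With both in place, APPUSER can run SELECT * FROM oe_invoice_adj without the scott. prefix.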

  • Errors with querying a query table using non-QoQ query and QoQ query

    HELP ! ! !  Going into testing soon. I need this to work to get correct report results ! ! ! !
    My issue seems similar to the one just answered about Query of Queries RunTime Error
    and the reply from lawhite01 caught my eye.
    This is a 2-parter. The second part is the QoQ part, but the first part has a line in its query that is similar to the QoQ one, and it uses the same data. Part 1 also throws an error.
    PART # 1.
    I'm trying to use a query table created through QueryNew and then query it.
    I need multiple columns in the query table I create:
    <cfscript>
            tot_AllCurrentDraftListing = QueryNew("AnnounceNum, JP_PDLoc, JP_JS_Title, JP_JS, JP_KW_1, JP_JobTitle, JP_Open, JP_Close, JP_CloseType, JP_CloseName, JP_PosNeed, JP_DirectHire, JP_Desc, JP_Draft, JP_Archived, JP_State, JP_AreaName, JP_AreaID, JP_AreaAlias, JP_Fac_SU, JP_Fac_Facility, JP_FAC_ID, JP_Grade1, JP_sal_low1, JP_sal_high1, JP_Grade2, JP_sal_low2, JP_sal_high2, JP_Grade3, JP_sal_low3, JP_sal_high3, JP_Grade4, JP_sal_low4, JP_sal_high4, JP_Grade5, JP_sal_low5, JP_sal_high5, JP_Posted, JP_TypeHire, JP_HRemail");
        </cfscript>
    Then I populate all the cells of the query table.
    Then I set up to use the created query table.
    I do this first:
        <cfquery name="qAltPostID" datasource="#at_datasource#">
             SELECT AltPoster, fk_Job_AnnounceNum
             from JOB_JPContacts
             Where AltJPContactType = 'AltPosterID'
             and AltPoster = '#session.IHSUID#'
             </cfquery>
    Then, in my first query using the created query, I expect to need to choose from multiple values, so I'm using this line in the query (this is NOT a QoQ query):
                and AnnounceNum IN (<cfqueryparam cfsqltype="CF_SQL_varchar" value="#ValueList(qAltPostID.fk_Job_AnnounceNum)#">)
    I've also tried:
                   and AnnounceNum IN (#ValueList(qAltPostID.fk_Job_AnnounceNum)#)   
    and:
                   and JOB_AnnounceNum IN
                    (SELECT fk_Job_AnnounceNum
                     from JOB_JPContacts
                     Where AltJPContactType = 'AltPosterID'
                     and AltPoster = '#session.IHSUID#')
    The ERROR is: one record should be returned, but I get 0.
    PART # 2: Here's the QoQ part.
    I get the error:
    Query Of Queries runtime error.
    Comparison exception while executing IN.
    Unsupported Type Comparison Exception: The IN operator does not support comparison between the following types:
    Left hand side expression type = "LONG".
    Right hand side expression type = "STRING".
    A tutorial I found gave an example using only one column for this part of the fix:
         tot_AllCurrentDraftListing = QueryNew("AnnounceNum", "CF_SQL_VARCHAR")
    How would I set up the query with the datatype when I'm using multiple columns:
    <cfscript>
            tot_AllCurrentDraftListing = QueryNew("AnnounceNum, JP_PDLoc, JP_JS_Title, JP_JS, JP_KW_1, JP_JobTitle, JP_Open, JP_Close, JP_CloseType, JP_CloseName, JP_PosNeed, JP_DirectHire, JP_Desc, JP_Draft, JP_Archived, JP_State, JP_AreaName, JP_AreaID, JP_AreaAlias, JP_Fac_SU, JP_Fac_Facility, JP_FAC_ID, JP_Grade1, JP_sal_low1, JP_sal_high1, JP_Grade2, JP_sal_low2, JP_sal_high2, JP_Grade3, JP_sal_low3, JP_sal_high3, JP_Grade4, JP_sal_low4, JP_sal_high4, JP_Grade5, JP_sal_low5, JP_sal_high5, JP_Posted, JP_TypeHire, JP_HRemail");
        </cfscript>
    I used this code after all the cells contained values and before running my QoQ query:
            <cfloop index="intID" from="1" to="#tot_AllCurrentDraftListing.recordcount#" step="1">
                <cfset tot_AllCurrentDraftListing["AnnounceNum"] [intID] = JavaCast("string", intID) />
            </cfloop>
              Is that correct?
    Thanks.
    Whoever can help me with this should be awarded extra points ! ! ! !

                and AnnounceNum IN (<cfqueryparam cfsqltype="CF_SQL_varchar" value="#ValueList(qAltPostID.fk_Job_AnnounceNum)#">)
    If you're passing a list as a param, you need to tell <cfqueryparam> it's a list (set its list attribute).  Read:
    http://livedocs.adobe.com/coldfusion/8/htmldocs/Tags_p-q_18.html#1102474
    The ERROR is: one record should be returned, but I get 0.
    It's a bit hard to comment on this sort of thing without knowing the data involved.
    A tutorial I found gave an example using only one column for this part of the fix:     tot_AllCurrentDraftListing = QueryNew("AnnounceNum", "CF_SQL_VARCHAR")
    How would I set up the query with the datatype when I'm using multiple columns:
    Again, this is a matter of reading the relevant docs:
    http://livedocs.adobe.com/coldfusion/8/htmldocs/functions_m-r_19.html#292759
    As a general rule, if you're having trouble with the syntax of a CFML statement, look it up in the docs.
    Adam

  • Query on Creating and Populating the I$ Table under Different Conditions

    Hi,
    I have a query on creating and populating the I$ table under different conditions. Under which of the conditions below is the I$ table created?
    1) source and staging area are on the same server (i.e. the target is on another server)
    2) staging area and target are on the same server (i.e. the source is on another server)
    3) source, staging area and target are on 3 different servers
    4) source, staging area and target are on the same server
    Thanks

    I am not entirely clear about your question, but I'll try my best to clear it up.
    In all of the above cases the I$ table will be created.
    If staging is the same as the target (one database, one user), then all temp tables will be created under that user.
    If staging is different from the target (one database, two users A and B), then all temp tables will be created under user A (say) and the data will be inserted into the target table that is present in user B.
    If staging is different from the target (two databases, two users A1 and A2 - not a recommended architecture), then all temp tables will be created under user A1 (database A1) and the data will be inserted into the target table that is present in user A2 (database A2).
    If source, staging and target are all in one database, then no LKM is required; an IKM is sufficient to load the data into the target. For this specific case, see the example given by Craig:
    http://s3.amazonaws.com/Ora/ODI-Simple_SELECT_and_INSERT-interface.swf
    Thanks.

  • Creating SQL*Loader scripts for more than one table at a time

    Hi,
    I am using OMWB 2.0.2.0.0 with Oracle 8.1.7 and Sybase 11.9.
    It looks like I can create SQL*Loader scripts for all the tables
    or for one table at a time. If I want to create SQL*Loader
    scripts for 5-6 tables, I have to either create scripts for all
    the tables and then delete the unwanted ones, or create the
    scripts one table at a time and then merge them.
    Is there a simple way to create migration scripts for more than
    one, but not all, tables at a time?
    Thanks,
    Prashant Rane

    No, there is no multi-select for creating SQL*Loader scripts.
    You can either create them separately, or create them all and
    then discard the ones you do not need.

  • How to create an OLAP cube using a Named Query Table in a Data Source View

    I created an OLAP cube using existing tables and it works fine, but when I use a named query table with a relationship to another named query table, it does not work. Please give me some deeper clarification on OLAP cubes for a better understanding.
    Thanks

    Hi Pawan,
    What do you mean by "it's not working"? As Kamath said, please post the detailed error message so that we can analyze it further.
    In the Data Source View of a cube, we can define a named query. In a named query, you can specify an SQL expression to select rows and columns returned from one or more tables in one or more data sources. A named query is like any other table in a data source view (DSV), with rows and relationships, except that the named query is based on an expression.
    Reference: Define Named Queries in a Data Source View (Analysis Services)
    Regards,
    Charlie Liao
    TechNet Community Support
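    For what it's worth, the expression behind a named query is just a SELECT; a minimal sketch (FactSales and DimDate are made-up source tables, not from this thread):
    -- Definition of a hypothetical named query in the DSV
    SELECT f.sales_id,
           f.amount,
           d.calendar_year
    FROM FactSales f
    JOIN DimDate d ON d.date_key = f.date_key;
    The DSV then treats the result set like a table, so relationships to other named queries have to be drawn on its columns just as for physical tables.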

  • Query to create a view using the following tables

    Hi,
    I am struggling to write the proper query to create a view using three tables.
    I would be really thankful if anyone can help me.
    I am pasting the following script to create the required tables and insert appropriate data into each of them.
    create table HR.STUDENT_DETAILS (
    STUDENT_ID NUMBER(10),
    STUDENT_NAME VARCHAR2(50),
    DOB DATE,
    SEX CHAR(1),
    ACTIVE CHAR(1),
    CONTACT_NUMBER NUMBER(20),
    primary key(STUDENT_ID)
    );
    create table HR.PAYMENT_HEADER (
    PAY_HEADER_ID NUMBER(10),
    DOC_NUM NUMBER(5),
    MONTH_NAME VARCHAR2(10),
    primary key(PAY_HEADER_ID)
    );
    create table HR.PAYMENT_DETAILS (
    HEADER_ID NUMBER(10),
    LINE_ID NUMBER(10),
    STUDENT_ID NUMBER(10),
    CUM_AMOUNT NUMBER(10),
    primary key(HEADER_ID,LINE_ID)
    );
    INSERT ALL
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1112,'James','17-JUN-05','M','Y',23674378)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1113,'David','21-SEP-05','M','Y',24565457)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1114,'Michael','13-JAN-06','M','Y',25436784)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1115,'Joseph','03-JAN-06','M','Y',23435673)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1116,'Juliet','21-MAY-05','F','Y',23234527)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1117,'Monica','25-JUN-05','F','Y',24873245)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1118,'William','05-FEB-05','M','Y',23623245)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1119,'Karen','07-FEB-06','F','Y',26757543)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1120,'Erika','17-AUG-05','F','Y',25435465)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1121,'Noah','16-AUG-05','M','Y',23457645)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1122,'Angelina','28-SEP-05','F','Y',26456787)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1123,'Gabriela','30-SEP-05','F','Y',29767543)
    INTO HR.STUDENT_DETAILS (STUDENT_ID,STUDENT_NAME,DOB,SEX,ACTIVE,CONTACT_NUMBER) VALUES (1124,'Sofia','07-MAR-06','F','Y',27656578)
    SELECT * FROM DUAL;
    INSERT ALL
    INTO HR.PAYMENT_HEADER (PAY_HEADER_ID,DOC_NUM,MONTH_NAME) VALUES (305,1,'JAN')
    INTO HR.PAYMENT_HEADER (PAY_HEADER_ID,DOC_NUM,MONTH_NAME) VALUES (306,2,'FEB')
    INTO HR.PAYMENT_HEADER (PAY_HEADER_ID,DOC_NUM,MONTH_NAME) VALUES (307,3,'MAR')
    INTO HR.PAYMENT_HEADER (PAY_HEADER_ID,DOC_NUM,MONTH_NAME) VALUES (308,4,'APR')
    INTO HR.PAYMENT_HEADER (PAY_HEADER_ID,DOC_NUM,MONTH_NAME) VALUES (309,5,'MAY')
    SELECT * FROM DUAL;
    INSERT ALL
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (305,12,1112,40)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (305,13,1113,40)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (305,14,1114,40)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (306,15,1112,80)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (306,16,1113,80)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (306,17,1114,80)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (306,18,1115,40)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (306,19,1116,40)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (307,20,1112,120)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (307,21,1113,120)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (307,22,1114,120)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (307,23,1115,80)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (307,24,1116,80)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (307,25,1117,40)
    INTO HR.PAYMENT_DETAILS (HEADER_ID,LINE_ID,STUDENT_ID,CUM_AMOUNT) VALUES (307,26,1118,40)
    SELECT * FROM DUAL;
    The STUDENT_DETAILS table above stores the details of all the students; each student has a unique student ID.
    The PAYMENT_HEADER table saves the payment details as a document for each month, identified by a header ID.
    In the PAYMENT_DETAILS table, payment details are stored for the students who made a payment that month. This table holds no row for a student who has not paid in that month. It is connected to the PAYMENT_HEADER table through the header ID.
    The view should be such that when I pass the document number, it shows the names of all the students, but shows a payment amount only for those students who paid under the selected document number; for the others it should show null.
    I tried the following query:
    select * from (
    select sd.student_name,sd.DOB,sd.sex,sd.contact_number,pd.doc_num,pd.month_name,pd.cum_amount
    from hr.student_details sd left join
    (select hdr.doc_num,hdr.month_name,det.student_id,det.cum_amount
    from hr.payment_header hdr, hr.payment_details det
    where hdr.pay_header_id = det.header_id) pd on sd.student_id = pd.student_id)
    But when I pass filtering values like the document number, some students who should appear with null values are missing.
    select * from (
    select sd.student_name,sd.DOB,sd.sex,sd.contact_number,pd.doc_num,pd.month_name,pd.cum_amount
    from hr.student_details sd left join
    (select hdr.doc_num,hdr.month_name,det.student_id,det.cum_amount
    from hr.payment_header hdr, hr.payment_details det
    where hdr.pay_header_id = det.header_id) pd on sd.student_id = pd.student_id)
    where doc_num = 1 or doc_num is null;
    My requirement is that every time I use a document number in the where clause on the view, it should show the paid amount against the names of the students who paid in that month, and null for all the other students. The total number of students is 13, so it should always show all 13 students, even when I pass document number 2 or 3.
    Seeking your help.
    Regards
    Hawker

    Hi Frank,
    as per your advice, I am omitting the DOB, SEX, ACTIVE and CONTACT_NUMBER fields from the STUDENT_DETAILS table.
    I shall create separate tables and insert the desired output of the view for each where clause.
    First, let us create three tables, one for each desired output:
    1) For the desired output I want to see when I pass doc_num as 1
    create table HR.SDT_PAY_DET_DOC_ONE (
    STUDENT_ID NUMBER(10),
    STUDENT_NAME VARCHAR2(50),
    DOC_NUM NUMBER(5),
    C_AMOUNT NUMBER(10),
    primary key(STUDENT_ID)
    );
    2) For the desired output I want to see when I pass doc_num as 2
    create table HR.SDT_PAY_DET_DOC_TWO (
    STUDENT_ID NUMBER(10),
    STUDENT_NAME VARCHAR2(50),
    DOC_NUM NUMBER(5),
    C_AMOUNT NUMBER(10),
    primary key(STUDENT_ID)
    );
    3) For the desired output I want to see when I pass doc_num as 3
    create table HR.SDT_PAY_DET_DOC_THREE (
    STUDENT_ID NUMBER(10),
    STUDENT_NAME VARCHAR2(50),
    DOC_NUM NUMBER(5),
    C_AMOUNT NUMBER(10),
    primary key(STUDENT_ID)
    );
    Now I shall insert values in each of the above three tables:
    INSERT ALL
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1112,'James',1,40)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1113,'David',1,40)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1114,'Michael',1,40)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1115,'Joseph',null,null)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1116,'Juliet',null,null)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1117,'Monica',null,null)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1118,'William',null,null)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1119,'Karen',null,null)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1120,'Erika',null,null)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1121,'Noah',null,null)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1122,'Angelina',null,null)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1123,'Gabriela',null,null)
    INTO HR.SDT_PAY_DET_DOC_ONE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1124,'Sofia',null,null)
    SELECT * FROM DUAL;
    INSERT ALL
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1112,'James',2,80)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1113,'David',2,80)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1114,'Michael',2,80)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1115,'Joseph',2,40)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1116,'Juliet',2,40)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1117,'Monica',null,null)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1118,'William',null,null)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1119,'Karen',null,null)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1120,'Erika',null,null)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1121,'Noah',null,null)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1122,'Angelina',null,null)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1123,'Gabriela',null,null)
    INTO HR.SDT_PAY_DET_DOC_TWO(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1124,'Sofia',null,null)
    SELECT * FROM DUAL;
    INSERT ALL
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1112,'James',3,120)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1113,'David',3,120)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1114,'Michael',3,120)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1115,'Joseph',3,80)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1116,'Juliet',3,80)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1117,'Monica',3,40)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1118,'William',3,40)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1119,'Karen',null,null)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1120,'Erika',null,null)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1121,'Noah',null,null)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1122,'Angelina',null,null)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1123,'Gabriela',null,null)
    INTO HR.SDT_PAY_DET_DOC_THREE(STUDENT_ID,STUDENT_NAME,DOC_NUM,C_AMOUNT) VALUES (1124,'Sofia',null,null)
    SELECT * FROM DUAL;
    Thanks & Regards
    Hawker
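    Not a reply from the original thread, but a sketch of one way to build a view that produces the desired outputs above, using the tables already created: generate one row per (student, document) pair, then outer-join the payments, so that filtering on doc_num can no longer drop students.
    create or replace view hr.student_payment_v as
    select dn.doc_num,
           sd.student_id,
           sd.student_name,
           pd.cum_amount
    from hr.student_details sd
    cross join (select distinct doc_num from hr.payment_header) dn
    left join (select hdr.doc_num, det.student_id, det.cum_amount
               from hr.payment_header hdr
               join hr.payment_details det on det.header_id = hdr.pay_header_id) pd
      on pd.student_id = sd.student_id
     and pd.doc_num = dn.doc_num;
    -- All 13 students come back for any document number:
    select * from hr.student_payment_v where doc_num = 2 order by student_id;
    The one difference from the desired tables is that doc_num is filled in on every row (it comes from the generated pair, not from the payment), which is exactly what makes the filter safe.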

  • Read XML file + XSD file to create and load data in a table

    Hi gurus,
    What approach would you use to create and load data into a SQL table by reading the xml+xsd files?
    Can you provide me a running example?
    Is it better to stage the XML structure + XML data in the database before creating/loading the real SQL table? (The size of the XML file could be 1GB.)

    See the FAQ.
    For very large files make sure you:
    1. Use FTP to load.
    2. Ensure the schema is registered with storeVarrayAsTable="true" and maintainDOM="false".
    3. Install the latest 10.2.0.3.0 patch set.
    Or look for the SaxLoader example in the FAQ.
    -M
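    For a small-file flavor of "running example" (my own sketch, not from the FAQ; the students table and the element names are made up), XMLTable can shred an XMLType document straight into a relational table:
    -- :xml_doc is a bind variable holding the XML as a CLOB
    INSERT INTO students (student_id, student_name)
    SELECT x.student_id, x.student_name
    FROM XMLTABLE('/students/student'
           PASSING XMLTYPE(:xml_doc)
           COLUMNS student_id   NUMBER       PATH '@id',
                   student_name VARCHAR2(50) PATH 'name') x;
    For a 1GB file, though, the FTP/SaxLoader route above is the better fit, since it avoids materializing the whole document at once.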

  • Shadow tables that have been created via the new partitioning schema

    Hi,
    Complete Partitioning:
    In complete partitioning, the fact tables of the InfoCube are fully converted using shadow tables that have been created via the new partitioning schema.
    In the above explanation, what is the meaning of the shadow tables that perform the partitioning of an InfoCube?

    Hi
    Shadow tables have the namespace /BIC/4F<Name of InfoCube> or /BIC/4E<Name of InfoCube>.
    Complete Partitioning
    Complete Partitioning fully converts the fact tables of the InfoCube. The system creates shadow tables with the new partitioning schema and copies all of the data from the original tables into the shadow tables. As soon as the data is copied, the system creates indexes and the original table replaces the shadow table. After the system has successfully completed the partitioning request, both fact tables exist in the original state (shadow table), as well as in the modified state with the new partitioning schema (original table). You can manually delete the shadow tables after repartitioning has been successfully completed to free up the memory. Shadow tables have the namespace /BIC/4F<Name of InfoCube> or /BIC/4E<Name of InfoCube>.
    You can only use complete repartitioning for InfoCubes. A heterogeneous state is possible. For example, it is possible to have a partitioned InfoCube with non partitioned aggregates. This does not have an adverse effect on functionality. You can automatically modify all of the active aggregates by reactivating them.
    Hope it helps and is clear

  • Query to fetch all tables created by user

    Hi All,
    I am Joshi. I would like to write a query that returns all the tables created by me in one day. Can anyone give me the query to do so?
    Thanking you
    Joshi

    To get the list of tables owned by X and created on 16-May-2007:
    ALTER SESSION SET NLS_DATE_FORMAT='MM/DD/YYYY HH24:MI:SS';
    SELECT OBJECT_NAME,CREATED
    FROM DBA_OBJECTS
    WHERE OBJECT_TYPE='TABLE'
    AND OWNER='X'
    AND CREATED BETWEEN '05/16/2007 00:00:00' AND '05/16/2007 23:59:59';
    Change the date to the one of your choice
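    A variant of the same query that does not depend on the session's NLS_DATE_FORMAT, using USER_OBJECTS since the question is about tables created by the current user (my adaptation, not part of the original reply):
    SELECT object_name, created
    FROM user_objects
    WHERE object_type = 'TABLE'
    AND created >= TO_DATE('16-MAY-2007', 'DD-MON-YYYY')
    AND created <  TO_DATE('17-MAY-2007', 'DD-MON-YYYY');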
    Regards

  • Creating table partitions via Common Format Designer

    I am looking for a way to create table partitions via the Common Format Designer in my models.
    As far as I can see, this is not something that ODI can handle with the out-of-the-box install.
    Is this something that can be added as part of an action or similar?
    thanks
    uli

    Hi Uli,
    Partitions are not yet defined in the ODI metadata, but you could add a step to the DDL procedure generated by CFD that would handle the creation of the partitions.
    Thanks,
    Julien
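    For illustration, the kind of partition DDL such an added step might emit in place of the unpartitioned CREATE TABLE (a sketch; SALES_FACT and SALE_DATE are made-up names, not ODI-generated code):
    CREATE TABLE sales_fact (
      sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p_2008_q1 VALUES LESS THAN (TO_DATE('2008-04-01', 'YYYY-MM-DD')),
      PARTITION p_rest    VALUES LESS THAN (MAXVALUE)
    );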

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain, and let me know the T-codes. It's urgent.
    What are the data-loading performance issues we need to take care of? Please explain, and let me know the T-codes. It's urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage, with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Import fails with unable to extend table CUSTOM.CASA_TRAN_HIST_UPLD by 6999

    Hi,
    I have taken an export backup of a table from 9.2.0.4 on AIX and am trying to import it into 11.1.0.7.0 on AIX.
    While importing I'm getting the following error:
    ORA-01653: unable to extend table CUSTOM.CASA_TRAN_HIST_UPLD by 699912 in tablespace CUSTOM
    The table size is 37G, the total free space in the tablespace is 40G,
    and there is no index on the table.
    The following are some lines from the import log:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V09.02.00 via direct path
    import done in US7ASCII character set and UTF8 NCHAR character set
    import server uses AL32UTF8 character set (possible charset conversion)
    export server uses AL16UTF16 NCHAR character set (possible ncharset conversion)
    . importing DATAMIG's objects into CUSTOM
    . . importing table "CASA_TRAN_HIST_UPLD"
    IMP-00058: ORACLE error 1653 encountered
    ORA-01653: unable to extend table CUSTOM.CASA_TRAN_HIST_UPLD by 699912 in tablespace CUSTOM
    IMP-00028: partial import of previous table rolled back: 62844421 rows rolled back
    IMP-00017: following statement failed with ORACLE error 1917:
    "GRANT SELECT ON "CASA_TRAN_HIST_UPLD" TO "BSGUSER""
    IMP-00003: ORACLE error 1917 encountered
    ORA-01917: user or role 'BSGUSER' does not exist
    Import terminated successfully with warnings.
    Is there any way to resolve the issue?
    How do I change the NCHAR character set for the import?
    Thanks

    Hello,
    "which & how i can set character set for import."
    About the character set: it's a setting made at database creation. You may check it by using the following query on the source and target databases:
    select * from v$nls_parameters;
    The NLS_CHARACTERSET will give you the character set of the database.
    It cannot be changed easily. It may imply a database re-creation and an export/import of the data (see Note 225912.1).
    Else, when you export (with the Original Export/Import utility) it's recommended to set the NLS_LANG parameter.
    The NLS_LANG parameter has 3 components:
    - Language
    - Territory
    - Client Character Set
    A wrong setting of NLS_LANG may lead to conversion. However, starting with 9i, most data is exported with the character set of the database regardless of the NLS_LANG setting. The following note may give you some details about it:
    Export/Import and NLS Considerations [ID 15095.1]
    Hope this helps.
    Best regards,
    Jean-Valentin
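    The reply above covers the character-set question; for the ORA-01653 itself, the usual remedy (my note, not from the thread) is to give the CUSTOM tablespace room to grow - 40G of free space is cutting it close for a 37G table. A sketch with placeholder datafile paths:
    -- Add a datafile to the CUSTOM tablespace
    ALTER TABLESPACE custom
      ADD DATAFILE '/u01/oradata/ORCL/custom02.dbf' SIZE 8G;
    -- Or let an existing datafile grow on demand
    ALTER DATABASE DATAFILE '/u01/oradata/ORCL/custom01.dbf'
      AUTOEXTEND ON NEXT 512M MAXSIZE UNLIMITED;
    The ORA-01917 at the end is a separate issue: the grant fails because the role or user BSGUSER does not exist on the target database; create it before the import, or ignore that warning.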
