Data Load Tables - Shared components

Hello All,
There is a section called Data Loading in Shared Components in every application that says: "A Data Load Table is an existing table in your schema that has been selected for use in the data loading process, to upload data. Use Data Load Tables to define tables for use in the Data Loading create page wizard."
The question is: how can I select a table in my schema for use in the data loading process, to upload data using the wizard?
There is a packaged application called Sample Data Loading. That sample is used for specific tables, right? I tried to change those tables to the ones that I want to use, but I could not, because I could not add the tables I want to use.
Thank you.

Hi,
The APEX version is Application Express 4.2.3.00.07.
The data loading sample application created the Data Loading entry in Shared Components by default. In that part, I don't have the option to create a new entry for the table that I want to load data into. I tried to modify the existing entry by putting in the table that I want, but I couldn't, because its table is not editable.
I tried to modify the Data Loading page that the sample application created, but I couldn't. I can't change the source to the table that I want.
I have created a workspace at apex.oracle.com. If you want, I can give you credentials so you can help me, but I need your email to create the user for you. Thank you.
Bernardo

Similar Messages

  • Issue with Data Load Table

    Hi All,
    I am facing an issue with APEX 4.2.4 using the Data Load Table concept: in a table lookup I used the Where Clause option, but the where clause does not seem to be applied. Please help me with this.

    Hi all,
    It looks like this where clause does not filter the 'N' data. Please help me solve this, or advise me on it.

  • Where Clause in Table Lookups for Data Load

    Hello,
    In Shared Components I created a Data Load Table. In this Data Load Table I added a Table Lookup. On the page to edit the Table Lookup, there is a field called Where Clause. I tried to add a Where Clause to my Table Lookup in this field, but it seems to have no effect on the data load process.
    Does someone know how to use this Where Clause field?
    Thanks,
    Seb

    Hi,
    I'm having the same problem with the where clause being ignored in the table lookup. Is this a bug, and if so, is there a workaround?
    Thanks in advance

  • 4.2.3/.4 Data load wizard - slow when loading large files

    Hi,
    I am using the data load wizard to load csv files into an existing table. It works fine with small files up to a few thousand rows. When loading 20k rows or more the loading process becomes very slow. The table has a single numeric column for primary key.
    The primary key is declared at "Shared Components" -> Logic -> "Data Load Tables" and is recognized as "pk(number)" with "case sensitive" set to "No".
    While loading data, this configuration leads to the execution of the following query for each row:
    select 1 from "KLAUS"."PD_IF_CSV_ROW" where upper("PK") = upper(:uk_1)
    which can be found in the v$sql view while loading.
    This makes the loading process slow, because the UPPER function prevents any index from being used.
    It seems that the setting of "case sensitive" is not evaluated.
    Dropping the numeric index for the primary key and using a function based index does not help.
    Explain plan shows an implicit "to_char" conversion:
    UPPER(TO_CHAR("PK"))=UPPER(:UK_1)
    This is missing in the query but maybe it is necessary for the function based index to work.
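    For reference, a function-based index matching the expression from the explain plan might look like this (a sketch; the index name is illustrative):
    create index pd_if_csv_row_fbi
      on "KLAUS"."PD_IF_CSV_ROW" (upper(to_char("PK")));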
    Please provide a solution or workaround for the data load wizard to work with large files in an acceptable amount of time.
    Best regards
    Klaus

    Nevertheless, a bulk loading process is what I would really like to have as part of the wizard.
    If all of the CSV files are identical:
    use the Excel2Collection plugin (Process Type Plugin - EXCEL2COLLECTIONS)
    create a VIEW on the collection (makes it easier elsewhere)
    create a procedure (in a package) to bulk process it, as sketched below.
    The most important thing is to have, somewhere in the package (i.e. your code that is not part of APEX), information that clearly states which columns in the collection map to which columns in the table, the view, and the variables (APEX_APPLICATION.g_fxx()) used for tabular forms.
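    A minimal sketch of the view and the bulk process (the collection name 'EXCEL_DATA', the target table MY_TABLE, and its columns are assumptions):
    create or replace view v_excel_data as
    select c001 as id,
           c002 as name
      from apex_collections
     where collection_name = 'EXCEL_DATA';  -- collection name set in the plugin attributes (assumed)

    create or replace procedure load_excel_data is
    begin
      -- one set-based insert instead of row-by-row processing
      insert into my_table (id, name)
      select to_number(id), name
        from v_excel_data;
    end load_excel_data;
    /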
    MK

  • Data load component - add new column name alias

    Apex 4.2.2.
    Using the data load wizard, a list of columns and column name aliases has been created.
    When looking at the component (Shared Components / Data Load Tables / Column Name Aliases) it is possible to edit and delete a column alias there. Unfortunately, it does not seem possible to add a new alias. Am I overlooking something, or is there a workaround for this?

    Try this:
    REPORT ztest LINE-SIZE 80 MESSAGE-ID 00.
    " Buffer of user names: filled once, then searched instead of issuing
    " a SELECT SINGLE against v_usr_name for every document row.
    DATA: name_int TYPE TABLE OF v_usr_name WITH HEADER LINE.
    DATA: BEGIN OF it_bkpf OCCURS 0.
            INCLUDE STRUCTURE bkpf.
    DATA:   username LIKE v_usr_name-name_text,
          END   OF it_bkpf.
    SELECT * FROM bkpf INTO TABLE it_bkpf UP TO 1000 ROWS.
    " Collect the distinct user names occurring in the documents.
    LOOP AT it_bkpf.
      name_int-bname = it_bkpf-usnam.
      APPEND name_int.
    ENDLOOP.
    SORT name_int BY bname.
    DELETE ADJACENT DUPLICATES FROM name_int.
    LOOP AT it_bkpf.
      READ TABLE name_int WITH KEY
        bname = it_bkpf-usnam
        BINARY SEARCH.
      " Resolve each name text only once, then reuse the buffered value.
      IF name_int-name_text IS INITIAL.
        SELECT SINGLE name_text
          FROM v_usr_name
          INTO name_int-name_text
          WHERE bname = it_bkpf-usnam.
        MODIFY name_int INDEX sy-tabix.
      ENDIF.
      it_bkpf-username = name_int-name_text.
      MODIFY it_bkpf.
    ENDLOOP.
    Rob

  • Column number limitation in APEX 4.1 data loader?

    Hi all!
    Is there a limit on the number of columns in the APEX 4.1 data loading page?
    My DB object has 59 columns and they are all available, for example, in the unique column drop-down boxes of my data load table definition.
    On page two of the wizard-created data load pages, 'Data/Table Mapping', only 45 columns are shown. These columns are correctly inserted into my table; the last 14 columns are ignored.
    So does anyone know if there is a limitation, and can it be extended?
    Thanks for any answer and regards
    Kai

    No, I do not have a solution for it.
    Splitting the file into several files with fewer columns each, with the primary key repeated, and then stitching the tables together after the upload might be easier than other routes.
    And then there are always the good old SQL*Loader and external tables. Integrating these into APEX is not easy, though, as APEX runs on the server while the file is typically on the client's local disk.
    Regards,

  • Parallel data loading in direct mode

    Product: ORACLE SERVER
    Date written: 1999-08-10
    Parallel Data Loading in Direct Mode
    =======================================
    SQL*Loader supports parallel data loads in direct mode against the same table. Several sessions load data simultaneously in direct mode, which improves the load speed for large data volumes. Placing the data files on physically separate disks increases the benefit even further.
    1. Restrictions
    - Loading is only possible into tables that have no indexes.
    - Only APPEND mode is possible (replace, truncate, and insert modes are not supported).
    - The Parallel Query option must be installed.
    2. Usage
    Create a control file for each data file to be loaded, then run them one after another:
    $sqlldr scott/tiger control=load1.ctl direct=true parallel=true&
    $sqlldr scott/tiger control=load2.ctl direct=true parallel=true&
    $sqlldr scott/tiger control=load3.ctl direct=true parallel=true
    3. Constraints
    - If the enable parameter is used, constraints are re-enabled automatically once the data load finishes. However, the enabling sometimes fails, so the constraint status must always be checked afterwards.
    - If a primary key or unique key constraint is present, re-enabling it automatically after the load can take a long time because the index has to be built. For better performance, load only the data in parallel direct mode and then create the index separately in parallel, as sketched below.
    4. Storage allocation and caveats
    When loading data in direct mode, the following procedure is used:
    - A temporary segment is created based on the storage clause of the target table.
    - After the last load finishes, the empty, i.e. unused, part of the last allocated extent is trimmed.
    - The header information of the extents belonging to the temporary segment is changed and the high-water mark (HWM) is adjusted so that the extents become part of the target table.
    This extent allocation method causes the following problems:
    - A parallel data load does not use the first INITIAL extent allocated when the table was created.
    - The normal extent allocation rules are not followed: each process allocates an extent of the size defined by NEXT to start its load, and when a new extent is required it is sized based on PCTINCREASE, computed independently per process.
    - Severe fragmentation can occur.
    To reduce fragmentation and allocate storage efficiently:
    - Create the table with a small INITIAL of about 2-5 blocks.
    - In version 7.2 and later, specify the storage parameters in the OPTIONS clause, preferably with INITIAL and NEXT set to the same size:
    OPTIONS (STORAGE=(MINEXTENTS n
    MAXEXTENTS n
    INITIAL n K
    NEXT n K
    PCTINCREASE n))
    - If the OPTIONS clause is written in the control file, it must appear after the INSERT INTO TABLE clause.
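    As a hedged illustration of the advice in section 3, load the data first and build the index in parallel afterwards (table, column, and index names are assumptions):
    create unique index emp_pk_idx on emp (empno) parallel 4 nologging;
    alter table emp add constraint emp_pk primary key (empno) using index emp_pk_idx;
    alter index emp_pk_idx noparallel;  -- optionally reset the degree after the build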

    First, thanks for the hints. In the meantime I found some other documentation regarding my issue.
    As far as I understand, if I want to load in parallel, I have to create multiple InfoPackages and split up the records in the selection criteria, e.g.:
    - InfoPackage 1, Students 1 - 10.000
    - InfoPackage 2, Students 10.001 - 20.000
    ...and so on.
    Following that, I need to create a Process Chain that starts loading all packages at the same point in time.
    Now, when the extractor is called, there are two parts that it runs through:
    - Initialization of the extractor
    - Fetching of records
    (via the flag i_initflag in the extractor).
    In the initialization I want to run the pre-fetch module. I have already worked everything out regarding that. Only when the pre-fetch is finished will the actual data loading start.
    What I am not sure about is: is this flag (the i_initflag mentioned above) passed for each InfoPackage that is started?
    Jeroen

  • In which table is the information of Shared Components stored?

    Hi all,
    I want to get some information from 'Shared Components', especially the 'Version' in category 'Definition' -> 'Name', using an SQL query. In which table could I find this information?
    cheers,
    ben

    Ben,
    Check out the Application Express data dictionary views, in this case the view apex_applications against which the following query will show you the version attribute of an application XXX:
    select version from apex_applications where application_id=XXX;
    Queries against these views can be issued from the SQL Workshop after you authenticate to the workspace of interest or they can be issued from within your application.
    Scott

  • Sharing static data between all WD components

    Hello
    I'm looking for a way to share static data between all WD sessions of a WD application. Static data means, for instance, static (language-dependent) value lists. How can this be done in WD for ABAP?
    Is this recommended at all?
    Regards, Mathias

    Hi,
    if these lists are static, you can use Z-tables with a 'spras' (language) field in the key.
    Do the selection on those tables with sy-langu while loading the application to fetch the language-dependent lists
    (see, for example, table MAKT for material descriptions).
    If you mean you want to share this data between several WDA components of one application,
    you should use the same assistance class in all of them and fetch the data in the main component to store it in the assistance class.
    This class will be instantiated once, and all the other components will have access to the data (the instance is recognized automatically as of SP11).
    grtz,
    Koen

  • Open Hub (SAP BW) to SAP HANA through DB Connection data loading: the 'Delete data from table' option is not working

    Issue:
    I have an SAP BW system and an SAP HANA system.
    SAP BW connects to SAP HANA through a DB Connection (named HANA).
    Whenever I create an Open Hub destination of type DB Table with the help of the DB Connection, the table is created at the HANA schema level (L_F50800_D).
    I executed the Open Hub service without checking the 'Delete data from table' option: 16 records were loaded from BW to HANA.
    The second time I executed it from BW to HANA, there were 32 records (it appends).
    Then I executed the Open Hub service with the 'Delete data from table' option checked.
    Now I am getting the short dump DBIF_RSQL_TABLE_KNOWN.
    When going from one SAP BW system to another SAP BW system it works fine.
    Does this option work through a DB Connection or not?
    Please see the attachment along with this discussion and help me resolve this.
    From
    Santhosh Kumar

    Hi Ramanjaneyulu,
    First of all, thanks for the reply.
    The issue is at the Open Hub level (definition level, DESTINATION tab and FIELD DEFINITION):
    there is a check box that I have already selected. That is exactly my issue: even though it is selected,
    the deletion is not performed at the target level.
    SAP BW to SAP HANA via DB Connection:
    1. First run from BW, say 16 records: DTP executed, loaded to HANA, 16 records there as well.
    2. Second run from BW: the HANA side is appended, so 16 + 16 = 32.
    3. Therefore I selected the 'Delete data from table' check box at the Open Hub level.
    4. Now the DTP throws a short dump: DBIF_RSQL_TABLE_KNOWN.
    Please tell me how to resolve this. Is the 'Delete data from table' option applicable for HANA at all?
    Thanks
    Santhosh Kumar

  • FDMEE import error: "No periods were identified for loading data into table 'AIF_EBS_GL_BALANCES_STG'"

    Hi,
    We are having trouble while importing one ledger, 'GERMANY EUR GGAAP'. It works for Dec 2014, but while trying to import data for 2015 it gives an error.
    The import error shows: "RuntimeError: No periods were identified for loading data into table 'AIF_EBS_GL_BALANCES_STG'."
    I tried all the knowledge documents from Oracle Support, but no luck. Please help us resolve this issue, as it is occurring in our production system.
    I also checked all the period settings under Data Management > Setup > Integration Setup > Global Mapping and Source Mapping, and they all look correct.
    Also, it is only happening to one ledger; all the other ledgers are working fine without any issues.
    Thanks

    Hi,
    There are some support documents related to this issue.
    I would suggest you have a look at them.
    Regards

  • How can I load data into a table with SQL*Loader

    How can I load data into a table with SQL*Loader when the column data length is more than 255 bytes?
    When a column exceeds 255 bytes, the data cannot be inserted into the table by SQL*Loader.
    CREATE TABLE A (
    A VARCHAR2 ( 10 ) ,
    B VARCHAR2 ( 10 ) ,
    C VARCHAR2 ( 10 ) ,
    E VARCHAR2 ( 2000 ) );
    control file:
    load data
    append into table A
    fields terminated by X'09'
    (A , B , C , E )
    SQL*LOADER command:
    sqlldr test/test control=A_ctl.txt data=A.xls log=b.log
    datafile:
    column E is more than 255 bytes
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)
    1     1     1     1234567------(more than 255bytes)

    Check this out.
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96652/ch06.htm#1006961
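    The usual fix is to give the long field an explicit length in the control file, because SQL*Loader defaults character fields to CHAR(255) and rejects longer values. A sketch based on the control file above (the data file name is illustrative):
    load data
    infile 'A.txt'
    append into table A
    fields terminated by X'09'
    ( A,
      B,
      C,
      E CHAR(2000) )  -- explicit length; the default CHAR(255) rejects longer values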

  • Is it possible to upload only a few columns of a table through APEX data loading

    Hi All,
    I have to upload into the table from a CSV file. The table's primary key I have to fill myself; the rest I load through the user's uploaded file. Is it possible to load data into only the required columns of the table and fill the other columns from the backend? Or is there any other way to do this?

    Hi,
    Your query is not really clear.
    >
    Is it possible to do the data loading to the table only to required columns and fill the other columns from backend. Or is there any other way to do this?
    >
    How do you plan to "link" the rows from these 2 sets of data in the "backend"? There has to be a way to have a relation between them.
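    If the remaining columns can simply be derived at insert time (for example a sequence-based key and audit columns), one common approach is a before-insert trigger, so the data load only has to supply the uploaded columns. A sketch (table, sequence, and column names are assumptions; the direct sequence assignment needs Oracle 11g or later):
    create or replace trigger my_table_bi
      before insert on my_table
      for each row
    begin
      if :new.id is null then
        :new.id := my_seq.nextval;         -- fill the primary key from a sequence
      end if;
      :new.created_by := v('APP_USER');    -- fill audit columns from the APEX session
    end;
    /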
    Regards,

  • Comparison of data loading techniques - SQL*Loader & External Tables

    Below are two techniques for loading data from flat files into Oracle tables.
    1) SQL*Loader:
    a. Place the flat file (.txt or .csv) in the desired location.
    b. Create a control file:
    LOAD DATA
    INFILE 'Mytextfile.txt'       -- file containing the table data; specify the path correctly, it could be a .csv as well
    APPEND                        -- or TRUNCATE, based on the requirement
    INTO TABLE oracle_tablename
    FIELDS TERMINATED BY ','      -- or whatever delimiter the input file uses
    OPTIONALLY ENCLOSED BY '"'
    (field1, field2, field3)
    c. Now run Oracle's sqlldr utility at the operating system command prompt:
    sqlldr username/password control=filename.ctl
    d. The data can be verified by selecting it from the table:
    Select * from oracle_table;
    2)     External Table:
    a.     Place the flat file (.txt or .csv) on the desired location.
    abc.csv
    1,one,first
    2,two,second
    3,three,third
    4,four,fourth
    b.     Create a directory
    create or replace directory ext_dir as '/home/rene/ext_dir'; -- path where the source file is kept
    c.     After granting appropriate permissions to the user, we can create external table like below.
    create table ext_table_csv (
      i number,
      n varchar2(20),
      m varchar2(20)
    )
    organization external (
      type oracle_loader
      default directory ext_dir
      access parameters (
        records delimited by newline
        fields terminated by ','
        missing field values are null
      )
      location ('file.csv')
    )
    reject limit unlimited;
    d.     Verify data by selecting it from the external table now
    select * from ext_table_csv;
    The external tables feature is a complement to the existing SQL*Loader functionality.
    It allows you to:
    •     Access data in external sources as if it were in a table in the database.
    •     Merge a flat file with an existing table in one statement.
    •     Sort a flat file on its way into a table you want compressed nicely.
    •     Do a parallel direct path load without splitting up the input file.
    Shortcomings:
    •     External tables are read-only.
    •     No data manipulation language (DML) operations or index creation are allowed on an external table.
    Using SQL*Loader you can:
    •     Load the data from a stored procedure or trigger (an INSERT is not sqlldr).
    •     Do multi-table inserts.
    •     Flow the data through a pipelined PL/SQL function for cleansing/transformation.
    Comparison for data loading:
    To make the loading operation faster, the degree of parallelism can be set to any number, e.g. 4, as sketched below.
    So when you create the external table with a parallel degree, the database will divide the file to be read among four processes running in parallel. This parallelism happens automatically, with no additional effort on your part, and is really quite convenient. To parallelize this load using SQL*Loader, you would have had to divide your input file into multiple smaller files manually.
    Conclusion:
    SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table. However, we can always copy the data from external tables to Oracle tables using DB links.
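    For example, the parallel read can be combined with a direct-path insert (target_table is an assumption):
    alter table ext_table_csv parallel 4;   -- let the database split the file read
    alter session enable parallel dml;
    insert /*+ append */ into target_table
    select * from ext_table_csv;
    commit;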

    Please let me know your views on this.

  • Loading this xml data into tables

    Hello,
    I am having a problem loading this XML file into tables. The xml file structure is
    <FILE>
    <ACCESSION>
    some ids
    <INSTANCE>
    some data
    <VARIATION>
    </VARIATION>
    <VARIATION>
    </VARIATION> variation gets repeated a number of times
    <ASSEMBLY>
    </ASSEMBLY>
    <ASSEMBLY>
    </ASSEMBLY> Assembly gets repeated a number of times.
    </INSTANCE>
    </ACCESSION>
    </FILE>
    I created a table with the following structure (the object types have to exist before they are referenced):
    create or replace type variation_type as object (
      value     varchar2(2),
      count     number(10),
      frequency number(10),
      pop_id    varchar2(10)
    );
    /
    -- a similar type was created for assembly (assembly_type)
    create or replace type instance_type as object (
      method    varchar2(20),
      class     varchar2(20),
      source    varchar2(20),
      num_char  number(10),
      variation variation_type,
      assembly  assembly_type
    );
    /
    create table accession (
      accession_id varchar2(20),
      instance     instance_type
    );
    When I load it, I could only store the first variation data but not the subsequent ones. Similarly for assembly I could only store the first data but not the subsequent ones.
    Could anyone let me know how I could store this data into tables? I have also included a sample XML file in this message.
    Thank You for your help.
    Rama.
    Here is the sample xml file.
    <?xml version="1.0" ?>
    <FILE>
      <ACCESSION>
        <ACCESSION_ID>accid1</ACCESSION_ID>
        <INSTANCE>
          <METHOD>method1</METHOD>
          <CLASS>class1</CLASS>
          <SOURCE>source1</SOURCE>
          <NUM_CHAR>40</NUM_CHAR>
          <VARIATION>
            <VALUE>G</VALUE>
            <COUNT>5</COUNT>
            <FREQUENCY>66</FREQUENCY>
            <POP1>pop1</POP1>
            <POP2>pop1</POP2>
          </VARIATION>
          <VARIATION>
            <VALUE>C</VALUE>
            <COUNT>2</COUNT>
            <FREQUENCY>33</FREQUENCY>
            <POP_ID1>pop2</POP_ID1>
          </VARIATION>
          <ASSEMBLY>
            <ASSEMBLY_ID>1</ASSEMBLY_ID>
            <BEGIN>180</BEGIN>
            <END>180</END>
            <TYPE>2</TYPE>
            <ORI>-</ORI>
            <OFFSET>0</OFFSET>
          </ASSEMBLY>
          <ASSEMBLY>
            <ASSEMBLY_ID>2</ASSEMBLY_ID>
            <BEGIN>235</BEGIN>
            <END>235</END>
            <TYPE>2</TYPE>
            <ORI>-</ORI>
            <OFFSET>0</OFFSET>
          </ASSEMBLY>
        </INSTANCE>
      </ACCESSION>
    </FILE>

    Hello,
    I figured out how to load this XML file by using cast(multiset()), so never mind.
    Thank You.
    Rama.
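    For readers hitting the same problem, the cast(multiset()) shape might look like this (a sketch, assuming a nested table type variation_tab over variation_type, an XMLTABLE-based shred, and a single ACCESSION per document; :xml_doc is an illustrative bind holding the file contents):
    -- create or replace type variation_tab as table of variation_type;
    select acc.accession_id,
           cast(multiset(
                  select variation_type(v.val, v.cnt, v.freq, v.pop_id)
                    from xmltable('/FILE/ACCESSION/INSTANCE/VARIATION'
                                  passing xmltype(:xml_doc)
                                  columns val    varchar2(2)  path 'VALUE',
                                          cnt    number       path 'COUNT',
                                          freq   number       path 'FREQUENCY',
                                          pop_id varchar2(10) path 'POP_ID1') v
                ) as variation_tab) as variations
      from xmltable('/FILE/ACCESSION'
                    passing xmltype(:xml_doc)
                    columns accession_id varchar2(20) path 'ACCESSION_ID') acc;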
