Creation of staging table - quickest way.

Hi,
I need to create a staging table with roughly 370 fields. The source specs are not clear, and neither are some of the staging table's fields. So effectively, I first need to understand both the staging fields and the source fields. The end user has been given a target date based on some generic guesswork (not by me).
I would like to know the best approach to complete the activity more quickly - at the very least, to document all the staging fields and source fields. What I have now is one Excel file with four columns: Staging Field, Source Table and Field, Staging Field Type, and Staging Field Length. Of the 370 staging fields, I have completed only 50%, and the target date is approaching fast - I have not even started developing the Oracle package yet.
This may be a generic question, but in order to meet the deadline, could you please shed some light on the quickest way (or any tips) to complete the mapping documentation and create the staging table?
Thanks in advance,
Manoj.

MDixit wrote: [quoting the original post above]
The first thing that comes to mind is the unclear specifications. (Actually, the first thing that came to my mind was that tables have columns, not fields, but anyway...)
You need to clarify what is coming from the source system before you can intelligently map it to your target system. You may have to push back on your client and tell them that the deadline can't be met unless they provide more information about their source system.
How did they choose those 370 columns? How did they know these needed to be part of whatever process you are completing? If they know these columns need to be moved to a target system, then they should be able to tell you why.
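One time-saver worth adding (a sketch, not from this thread, assuming you have SELECT access to the source and staging schemas): much of the four-column spreadsheet can be generated from the data dictionary instead of typed by hand. Schema and table names below are placeholders.

-- Hypothetical sketch: dump column metadata to seed the mapping spreadsheet
SELECT table_name,
       column_name,
       data_type,
       data_length
FROM   all_tab_columns
WHERE  owner = 'SOURCE_SCHEMA'                       -- placeholder schema
AND    table_name IN ('SRC_TABLE_1', 'SRC_TABLE_2')  -- placeholder tables
ORDER  BY table_name, column_id;

Spool the output to CSV and paste it into the Excel file; that leaves only the genuinely unclear fields for manual analysis.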

Similar Messages

  • What's the quickest way to export packages, tables, etc. from one environment

    Hi
    what's the quickest way of moving loads of packages, tables, indexes, etc. from one environment to another?
    I did some things in an apex.oracle.com workspace to test APEX; now I want to move it across to my XE installation.

    Hello,
    2 'fast' options really -
    1) Export of application + Export/DataPump of schema
    This works if you want a complete 'mirror' of the schema objects from one environment to another
    2) Supporting Objects
    Bundle all your requirements together with the application export.
    The Supporting Objects feature absolutely rocks and yet very very (very!) few people seem to use it.
    John.
    http://jes.blogs.shellprompt.net
    http://apex-evangelists.com
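    For option 1, the schema half can also be scripted from inside the database. A minimal sketch using the DBMS_DATAPUMP API, assuming a schema named MY_SCHEMA and the standard DATA_PUMP_DIR directory object (both placeholders; the expdp command-line utility does the same job):
    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      -- Open a schema-mode export job
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      -- Dump file goes to an existing directory object
      DBMS_DATAPUMP.ADD_FILE(h, 'my_schema.dmp', 'DATA_PUMP_DIR');
      -- Restrict the export to the schema we care about
      DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''MY_SCHEMA'')');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
    END;
    /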

  • How to store data file name in one of the columns of staging table

    My requirement is to load data from a .dat file into an Oracle staging table. I have done the following steps:
    1. Created control file and stored in bin directory.
    2. Created data file and stored in bin directory.
    3. Registered a concurrent program with execution method as SQL*Loader.
    4. Added the concurrent program to request group.
    I am passing the file name as a parameter to the concurrent program. When I run the program, the data is loaded into the staging table correctly.
    Now I want to store the filename (which is passed as a parameter) in one of the columns of the staging table. I tried different ways found through Google, but none of them worked. I am using the control file below:
    OPTIONS (SKIP = 1)
    LOAD DATA
    INFILE '&1'
    APPEND INTO TABLE XXCISCO_SO_INTF_STG_TB
    FIELDS TERMINATED BY ","
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (COUNTRY_NAME
    ,COUNTRY_CODE
    ,ORDER_CATEGORY
    ,ORDER_NUMBER
    ,RECORD_ID "XXCISCO_SO_INTF_STG_TB_S.NEXTVAL"
    ,FILE_NAME CONSTANT "&1"
    ,REQUEST_ID "fnd_global.conc_request_id"
    ,LAST_UPDATED_BY "FND_GLOBAL.USER_ID"
    ,LAST_UPDATE_DATE SYSDATE
    ,CREATED_BY "FND_GLOBAL.USER_ID"
    ,CREATION_DATE SYSDATE
    ,INTERFACE_STATUS CONSTANT "N"
    ,RECORD_STATUS CONSTANT "N"
    )
    I want to store the file name in the FILE_NAME column stated above. I tried with and without CONSTANT, using "$1", "&1", ":$1", ":&1", &1, $1..... but none of them worked. Please suggest a solution for this.
    Thanks,
    Abhay

    Please post details of your OS, database and EBS versions. There is no easy way to achieve this.
    Please see previous threads on this topic:
    SQL*Loader to insert data file name during load
    Sql Loader with new column
    HTH
    Srini
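    A workaround sometimes used in this situation (a sketch under assumptions, not a confirmed solution from this thread): since the concurrent program already receives the file name as a parameter, skip the control file entirely and stamp the column with a post-load UPDATE from the program's wrapper, assuming this run's rows are identifiable (here, by a NULL FILE_NAME):
    UPDATE xxcisco_so_intf_stg_tb
    SET    file_name = :p_file_name   -- the parameter passed to the concurrent program
    WHERE  file_name IS NULL;         -- assumes only this run's rows are still NULL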

  • Synchronizing Updates on a Staging Table

    Please help me out with the resolving the following issue:
    A load script is running for moving records from a data file to a staging table.
    After this script completes, there is code to update two fields of the staging table.
    To do this the shell script runs a script (generate_ranges.sql). It takes a parameter of 5000000 and creates ranges based on this passed-in number, up to the total number of rows in the staging table. So say the staging table has 65,000,000 rows.
    This script will create a file that looks like the following (when 5000000 is passed in):
    1 | 5000000
    5000001 | 10000000
    10000001 | 15000000
    15000001 | 20000000
    20000001 | 25000000
    25000001 | 30000000
    30000001 | 35000000
    35000001 | 40000000
    40000001 | 45000000
    45000001 | 50000000
    50000001 | 55000000
    55000001 | 60000000
    60000001 | 65000000
    The script then reads that file row by row and calls a shell script, passing in each range. So in this case there are 13 ranges, which means 13 separate updates on the staging table running in the background.
    The first updates rows 1 - 5000000, the second rows 5000001 - 10000000, and so on.
    So there are 13 updates happening behind the scenes.
    The problem is that the script has no way to know that all the updates have completed successfully before proceeding. Right now I manually check that all the updates completed and then restart the script at the next step. We want the script to verify automatically that all the updates are done and then move on. So we need a way to count the number of candidate updates (right now 13, but it could be 14 or more in the future) and to know when all of them have completed. The update for rows 1 - 5000000 may take 30 minutes while the update for rows 5000001 - 10000000 takes 35 minutes; all the updates run in parallel, and only after all 13 parallel updates are complete can the script proceed with the subsequent steps.
    So please help me out with fixing this problem programmatically.
    Thanks for your cooperation in advance.
    Regards,
    Ayan.

    Ayan,
    Are you really sure you want to update 65 million rows?
    Alternative: create table as select <columns with 2 columns updated> from staging table;
    While using this approach, you probably don't need to split the update.
    Regards,
    Rob.
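    For illustration, a minimal sketch of Rob's alternative (table and column names are placeholders, and the two expressions stand in for whatever the updates compute):
    -- Write the corrected rows once instead of updating 65 million rows in place
    CREATE TABLE staging_new AS
    SELECT col1,
           col2,
           UPPER(col3)      AS col3,   -- first "updated" column (placeholder logic)
           TRUNC(load_date) AS col4    -- second "updated" column (placeholder logic)
    FROM   staging;
    -- Then swap: DROP TABLE staging; RENAME staging_new TO staging;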

  • Sliding window scenario in PTF vs. availability of recently loaded data in the staging table for reporting purposes

    Hello everybody, I am a SQL Server DBA and I am planning to implement table partitioning on some of our large tables in our data warehouse. I am thinking of designing it using the sliding window scenario. I do have one concern though: I think the staging tables we use for loading new data and for switching out the old partition are going to be non-partitioned, right? I don't have an issue with the second staging table, the one used for switching out the old partition. My concern is with the first staging table, the one we use for switch-in purposes. Since this table is non-partitioned and holds the new data, how are we going to access this data for reporting purposes before we switch it into our target partitioned table? Say this staging table holds one month's worth of data and we switch it in at the end of the month. Correct me if I am wrong, but one way I can think of to access this non-partitioned staging table is by creating views, though we don't want to change our code.
    Please share your thoughts and experiences.
    We really appreciate your help.

    Hi BG516,
    According to your description, you want to implement table partitioning on some of the large tables in your data warehouse, and you need the partitioned table to hold only one month of data; please correct me if I have misunderstood anything.
    In this case, you can create a non-partitioned table and import the records that are more than one month old into it, leaving the records less than one month old in the table in your data warehouse. You would then create a job to copy the data from the partitioned table into the non-partitioned table on the last day of each month, so that the partitioned table contains only the data for the current month. Please refer to the links below for details.
    http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/
    https://msdn.microsoft.com/en-us/library/ms190268.aspx?f=255&MSPPError=-2147217396
    If this is not what you want, please provide us more information, so that we can make further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support
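    On the reporting concern itself, the view approach mentioned in the question is workable. A hedged sketch (all object names hypothetical): expose one view that unions the partitioned table with the switch-in staging table, so reports see the current month before the switch happens. Since partition switching already requires the staging table to match the partitioned table's structure, the two SELECT lists line up by construction.
    CREATE VIEW dbo.FactSalesAll
    AS
    SELECT *   -- rows already switched into the partitioned table
    FROM   dbo.FactSales
    UNION ALL
    SELECT *   -- current-month rows still in the switch-in staging table
    FROM   dbo.FactSales_Staging;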

  • Front End not reflecting previously deleted entry while entering into Staging table

    Let's say I have a table named 'Sales' as my source. It contains the columns Sal_ID and Sal_DESC.
    There is a front-end entity created as Master Sales in the model project. The corresponding staging table in the database is 'stg.Sales_Leaf'.
    Now, I have entered data into stg.Sales_Leaf from the Sales table, with Sal_ID and Sal_DESC as the 'CODE' and 'NAME' values, and my ImportType is '0'.
    As the Sales table contains 23 rows, these are imported to stg.Sales_Leaf with ImportType '0'.
    I am able to see them in the front end also.
    The problem starts now. I delete some rows directly from the front end, the ones with CODE values 22 and 23.
    These get deleted from the front-end view, but then I load the same 23 rows into stg.Sales_Leaf again with the same data (after truncating it).
    I should be able to see 23 records in the front end, but it is showing only 21 records.
    The two records with CODE values 22 and 23 are not reflected in the front end.
    I think this is because the 'mdm.<table_code>' table associated with stg.Sales_Leaf shows a Status_ID of 2.
    Please tell me how I can get this data reflected in the front end.

    All error numbers between -20000 and -20999 are not Oracle errors but rather application errors coded by whoever wrote your software.
    As previously suggested ... see your DBA or manager.

  • Help needed in Loading excel data to staging table from OAF Page

    Hi All,
    We have a requirement from the client to load Excel sheet data into a staging table using an OAF page.
    We were able to load a CSV file into the staging table via OAF. The approach we used: we created an item of style 'messageFileUpload', which picks the CSV file from the desktop, and we wrote logic in the controller to place the file on the server and then submit a concurrent program to load the data into the staging table.
    But the client wants data from the Excel file to be loaded into the staging table. Is there any way (approach) by which we can convert the Excel file data into a .CSV file using OAF?
    Any help or pointers on this will be highly appreciated.
    Thanks,
    Chethana

    Hi,
    Read through this :
    Need to upload a CSV/Excel to a table in OAF page
    Thanks,
    Gaurav

  • How can I INSERT INTO from Staging Table to Production Table

    I’ve got a Bulk Load process which works fine, but I’m having major problems downstream.
    Almost everything is Varchar(100), and this works fine. 
    Except for these fields:
    INDEX SHARES, INDEX MARKET CAP, INDEX WEIGHT, DAILY PRICE RETURN, and DAILY TOTAL RETURN
    These fields must be some kind of numeric, because I need to perform sums on them.
    Here’s my SQL:
    CREATE TABLE [dbo].[S&P_Global_BMI_(US_Dollar)] (
        [CHANGE]             VARCHAR(100),
        [EFFECTIVE DATE]     VARCHAR(100),
        [COMPANY]            VARCHAR(100),
        [RIC]                VARCHAR(100),
        -- etc.
        [INDEX SHARES]       NUMERIC(18, 12),
        [INDEX MARKET CAP]   NUMERIC(18, 12),
        [INDEX WEIGHT]       NUMERIC(18, 12),
        [DAILY PRICE RETURN] NUMERIC(18, 12),
        [DAILY TOTAL RETURN] NUMERIC(18, 12)
    )
    From the main staging table, I’m writing data to 4 production tables.
    CREATE TABLE [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)] (
        [CHANGE]             VARCHAR(100),
        [EFFECTIVE DATE]     VARCHAR(100),
        [COMPANY]            VARCHAR(100),
        [RIC]                VARCHAR(100),
        -- etc.
        [INDEX SHARES]       FLOAT(20),
        [INDEX MARKET CAP]   FLOAT(20),
        [INDEX WEIGHT]       FLOAT(20),
        [DAILY PRICE RETURN] FLOAT(20),
        [DAILY TOTAL RETURN] FLOAT(20)
    )
    INSERT INTO [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)]
    SELECT [CHANGE],
           -- etc.
           [DAILY TOTAL RETURN]
    FROM   [dbo].[S&P_Global_BMI_(US_Dollar)]
    WHERE  ISNUMERIC([EFFECTIVE DATE]) = 1
    AND    [CHANGE] IS NULL
    AND    [COUNTRY] <> 'US'
    AND    ([SIZE] = 'L' OR [SIZE] = 'M')
    The Bulk Load is throwing errors like this (unless I make everything Varchar):
    Bulk load data conversion error (truncation) for row 7, column 23 (INDEX SHARES).
    Msg 4863, Level 16, State 1, Line 1
    When I try to load data from the staging table to the production table, I get this.
    Msg 8115, Level 16, State 8, Line 1
    Arithmetic overflow error converting varchar to data type numeric.
    The statement has been terminated.
    There must be an easy way to overcome this, right?
    Please advise!
    Thanks!!
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    Nothing is returned. Everything is VARCHAR(100). The problem is this:
    If I use FLOAT(18) or REAL, I get exponential numbers, which are useless to me.
    If I use DECIMAL(18,12) or NUMERIC(18,12), I get errors:
    Msg 4863, Level 16, State 1, Line 41
    Bulk load data conversion error (truncation) for row 7, column 23 (INDEX SHARES).
    Msg 4863, Level 16, State 1, Line 41
    Bulk load data conversion error (truncation) for row 8, column 23 (INDEX SHARES).
    Msg 4863, Level 16, State 1, Line 41
    Bulk load data conversion error (truncation) for row 9, column 23 (INDEX SHARES).
    There must be some data type that fits this!
    Here's a sample of what I'm dealing with.
    -0.900900901
    9.302325581
    -2.648171501
    -1.402805723
    -2.911830584
    -2.220960866
    2.897762349
    -0.219640074
    -5.458448607
    -0.076626094
    6.710940231
    0.287200186
    0.131682908
    0.124276221
    0.790818723
    0.420505119
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
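    For completeness, a pattern often used for this kind of load (a sketch under assumptions, not a confirmed fix from this thread): keep the staging table all VARCHAR(100) so the bulk load never fails, and convert during the INSERT...SELECT. Note that NUMERIC(18, 12) leaves only 6 digits before the decimal point, a common cause of this overflow; DECIMAL(18, 9) fits the sample values shown above. On SQL Server 2012 or later, TRY_CONVERT yields NULL instead of an error for non-numeric text:
    INSERT INTO [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)] ([INDEX SHARES])  -- etc.
    SELECT TRY_CONVERT(DECIMAL(18, 9), [INDEX SHARES])  -- NULL when the text isn't numeric
    FROM   [dbo].[S&P_Global_BMI_(US_Dollar)];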

  • What is the quickest way to incorporate a new template's styles into an existing template?

    Recently my company developed a new template. I am creating a document using much content that already exists in an older template with different paragraph styles, table formats and page formats. What is the quickest way to incorporate the new styles so I don't have to individually update the table contents? The paragraph catalogs and tables have different titles in the two documents.
    Thanks,
    Niall.

    Do you know where I can find a copy of Template Mapper? I went to http://ig5authoringtools.com/plugin-directory/single-sourcing/templatemapper/ and it is not there.
    Thanks in advance

  • Pros/Cons of replicating to files versus staging tables

    I am new to GoldenGate and am trying to figure out the pros and cons of replicating to flat files to be processed by an ETL tool versus replicating directly to staging tables. We are using GoldenGate to source data from multiple transaction systems into flat files and then using Informatica to load thousands of flat files into our ODS staging area. I am trying to figure out whether it would be better to push the data directly to staging tables, and I am not sure which is better in terms of recovery, reconciliation, etc. Any advice or thoughts on this would be appreciated.

    Hi,
    My suggestion would be to push the data from the multiple source systems directly to staging tables and then populate the target system using an ELT tool like ODI.
    Oracle Data Integrator can be combined with Oracle GoldenGate (OGG), which provides cross-platform data replication and changed data capture. Oracle GoldenGate works in a similar way to Oracle's asynchronous change data capture but handles greater volumes and works across multiple database platforms.
    Source -> Staging -> Target
    ODI-EE supports all leading data warehousing platforms, including Oracle Database, Teradata, Netezza, and IBM DB2. This is complemented by the Oracle GoldenGate architecture, which decouples source and target systems, enabling heterogeneity of databases as well as operating systems and hardware platforms. Oracle GoldenGate supports a wide range of database versions for Oracle Database, SQL Server, DB2 z/Series and LUW, Sybase ASE, Enscribe, SQL/MP and SQL/MX, and Teradata, running on Linux, Solaris, UNIX, Windows, and HP NonStop platforms, as well as many data warehousing appliances including Oracle Exadata, Teradata, Netezza, and Greenplum. Companies can quickly and easily add new or different database sources and target systems to their configurations by simply adding new Capture and Delivery processes.
    ODI-EE and Oracle GoldenGate combined enable you to rapidly move transactional data between enterprise systems:
    Real-time data. - Immediately capture, transform, and deliver transactional data to other systems with subsecond latency. Improve organizational decision-making through enterprise-wide visibility into accurate, up-to-date information.
    Heterogeneous. - Utilize heterogeneous databases, packaged or even custom applications to leverage existing IT infrastructure. Use Knowledge Modules to speed the time of implementation.
    Reliability. - Deliver all committed records to the target, even in the event of network outages. Move data without requiring system interruption or batch windows. Ensure data consistency and referential integrity across multiple masters, back-up systems, and reporting databases.
    High performance with low impact. - Move thousands of transactions per second with negligible impact on source and target systems. Transform data at high performance and efficiency using E-LT. Access critical information in real time without bogging down production systems.
    Please refer to below links for more information on configuration of ODI-OGG.
    http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_gg_integration/odi_gg_integration.htm
    http://www.biblogs.com/2010/03/22/configuring-odi-10136-to-use-oracle-golden-gate-for-changed-data-capture/
    Hope this information helps.
    Thanks & Regards
    SK

  • Staging Table Challenge in PL SQL Proc

    Hi Guys,
    Need your idea for my below requirement.
    I have two DBs, and daily we sync data from DB1 to DB2 (similar structures) using some plain SQL queries.
    We sync only a few tables, and only a few columns, based on some conditions (and we are using nearly 25 temp tables to copy the data), but we have been unable to track which data is getting updated daily.
    We badly need to track the data that is getting updated daily, so please suggest how I could do this.
    Staging tables? Note: we can't maintain staging tables for all the temp tables, and every now and then we change the DB table structures as well.
    Or is there some other way to achieve this?
    In the end, we need the data that was updated each day, and reports on that data.
    Please help me.
    Cheers,
    Naresh

    Naresh wrote: [quoting the original post above]
    Change Data Capture
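    To make that one-line answer a little more concrete: Change Data Capture records DML changes on the source tables so a downstream job can read only the deltas. As a purely illustrative alternative when full CDC is not available (all names hypothetical, not from the thread), a plain DML trigger can log which rows changed each day:
    CREATE OR REPLACE TRIGGER trg_emp_audit
    AFTER INSERT OR UPDATE OR DELETE ON emp   -- hypothetical source table
    FOR EACH ROW
    BEGIN
      -- emp_change_log(change_date, change_type, emp_id) is assumed to exist
      INSERT INTO emp_change_log (change_date, change_type, emp_id)
      VALUES (SYSDATE,
              CASE WHEN INSERTING THEN 'INSERT'
                   WHEN UPDATING  THEN 'UPDATE'
                   ELSE 'DELETE' END,
              NVL(:NEW.emp_id, :OLD.emp_id));
    END;
    /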

  • Quickest way to see if a city (52000 records) is inside a string

    Hi,
    I am reading email lines in C# with Interop.Outlook, and I need to know if a line contains a city. The city table has around 52000 records. SQL Server 2012.
    Thanks in advance

    Think of a table with 52000 cities, with the name field as the primary key:
    seq    name
    1      Lisbon
    2      London
    3      Paris
    4      Venice
    5      Istanbul
    6      Prague
    7      Florence
    ...
    52000  Setubal
    Now I read one email line that says
    "I will be at Prague the 20 June. Get me at Airport"
    another line read
    "I will arrive at Lisbon the 25 June"
    I want the quickest way to get the city Prague after reading the first line and
    Lisbon after reading the second.
    Thanks
    N
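    No solution was posted in this thread, but a set-based lookup is a reasonable sketch (table and column names are hypothetical; with a leading wildcard, LIKE cannot use an index seek, so SQL Server full-text search is worth considering at higher volumes):
    -- Find any known city mentioned in one email line
    DECLARE @line NVARCHAR(4000) = N'I will be at Prague the 20 June. Get me at Airport';
    SELECT c.name
    FROM   dbo.Cities AS c
    WHERE  @line LIKE '%' + c.name + '%';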

  • Quickest way to resizing 30 documents

    I have 30 documents (400-odd pages in all) that all need to be completely re-artworked to a smaller size, but not one with the same height and width proportions. As a starting point I want to resize all documents to 50%. Is there an AppleScript or similar software that will make this task simple?
    Any help would be welcome.
    Thanks

    I would hesitate to let any script run rampage on this.
    By far the easiest way is printing the document to PDF and placing that, or directly placing the original ID document as an image, into a new document of the correct size. But that restricts you to proportional sizing only -- your 50% should work fine, but if you enter different scales for horizontal and vertical, you will see text and images distort. And (since you might ask), no, a script would not help here.
    There are a couple of scripts that place entire PDFs automatically (search in the script repository).
    There is no easy way to reformat a document. The only help you can get is by enabling Layout Adjustment, and its use is very limited. It does its work flawlessly when dealing with plain text only (say, a novel), but as soon as you have images and/or tables in floating boxes, it resizes them (in a haphazard way -- tables are not re-fitted), and there is no way it can keep them "on the same page as the reference", or anything like that.
    If you are looking for the quickest way, and don't care about quality, go for proportionally resized PDFs (using a script). If you do care for some quality, do not resize them out of proportion -- and you can still use a script. But if you want to do any editing at all -- no matter how tiny -- you will have to resize and reformat the document manually, no way around it.

  • Copy from staging table to multiple tables

    We are using an SSIS package to fast load our data source into a staging table for processing.
    The reason we are using a staging table is that we need to copy the data from staging to our actual DB tables (4 in total), and the insertion order matters as we have foreign key relationships.
    When copying the data from our staging table, should we enumerate through all the records and use an insert-select method for each row or is there a more efficient way to do this?

    Our raw data source is an .mdb file and we are using SSIS to fast load it into SQL Server; we are looking to transform the data set into 3 tables (using a stored proc):
    Site (SiteID, Name)
    Gas (ID, Date, Time, GasField1, GasField2....., SiteID)
    GenSet (ID, Date, Time, GenSetField1, GenSetField2....., SiteID)
    Each record in our raw data source contains a Name field which identifies the Site. We only need to add a new site to the Site table if it does not already exist. This is already coded and working using insert-select and NOT EXISTS.
    We now need to iterate over all records, extract a subset of data for the Gas table and a subset for the GenSet table, and link each row with the associated SiteID using the Name field.
    The insertion order should be the Site table first, then the remaining tables.
    Are you saying it would be better to transform this data using SSIS and not to use a staging table and stored procedure?
    I would prefer the staging + stored procedure approach here, as that involves set-based logic and is faster performance-wise; see the sketch below.
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
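    A minimal sketch of that set-based stored procedure (assuming a staging table dbo.Staging and an identity column Site.SiteID; names are taken from the posts above or are placeholders):
    -- 1) Site first, to satisfy the foreign keys: add only missing sites
    INSERT INTO dbo.Site (Name)
    SELECT DISTINCT s.Name
    FROM   dbo.Staging AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.Site AS t WHERE t.Name = s.Name);
    -- 2) Gas subset, resolving SiteID through the Name field
    INSERT INTO dbo.Gas ([Date], [Time], GasField1, GasField2, SiteID)
    SELECT s.[Date], s.[Time], s.GasField1, s.GasField2, t.SiteID
    FROM   dbo.Staging AS s
    JOIN   dbo.Site    AS t ON t.Name = s.Name;
    -- 3) GenSet follows the same pattern with its own column subset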

  • What is the quickest way to find reports that run a particular DB?

    Hi,
    I have numerous reports running on multiple databases. What is the quickest way to find all reports running on a particular database?
    I am new to this and do not have much idea.
    Any information is appreciated!
    thanks
    shankar

    Hello Shankar,
    If you are talking about Crystal Reports,
    I recommend posting this query to the Crystal Reports Design forum.
    This forum is dedicated to topics related to the creation and design of Crystal Report documents. This includes topics such as database connectivity, parameters and parameter prompting, report formulas, record selection formulas, charting, sorting, grouping, totaling, printing, and exporting but also installation and registering.
    It is monitored by qualified technicians and you will get a faster response there.
    Also, all Crystal Reports Design queries remain in one place and thus can be easily searched in one place.
    Best regards,
    Falk
