ETL-Process: Date function

Hey Folks!
I have the following table in my PSA
Customer | Sales | Date       | Week | Month
4711     | 100   | 2006-01-09 |      |
0815     | 200   | 2006-02-12 |      |
Now I'm looking for functions that can fill the columns Week and Month to get the following result:
Customer | Sales | Date       | Week | Month
4711     | 100   | 2006-01-09 | 2    | 1
0815     | 200   | 2006-02-12 | 6    | 2
Does somebody know how I can solve this transformation problem?
Maybe there are functions in ABAP like WeekOfYear(date) or MonthOfYear(date)?
Thanks for your help!!!
so long
Stephan

Hi Stephan,
if you post the data to a cube and use 0CALMONTH and 0CALWEEK for the month and the week, you will get an automatic conversion if you just assign the date to those fields in the update rules.
Otherwise you can derive them in an update routine yourself. For the week there is a standard function module, DATE_GET_WEEK, which is exactly the WeekOfYear(date) you asked for:
* month: YYYYMMDD -> YYYYMM
0calmonth = <your date field>(6).
* week: DATE_GET_WEEK returns the calendar week as YYYYWW
CALL FUNCTION 'DATE_GET_WEEK'
  EXPORTING
    date = <your date field>
  IMPORTING
    week = 0calweek.
regards
Siggi

Similar Messages

  • Data/database availability between ETL process

    Hi
    I am not sure whether this is the right forum for this question, but I am requesting help anyway.
    We have a DSS database of about 1 TB. The batch process runs from midnight until 6:00 am. The presentation/reporting schema is about 400 GB. Through the nightly batch job, most of the tables in the presentation layer get truncated or modified. Due to the nature of the business, this presentation layer needs to be available 24*7, but because the ETL process modifies the database, the system is unavailable from midnight until 6 am.
    The business requirement is: before the ETL process starts, take a backup of the presentation layer, point the application at this backed-up area, and then let the ETL process proceed. Once the ETL process has finished, point the application back at the freshly loaded area.
    Given the size of the database and the schema, does anyone know how to back up and restore the presentation layer within the same database in an efficient way?

    There are a couple of possibilities here. You certainly need two sets of tables, one for loading and the other for presentation. You could use synonyms to control which one is the active reporting set and which is the ETL set, and switch them over at a particular time (sketched below). Another approach would be to use partition exchange to swap data and index segments between the two table sets.
    I would go for the former method myself.
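    A minimal sketch of the synonym switch, with illustrative names (sales_a/sales_b as the two table sets, sales_rpt as the synonym the application queries; none of these names are from the original post):
    -- The application only ever queries the synonym, never the tables.
    CREATE OR REPLACE SYNONYM sales_rpt FOR sales_a;
    -- Nightly ETL truncates and reloads the inactive copy ...
    TRUNCATE TABLE sales_b;
    INSERT /*+ APPEND */ INTO sales_b SELECT * FROM staging_sales;
    -- ... and the switch-over is a single, near-instant DDL statement:
    CREATE OR REPLACE SYNONYM sales_rpt FOR sales_b;
    New queries then resolve against the freshly loaded set; the next night the roles swap.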

  • Unique Index Error while running the ETL process

    Hi,
    I have installed Oracle BI Applications 7.9.4 and Informatica PowerCenter 7.1.4. I have done all the configuration steps as specified in the Oracle BI Applications Installation and Configuration Guide. While running the ETL process from DAC for Execution Plan 'Human Resources Oracle 11.5.10', some tasks go to status Failed.
    When I checked the log files for these tasks, I found the following error:
    ANOMALY INFO::: Error while executing : CREATE INDEX:W_PAYROLL_F_ASSG_TMP:W_PRL_F_ASG_TMP_U1
    MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
    W_PRL_F_ASG_TMP_U1
    ON
    W_PAYROLL_F_ASSG_TMP
    INTEGRATION_ID ASC
    ,DATASOURCE_NUM_ID ASC
    ,EFFECTIVE_FROM_DT ASC
    NOLOGGING PARALLEL
    with error java.sql.SQLException: ORA-12801: error signaled in parallel query server P000
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    EXCEPTION CLASS::: java.lang.Exception
    I found some duplicate rows in the table W_PAYROLL_F_ASSG_TMP for the combination of columns on which it is trying to create the index. Can anyone give me information on the following?
    1. Why is it trying to create a unique index on a combination of columns that may not be unique?
    2. Is it a problem with the data in the source database (i.e., duplicate rows in the source system)?
    How do we fix this error? Do we need to delete the duplicate rows from the warehouse table manually and re-run the ETL process, or is there another way to fix the problem?

    This query will identify the duplicate in the Warehouse table preventing the Index from being built:
    select count(*), integration_id, src_eff_from_dt from w_employee_ds group by integration_id, src_eff_from_dt having count(*)>1;
    To get the ETL to finish, issue this delete against the W_EMPLOYEE_DS table:
    delete from w_employee_ds where integration_id = '2' and src_eff_from_dt ='04-JAN-91';
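    If there is more than one duplicate, a set-based variant of that cleanup (a sketch; it keeps an arbitrary single row per duplicate group, so verify that is acceptable before running it):
    delete from w_employee_ds
    where  rowid not in (select min(rowid)
                         from   w_employee_ds
                         group  by integration_id, src_eff_from_dt);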
    To fix it so this does not happen again on another load, you need to find the record in the Vision DB; it is in the PER_ALL_PEOPLE_F table. I have a Vision source and this worked:
    select rowid, person_id , LAST_NAME FROM PER_ALL_PEOPLE_F
    where EFFECTIVE_START_DATE = '04-JAN-91';
    ROWID              PERSON_ID LAST_NAME
    AAAWXJAAMAAAwl/AAL      6272 Kang
    AAAWXJAAMAAAwmAAAI      6272 Kang
    AAAWXJAAMAAAwmAAA4      6307 Lee
    delete from PER_ALL_PEOPLE_F
    where ROWID = 'AAAWXJAAMAAAwl/AAL';

  • SQL Server Agent jobs throw random errors while the ETL process works fine.

    Hi,
    I have this problem with SQL agent jobs.
    We went from SQL 2008 to SQL 2012 and migrated SSIS without problems. The ETL process runs fine and the OLAP cubes are processed.
    I have a job which calls the master execution dtsx for a particular customer. When the ETL load and OLAP processing are done, it should go on to the next customer. The problem I have is that the agent logs errors for random customers. I tried running only two clients in one job and this works; then I add a third client and it fails (log-wise) for a customer which ran successfully before, when there were only two customers.
    Despite the error message the ETL did run: there were no duplicate keys and OLAP was processed.
    I'm very close to pulling out all my hair, because some combinations of two customers work, yet placing those same two customers with a third one fails again (again, the cubes are processed and the data is intact, yet I keep getting these annoying errors in the log).
    Perhaps someone could help me further. 
    -Miracles are easy, the impossible takes a bit longer-

    Just double-click on the Agent job, then click on the Steps property page (on your left); you should see a list of steps, each with an "On Failure" action, which you should examine.
    Arthur My Blog

  • Bi Content in 7.0 - Did SAP Convert all content to use new ETL process?

    Hello.
    I am working with BI 7.0 for the first time. I have 8 years of BW 3.x experience. We are starting a new BI project and we are in the process of activating Business Content. We have BI_CONT level 7 (the latest as of today, I believe). It appears that SAP has not converted its Business Content over to the new ETL process (transformations, DTPs, etc.). For example, I am looking at basic areas such as AP, AR, SD, MM, GL, etc., and the BI content is still using the 3.x ETL (transfer rules, update rules). Is something not installed right on my system, or has SAP not converted its content to the new ETL yet?
    Thanks in advance for your help.

    Jose,
    Some new content is released using the new DTP. Most content remains in its delivered 3.x format. If you want to use DTPs for this content, you have to create the DTPs manually after installing the 3.x objects. If you right-click on an InfoCube, you will see some new options in the context menu, including "Create Transformation..." and "Create Data Transfer Process...". The context menu for a DataSource now contains a "Migrate" option that allows you to migrate from 3.x to the new DTP. Also, other objects such as transfer rules and update rules have the context menu option "Create Transformation".
    Hope this helps.

  • Queries with date functions using PreparedStatement for multiple DB

    I am developing an application that uses DB-independent queries. I am using PreparedStatement to process the queries, and I need to use date functions in the query selection criteria, e.g.:
    selecting the list of employees who joined in the last 15 days
    selecting the list of employees who joined between two dates
    Here the Date Joined field is a timestamp value, and no DB-specific function can be used to extract the date part.
    If I use setMonth, setYear, etc. to set the parameters in the PreparedStatement, the query becomes complex in the above cases. Can anyone throw some light on how to do this with PreparedStatement, or on a better alternative?
    Thanks a lot

    Hi,
    I did not mean it that way. I presume there is a timestamp (maybe a date) column in the table. Then, based on your requirement (say, 15 days ago), pass the value as a date (or timestamp) parameter in the query:
    String qry = "select * from myTable where join_date <= ?";
    stmt.setDate(1, myDate); // manipulate myDate here to suit your DB's timestamp or date type
    You will have compatibility issues between java.util.Date and java.sql.Date, so use the Calendar class. Feel free to mail me at [email protected] if you need further clarifications.
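    For reference, a minimal sketch of the Calendar-based approach for the "last 15 days" case (illustrative names; it assumes an open JDBC connection conn and the myTable/join_date example above):
    // The date arithmetic happens in Java, so the SQL stays DB-independent.
    java.util.Calendar cal = java.util.Calendar.getInstance();
    cal.add(java.util.Calendar.DAY_OF_MONTH, -15);            // 15 days back
    java.sql.Date cutoff = new java.sql.Date(cal.getTimeInMillis());
    java.sql.PreparedStatement stmt = conn.prepareStatement(
        "select * from myTable where join_date >= ?");        // joined in the last 15 days
    stmt.setDate(1, cutoff);
    java.sql.ResultSet rs = stmt.executeQuery();
    The between-two-dates case is the same idea with "join_date between ? and ?" and two setDate calls.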
    Cheers,
    Sekar

  • FIM 2010 R2 SP1 Reporting ETL Process for SCSM 2012 R2?

    Hi,
    First question: is FIM 2010 R2 SP1 Reporting supported on System Center 2012 R2, or only on System Center 2012? I have followed the MS FIM Reporting deployment guide and everything seems to work except the ETL process (2nd question below).
    Second question: if it is supported, then how do we get the ETL process defined here (http://technet.microsoft.com/en-us/library/jj133844%28v=ws.10%29.aspx) to work with these versions? The PowerShell script provided doesn't work on SC 2012 R2.
    Third question: how do we force the whole process so we can view data in the FIM reports? At present there is no data in any of the reports, even after I manually ran these SCDW jobs: Extract_dw_SCSMServer, transform.common, load.common.
    Thank you,
    DW

    Although it could work, if it is not officially announced as supported then it is not "officially" supported, so you're deploying it at your own risk and MS won't help you if any problem occurs. Please be aware of that.
    Keep trying

  • ETL Extraction data error

    Hi guys,
    I'm using BOE 3.1. I have a problem during data extraction from R3 to a *.dat file. The error is: File skb\STG1_FT_DAY_PP_PRODUCTION.dat cannot be opened; the job is cancelled after the exception. For your info, the folder for all *.dat files is on the SAP DB server (an IBM server); from the SAP server we FTP the *.dat files to the BO server (an HP server).
    The problem sometimes occurs with a different file. I've already posted this issue to OSS and followed their suggestion, but the problem persists. Is there any setup needed on the SAP DB server? One more thing: I can successfully extract the data on the QAS server.
    I hope somebody can guide me and help me solve this issue.
    Thank You.
    Regards
    -akma-

    Hi,
    just to clarify: you are not storing data in the BusinessObjects Enterprise system itself.
    1. ECC -> SAP BI (back-end) for storing ECC and other legacy data -> BO reporting tools like Crystal Reports, WebI, Xcelsius etc.
    >> definitely possible
    2. ECC -> BO (back-end) for storing ECC and other legacy data, using DI to set up the ETL process -> BO reporting tools like Crystal Reports, WebI, Xcelsius etc.
    >> here you would use something called a Rapid Mart, for example
    3. ECC -> use BI and BO as back-end systems -> BO reporting tools like Crystal Reports, WebI, Xcelsius etc.
    >> yes.
    Both back-ends have a different purpose: BusinessObjects is for reporting and BW is a data warehouse.
    Ingo

  • Delimiting Dates Functionality in Z table

    Hello, I need to create delimiting-dates functionality, which will trigger a proper warning message and handle overlapping time periods.
    Can anyone help me on this?
    I've already enabled and coded the two standard modules in the custom table's screen logic:
    PROCESS BEFORE OUTPUT.
      MODULE liste_initialisieren.
      LOOP AT extract WITH CONTROL
       tctrl_zxxm_pc CURSOR nextline.
        MODULE liste_show_liste.
        MODULE liste_deactivate.
      ENDLOOP.
    PROCESS AFTER INPUT.
      MODULE liste_exit_command AT EXIT-COMMAND.
      MODULE liste_before_loop.
      LOOP AT extract.
        MODULE liste_init_workarea.
        CHAIN.
          FIELD zxxm_pc-zz_tdate .
          FIELD zxxm_pc-z_poc .
          FIELD zxxm_pc-zz_fdate .
          FIELD zxxm_pc-zz_pc .
          MODULE set_update_flag ON CHAIN-REQUEST.
        ENDCHAIN.
        "INSERTED AJMAL.D 13/03/2008
        CHAIN.
          FIELD zxxm_pc-zz_fdate .
          FIELD zxxm_pc-zz_tdate .
          MODULE temp_delimitation ON CHAIN-REQUEST.
        ENDCHAIN.
        "END
        FIELD vim_marked MODULE liste_mark_checkbox.
        CHAIN.
          FIELD zxxm_pc-zz_tdate .
          FIELD zxxm_pc-z_poc .
          FIELD zxxm_pc-zz_fdate .
          MODULE liste_update_liste.
        ENDCHAIN.
      ENDLOOP.
      MODULE liste_after_loop.
    But it's giving me a dump. Is there any other coding or configuration needed?

    Hello,
    Please do not write code in the PBO/PAI of a TMG.
    You will lose all the code once the TMG is regenerated.
    It's safer to handle a TMG through events, since that code lives in a separate include.
    Once the TMG is regenerated, all you have to do is reattach the subroutine.
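    For what it's worth, a minimal sketch of the overlap check you would hang off a TMG event (event 01, "Before saving the data in the database", is the usual place). The plumbing that fills lt_periods from the maintained entries is omitted and the names are illustrative; only the interval logic is the point:
    TYPES: BEGIN OF ty_period,
             zz_fdate TYPE d,   " valid-from date
             zz_tdate TYPE d,   " valid-to date
           END OF ty_period.
    DATA: lt_periods TYPE STANDARD TABLE OF ty_period,
          ls_a       TYPE ty_period,
          ls_b       TYPE ty_period,
          lv_idx     TYPE sy-tabix.
    " Two periods overlap when each starts no later than the other ends:
    "   a-from <= b-to AND b-from <= a-to.
    LOOP AT lt_periods INTO ls_a.
      lv_idx = sy-tabix + 1.
      LOOP AT lt_periods INTO ls_b FROM lv_idx.
        IF ls_a-zz_fdate <= ls_b-zz_tdate AND
           ls_b-zz_fdate <= ls_a-zz_tdate.
          MESSAGE 'Overlapping validity periods' TYPE 'W'.
        ENDIF.
      ENDLOOP.
    ENDLOOP.
    On event 01 you can then warn the user or refuse the save when the check fails.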
    Regards,
    Remi

  • Custom ETL processes creation in OWB

    Hi, we are working on an Oracle Utilities BI implementation project.
    Some of the KPIs identified need custom development - the creation of complete ETL processes:
    - Extractor in CC&B (in COBOL)
    - Workflows in OWB
    - Configuration in BI
    We were able to create 4 custom ETL processes (2 fact and 2 related dimensions) including the COBOL extract program, the OWB entities (Files, Staging tables, Tables, Mappings and Workflows) and the BI tables.
    We already have the data in the BI database and in some cases we are able to configure zones to show the data in BI.
    We are facing some problems when we try to use a custom Fact and a custom Dimension together, for instance:
    Case 1. When we create a zone to show: Number of quotes - Measure 1: tf=CM_CF_QUOTE.FACT_CNT func=SUM dt=END_DATE_KEY - the graph is displayed.
    Case 2. When we create a zone to show: Number of accepted quotes - Measure 1: tf=CM_CF_QUOTE.FACT_CNT func=SUM dt=END_DATE_KEY and Fixed Dimensional Filter 1: tf=CM_CD_QUOTE_TYPE.QUOTE_STATUS_CD oper==(10) - the graph is displayed.
    Case 3. When we create a zone to show: Number of ongoing quotes - Measure 1: tf=CM_CF_QUOTE.FACT_CNT func=SUM dt=END_DATE_KEY and Fixed Dimensional Filter 1: tf=CM_CD_QUOTE_TYPE.QUOTE_STATUS_CD oper==(20) - the graph is displayed.
    Case 4. But when we create a zone to show: Number of quotes sliced by quote status - Measure 1: tf=CM_CF_QUOTE.FACT_CNT func=SUM dt=END_DATE_KEY and Dimensional Filter 1: tf=CM_CD_QUOTE_TYPE.QUOTE_STATUS_CD - no graph is displayed.
    Case 5. A different problem occurs with a single fact. We try to show the number of processes of a given type, where the type is a UDDGEN. When we load the zone - Measure 1: tf=CM_F_CF_CSW.FACT_CNT func=SUM dt=ACTV_DATE_KEY and Fixed Dimensional Filter 1: tf=CM_F_CF_CSW.UDDGEN1 oper==(ENTRADA) - the following error appears: No join defined between fact table CM_F_CF_CSW and dimension table CM_F_CF_CSW. The dimension table entered in the zone parameter could be invalid for this fact table.
    Has anyone had the same problem?

    Hi user11256032, I just stumbled upon this by accident. The reason no one has answered yet is that it is in the wrong forum (I can understand that you thought it belonged here). Please post the question to the Oracle Utilities forum: go to Forum Home, then choose Industries, then Utilities. You may have to select "More ..." under Industries.
    Actually, I suspect there was an SR created for these, so your question may have been answered already.
    If you don't mind me asking, which customer is this for?
    Jeremy

  • Using user hold data function in FB65

    Dear Gurus,
    In FB65 I enter the amount, G/L account, amount in document currency, the calculate tax tick box and the tax type. Then I select the Hold Data function. When I exit FB65 and re-enter it, all the saved data is available except for the calculate tax tick box and the tax type.
    Is there any way to get these held as well?
    Regards
    Shakeer

    Hi,
    This is standard functionality: when you process the final accounting document from a held document, the user has to select the tax and process it again.
    To clarify the Hold Document function:
    "With Hold Document, data which has been entered can be saved temporarily in order to continue the entries at a later time. Documents held by the system do not have to be complete. No account balances are updated and the data of the document is not available for evaluation. No document number is assigned. The person making the entries is asked to name the document after selecting the Hold Document function. The document can be found under this name at a later time."
    Regards
    Viswa

  • ETL processing Performance and best practices

    I have been tasked with enhancing an existing ETL process. The process dumps data from a flat file into staging tables and then processes records from those initial tables into the permanent tables. The first step, extracting data from the flat file into staging tables, is done by BizTalk; no problems here. The second part, processing records from the staging tables and updating/inserting into the permanent tables, is done in .NET. I find this process inefficient and prone to deadlocks, because the code loads the data from the initial tables (using stored procs), loops through each record in .NET, and makes several subsequent calls to stored procedures to process the data and then update the record. I see a variety of problems here; the process is very chatty with the database, which is a big red flag. I need some opinions from ETL experts so that I can convince my co-workers that this is not the best solution.
    Anonymous

    I'm not going to call myself an ETL expert, but you are right on the money that this is not an efficient way to work with the data. Indeed very chatty. Once you have the data in SQL Server, keep it there. (Well, if you are interacting with other data sources, it's a different game.)
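    To make that concrete: instead of looping over rows in .NET, the staging-to-permanent step can usually be a single set-based statement. A sketch with hypothetical table and column names (not from the original post):
    -- One set-based upsert replaces the per-row stored-procedure calls.
    MERGE dbo.PermanentOrders AS tgt
    USING dbo.StagingOrders AS src
        ON tgt.OrderKey = src.OrderKey
    WHEN MATCHED THEN
        UPDATE SET tgt.Amount    = src.Amount,
                   tgt.UpdatedAt = SYSDATETIME()
    WHEN NOT MATCHED THEN
        INSERT (OrderKey, Amount, UpdatedAt)
        VALUES (src.OrderKey, src.Amount, SYSDATETIME());
    One round trip, one transaction, and the deadlock-prone read-modify-write loop disappears.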
    Erland Sommarskog, SQL Server MVP, [email protected]

  • ETL process steps in OWB

    Hi friends,
    + Can I know the ETL steps involved in OWB? If so, can you provide them?
    + Also, is there any diagram for OWB covering the OWB ETL process from source to target (including BI components), like the ones available for Informatica - showing how it takes data from the source, how it performs the ETL process, how it loads the data warehouse, and how the BI components use it?
    Thanks
    Regards,
    Saro.
    Your Answers will be marked.

    Hi Saro,
    OWB does not enforce specific steps on you. You define your data warehouse architecture according to your needs. OWB fully supports you to
    + extract the data from the source systems or files
    + load into the database (staging)
    + process the data within the database, e.g. loading it from the staging area into the core (normalized or star schema)
    + load it into OLAP cubes
    + design and monitor the overall process using process flows
    Regards,
    Carsten.

  • ETL process

    Hi all,
    I am new to OWB 10g. I need to know the exact ETL process to follow in OWB, and how to do incremental loading. Please tell me the steps to follow.

    If you are referring to incremental in terms of time/date increments, you can accomplish this in a mapping as follows:
    1. SOURCE TABLE --> FILTER (filters out data so only the last 5 days, or last 1 week, or last 2 weeks of data will be accepted).
    2. FILTER --> TARGET TABLE 1(This can be a simple load type of truncate/insert)
    3. TARGET TABLE 1 --> Historical TARGET TABLE (This can be a load type of 'upsert' known as a merge or insert/update.)
    What happens here is that only the last few days or weeks (you specify it) of data are extracted from the source table. The mapping then truncates the target table and loads only the data that passed the filter; this will be a small subset of the original large source data. Finally, you merge this smaller subset into the historical table based on certain matching criteria. The historical table is never truncated, so it holds all history, while the initial TARGET TABLE 1 only holds the small subset and can be queried quickly when you need just the most recent extraction.
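    For step 3, the generated statement is essentially an upsert; a sketch in Oracle SQL of what the match/merge load boils down to, with hypothetical table and column names:
    -- Merge the freshly extracted subset into the ever-growing history table.
    MERGE INTO target_hist h
    USING target_table_1 t
       ON (h.business_key = t.business_key)
    WHEN MATCHED THEN
        UPDATE SET h.sales     = t.sales,
                   h.load_date = t.load_date
    WHEN NOT MATCHED THEN
        INSERT (business_key, sales, load_date)
        VALUES (t.business_key, t.sales, t.load_date);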
    -Greg

  • ETL process without using RSA1

    Hi All,
    How can I run the ETL process without using RSA1?
    Thanks...
    Tom Pemalang

    Hi Eddy,
    Is the file on the presentation server? If so, get the file loaded to the application server and have a process chain created for the data load; then you need to give authorization for RSPC and get the data loaded via a process chain.
    Once the file is on the application server, you even have the option of setting up a background job.
    Hope this helps.
    Best regards,
    Kazmi
