Missing linefeed in extract

Hi,
we have a problem extracting a base64-encoded PDF document from an XMLType. The PDF is embedded in a node of an XML document. When we extract the value of that node, the carriage returns and line feeds are missing. The statement looks like this:
select extract( iipax_datapart ,'/msg:Anropsmeddelande/msg:Nyttolast/stgans:Stamningsansokan/stgans:Dokument/stgans:Dokumentinnehall/text()','xmlns:msg="se/rif/si/meddelande/v1" xmlns:stgans="se/rif/am/stamningsansokan/v1"').getclobval()
from ink_iipax
where status = 'NEW';
We also tried getblobval(NLS_CHARSET_ID('AL32UTF8')) to get a BLOB.
If we load the same file from disk and encode it to a CLOB/BLOB in a PL/SQL test script, the sizes differ by the same amount as the number of lines in the file. The test file is correct, so my conclusion is that the extract from the XMLType must be removing the CR/LF pairs.
The database is 11gR2.
Any idea? Kjell
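A quick sanity check on the symptom itself: base64 is whitespace-insensitive, so a decoder yields the same bytes whether or not the CR/LF pairs survive extraction. A small Python illustration (the sample payload is hypothetical):

```python
import base64

# hypothetical sample; any base64 payload behaves the same way
with_breaks    = b"TWFuIGlzIGRpc3Rpbmd1\r\naXNoZWQ=\r\n"
without_breaks = b"TWFuIGlzIGRpc3Rpbmd1aXNoZWQ="

# the encoded strings differ in size by exactly the CR/LF bytes...
size_delta = len(with_breaks) - len(without_breaks)

# ...but decode to identical bytes: b64decode discards non-alphabet characters
assert base64.b64decode(with_breaks) == base64.b64decode(without_breaks)
print(size_delta)  # 4: two CR/LF pairs
```

So a size difference equal to the number of lines (times two for CR+LF) is expected, and the decoded PDF bytes should still match.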

Hi,
I don't see any problem with this (simple) example:
SQL> select * from v$version
  2  ;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE     11.2.0.1.0     Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL>
SQL> WITH sample_data AS (
  2   select xmltype('<ns1:root xmlns:ns1="test.ns1" xmlns:ns2="test.ns2">
  3  <ns2:doc>TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBieSB0aGlz
  4  IHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcyBhIGx1c3Qgb2Yg
  5  dGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmFuY2Ugb2YgZGVsaWdodCBpbiB0aGUgY29udGlu
  6  dWVkIGFuZCBpbmRlZmF0aWdhYmxlIGdlbmVyYXRpb24gb2Yga25vd2xlZGdlLCBleGNlZWRzIHRo
  7  ZSBzaG9ydCB2ZWhlbWVuY2Ugb2YgYW55IGNhcm5hbCBwbGVhc3VyZS4=
  8  </ns2:doc>
  9  </ns1:root>') doc
10  from dual
11  )
12  SELECT extract( t.doc,
13                  '/ns1:root/ns2:doc/text()',
14                  'xmlns:ns1="test.ns1", xmlns:ns2="test.ns2"'
15         ).getClobVal()
16         as base64str
17  FROM sample_data t
18  ;
BASE64STR
TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBieSB0aGlz
IHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcyBhIGx1c3Qgb2Yg
dGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmFuY2Ugb2YgZGVsaWdodCBpbiB0aGUgY29udGlu
dWVkIGFuZCBpbmRlZmF0aWdhYmxlIGdlbmVyYXRpb24gb2Yga25vd2xlZGdlLCBleGNlZWRzIHRo
ZSBzaG9ydCB2ZWhlbWVuY2Ugb2YgYW55IGNhcm5hbCBwbGVhc3VyZS4=
In 11g, the recommended method is now to use XMLCast/XMLQuery for single-node extraction (or XMLTable to extract a sequence).
For example :
WITH sample_data AS (
select xmltype('<ns1:root xmlns:ns1="test.ns1" xmlns:ns2="test.ns2">
<ns2:doc>TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBieSB0aGlz
IHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcyBhIGx1c3Qgb2Yg
dGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmFuY2Ugb2YgZGVsaWdodCBpbiB0aGUgY29udGlu
dWVkIGFuZCBpbmRlZmF0aWdhYmxlIGdlbmVyYXRpb24gb2Yga25vd2xlZGdlLCBleGNlZWRzIHRo
ZSBzaG9ydCB2ZWhlbWVuY2Ugb2YgYW55IGNhcm5hbCBwbGVhc3VyZS4=
</ns2:doc>
</ns1:root>') doc
from dual
)
SELECT xmlcast(
         xmlquery('declare namespace ns1="test.ns1"; declare namespace ns2="test.ns2"; (::)
                   /ns1:root/ns2:doc'
                   passing t.doc
                   returning content
         ) as clob
       ) as base64str
FROM sample_data t
;
Could you try this method with your data?
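For comparison outside the database: a namespace-aware XPath/DOM lookup of the text node keeps its embedded line breaks as well. A minimal Python sketch with the standard library, reusing the sample's test.ns1/test.ns2 namespace URIs:

```python
import xml.etree.ElementTree as ET

# same shape as the sample document above, with a shortened payload
doc = """<ns1:root xmlns:ns1="test.ns1" xmlns:ns2="test.ns2">
<ns2:doc>TWFuIGlzIGRpc3Rpbmd1
aXNoZWQ=
</ns2:doc>
</ns1:root>"""

ns = {"ns1": "test.ns1", "ns2": "test.ns2"}
text = ET.fromstring(doc).find("ns2:doc", ns).text
# the text node retains its internal newlines
print(repr(text))
```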

Similar Messages

  • Missing Data after extraction

    Hi,
    I have a generic DataSource in R/3, delivering some 20-plus fields. When I check in RSA3, I get all data, for full extraction or even selective extraction.
    I have replicated and activated the DataSource in BW. Now when I extract the data in BW for a single document, I get all the information in the PSA, but with a full extraction without any filters, some of the data is missing.
    Nothing seems wrong in R/3 and nothing wrong in BW; what else am I missing?

    Hi,
      Check the below:
    1) Remove selections in the InfoPackage.
    2) Check the filters in the DTP.
    3) Compare the number of records in PSA and RSA3 in ECC.
    4) Compare the number of records in PSA and the data target.
    5) If you are missing records from PSA to the data target, check the filters in the DTP and the routines.
    Regards
    Prasad

  • Missing orders in extraction from 02

    Dear all,
    A few weeks ago we re-initialized our setup tables for the 02 application (block documents).
    Today we noticed we are missing orders which were created shortly after the initialization.
    I tried to check the extraction with RSA3, but this also does not return any data. Does anyone have an idea what happened and how we can correct it?
    thanks,
    Tom

    Hi Tom,
    it sounds like your missing documents were created during your init procedure and not after... but, anyway, what's done is done!
    Your records were not collected by a delta and were not included in your setup job... so you have to check whether these records are really BW-relevant (from a technical point of view!): run your setup job again and then verify (RSA3) that your records are loaded...
    Hope it helps !
    Bye,
    Roberto

  • Missing data while extraction

    Hi,
    I extracted the GL data from R/3 to BW and the extraction went smoothly, but one G/L account didn't load into BW (not even into the PSA).
    When I checked RSA3 for that DataSource, that particular G/L account existed. After that I loaded only that G/L account through a newly created InfoPackage, and it got loaded.
    How is this possible, given that deltas are not supposed to miss any data?
    Also, what is the relationship between RSA3 and INIT/delta loads?
    RSA3 contains all the data (INIT and delta) for reconciliation purposes.
    Please give some light on this.
    Thanks

    Hi,
    If it is an LO DataSource, RSA3 takes the data from the setup tables; for others it takes data from the base tables in ECC. See the below articles to get more information on RSA3, RSA7, and SMQ1.
    Checking the Data using Extractor Checker (RSA3) in ECC Delta Repeat Delta etc...
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/80f4c455-1dc2-2c10-f187-d264838f21b5&overridelayout=true 
    Data Flow from LBWQ/SMQ1 to RSA7 in ECC and Delta Extraction in BI
    http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/business-intelligence/d-f/data%20flow%20from%20lbwq_smq1%20to%20rsa7%20in%20ecc%20and%20delta%20extraction%20in%20bi.pdf
    Thanks
    Reddy

  • In CRM queue missing for delta extraction

    Hello everybody,
    I have the following problem with extraction of 0CRM_SRV_PROCESS_I delta entries from the CRM system.
    I have initialized the extractor from BW and have the following status in CRM:
    in BWA7 I see the extractor with delta active. The BDoc name is BUS_TRANSACTION_MESSAGE.
    The field 'Queue exists' is not ticked; it is empty.
    Now I change one transaction, i.e. I change the requested end date.
    Now in transaction SMW01 (BDoc messages) I see my transaction in status: Confirmed (fully processed) - green.
    The queue name is: CSA_ORDER_8000000000.
    Now I go to SMQ1 and ..... cannot find the queue.
    As a result, the delta is not populated to BW.
    In addition to this: in SMQS I see destination CSA*. Could it somehow be linked to my problem?
    So, my question is: how should I register the queue to make it work?
    p.

    Hi,
    How do you load the data in the development system?
    - You set up the update mode in the InfoPackage; there you must choose "Initialization".
    - Did you fill the setup table in the production system without loading the data into BW? Then that step was for nothing. Filling the setup table and doing the INIT in BW must happen together, during which no changes may be made in your R/3 SD.
    Sven

  • POS DM: missing customizing for Extracting Purchase Conditions

    Hi there,
    I am looking for a hint from a retail, especially POS DM, specialist.
    What I am doing is...
    1. I am trying to supply DSO Objects...
       0RT_DS07 - Purchase Conditions at Purchasing Organization Level
       0RT_DS06 - Purchase Conditions at Site Level
    ...with purchasing conditions, via standard extractor programms...
       0RT_PURCHPRICE_PUORG_ATTR
       0RT_PURCHPRICE_PLANT_ATTR
    2. I open the InfoPackage (e.g. INIT_0RT_PURCHPRICE_PLANT_ATTR_Purchase Prices at Store Leve), enter the selection criterion condition type = PB00, and start the data extraction
    3. the following error message will be raised, saying:
    Diagnosis
         Condition type PB00 is not maintained in DM Customizing.
    System Response
         If you wish to use this condition type in purchase price analysis, you
         must process the DM Customizing.
    Procedure
         Use an allowed condition type or process this condition type in DM
         Customizing. To do this, you process view V_RDMT_P_COND_TY in
         transaction SM30 .
    Procedure for System Administration
    My problem now is that the view mentioned for maintaining the condition type in DM Customizing, V_RDMT_P_COND_TY, does not exist. Neither can I find any similar-sounding table or SPRO option to maintain the condition type.
    Does anybody have an idea how I can extract condition type PB00 for POS Data Management?
    Thanks a lot for your help
    Michael

    Look in LBWE under the "Purchasing" node. Then expand the 2LIS_02_ITM extractor until you get to the "events" node. Under that node you will see which events will trigger a record(s) sent to BI. To further your understanding look into 0STORNO and ROCANCEL data elements as well as 0RECORDMODE. Those InfoObjects and data elements will provide control flags for your updates as records come into BI from PUR. Within the help files for MM-PUR you will find a document explaining the usage of 0RECORDMODE and 0STORNO for record management.
    Hopefully that will get you started

  • I am trying to extract metadata from essbase into a flat file using ODI.

    I have 2 questions in this regard :
    Some of the account members' storage property is missing in the extract. The reason I suspect is that the parent/child sorting did not happen while extracting. How do I do this? I do not see this option when I select the IKM Hyperion Essbase Metadata to SQL....
    I have many account members that have more than one UDA, up to 5 UDAs, but in my extract only one UDA appears. How do I incorporate all the UDAs in a single column, separated by commas? The extract file itself is semicolon-separated, mainly for this reason and because some alias descriptions contain commas in the source system.
    ODI is extracting metadata in descending order. How do I change it to sort records in parent/child order?
    Thanks,
    Lingaraj
    Edited by: user649227 on 2009/06/10 6:50 AM

    Hi,
    There was an issue with early versions of the KM around the storage property, this has since been resolved. I recommend upgrading to the latest release of ODI or have a look through the patches.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best way to extract this info?

    Hello:
    I have information stored in xmltype as follows:
    <INFO>
    <ASG>
    <LMT id="A1">
    <LMT_VAL>7</LMT_VAL>
    </LMT>
    <LMT id="B1">
    <LMT_VAL>6</LMT_VAL>
    </LMT>
    </ASG>
    </INFO>
    The above info may be dynamic. For example, for some types there may be additional LMT elements with id values of C1 and D1. So what is the best way to extract all of them (A1, B1, C1, D1), even though in some cases only A1 and B1 may be present?
    Do I have to extract like:
    select ...
    where
    t1.xml.col.extract('/INFO/ASG/LMT/LMT_VAL/@id').getStringVal() = 'A1'
    Do I have to add a condition like this (where ... = 'A1') for each id?
    Any help appreciated.

    Hi,
    one left round bracket is missing:
    TABLE(xmlsequence(extract( ( <--- this bracket around the subquery (SELECT xml_column FROM your_table) was missing .... And if you have more than one row in the table, it's not going to work!
    SQL> CREATE TABLE test_xml(
      2   id NUMBER
      3  ,xml_col XMLType)
      4  /
    Table created.
    SQL> INSERT INTO test_xml VALUES(1,xmltype('<INFO>
      2  <ASG>
      3  <LMT id="A1">
      4  <LMT_VAL>7</LMT_VAL>
      5  </LMT>
      6  <LMT id="B1">
      7  <LMT_VAL>6</LMT_VAL>
      8  </LMT>
      9  </ASG>
    10  </INFO>'))
    11  /
    1 row created.
    SQL> INSERT INTO test_xml VALUES(2,xmltype('<INFO>
      2  <ASG>
      3  <LMT id="A1">
      4  <LMT_VAL>7</LMT_VAL>
      5  </LMT>
      6  <LMT id="B1">
      7  <LMT_VAL>6</LMT_VAL>
      8  </LMT>
      9  </ASG>
    10  </INFO>'))
    11  /
    1 row created.
    SQL> SELECT extractvalue(column_value, '/LMT/@id') new_id,
      2         extractvalue(column_value,'/LMT/LMT_VAL') new_value
      3  FROM TABLE(xmlsequence(extract ((SELECT xml_col FROM test_xml), '/INFO/ASG/LMT')));
    FROM TABLE(xmlsequence(extract ((SELECT xml_col FROM test_xml), '/INFO/ASG/LMT')))
    ERROR at line 3:
    ORA-01427: single-row subquery returns more than one row
    SQL> spool off;
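The node-per-row idea the answer is driving at (iterate every LMT element, whichever ids happen to be present) can also be sketched outside the database; a Python illustration over the question's sample document:

```python
import xml.etree.ElementTree as ET

doc = """<INFO>
<ASG>
<LMT id="A1"><LMT_VAL>7</LMT_VAL></LMT>
<LMT id="B1"><LMT_VAL>6</LMT_VAL></LMT>
</ASG>
</INFO>"""

# collect (id, value) for however many LMT nodes exist: A1/B1 here,
# and C1/D1 too if the document carries them
pairs = [(lmt.get("id"), lmt.findtext("LMT_VAL"))
         for lmt in ET.fromstring(doc).iter("LMT")]
print(pairs)  # [('A1', '7'), ('B1', '6')]
```

No per-id predicate is needed; the loop covers whatever ids are present, which is exactly what XMLSequence/XMLTable gives you row by row in SQL.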

  • Missing materials in 0CO_PC_ACT_02 Material Valuation: Per. Ending Inventor

    Hi,
    I've this doubt using this extractor.
    Do you know why some material/value type combinations are missing during this extraction?
    I need to know the total stock amount. When I check the data that has been extracted, I've seen that some material lines are missing compared with transaction MM60 in ECC.
    Example in ECC:
    Material  Value Type  Amount
    MAT1                            123
    MAT1      C1                 125
    MAT1      C2                 166
    But in the extractor:
    Material  Value Type  Amount
    MAT1      C1                 125
    MAT1      C2                 166
    I only see this.
    Please note that for other materials everything looks normal. Any suggestion?
    Cheers
    JS

    To be clear,
    It will show the PUP price of the last month for that material if the material movements are zero in this month. Hopefully you have some in the previous period.
    Regards,
    Rajesh

  • Extracted Audio Files

    How can I control where PP CC saves extracted audio files that I have edited with Audition?
    Currently they are being stored with temporary files in the PP folder on my main drive. I want them saved in the same location as the original file; it seems to me that this should be the default behavior. This has caused me serious problems on several occasions: thinking I had copied all my media files to my laptop to work offsite, I later discovered that I was missing all the extracted audio files.
    I searched help for an answer to this but found none. I found several others through Google who were confounded by this behavior but no answers or solutions.
    Thanks,
    Rod

    File / Project Settings / Scratch Disk / Captured audio.

  • Records is not coming in Delta for Inventory Cubes.

    Hi..All
    We have been working on BW 3.5 SP11 since 01.07.2005.
    We have been facing a very serious problem with Inventory Management data extraction from R/3 since 24.08.2005.
    Our active DataSources for Inventory Management are as follows:
         2LIS_03_BF
         2LIS_03_BX
         2LIS_03_UM
    We were regularly getting records through delta as per the given schedule after initialization, from 01.07.2005 until 23.08.2005.
    On 24.08.2005, on the delta schedule, we found that some records created in R/3 did not come into BW.
    On 24.08.2005, 20550 records were loaded into the PSA. The same number appears in RSA3 (extractor) as well.
    The last material document loaded was 4900154552, date 23.08.2005, time 16:56:05 in R/3, but there are many other material documents created on the same day that are missing from the extraction.
    On 25.08.2005 the delta schedule got 0 (zero) records. Not a single record shows in RSA3 (extractor).
    On 26.08.2005 the situation is still the same. No records are coming.
    Whereas our average is approx. 20000 records per day.
    The update mode for inventory is unserialized V3 update, and the update program in R/3 is running perfectly fine as per the schedule, with no errors.
    We cannot understand why the extraction of records for Inventory Management suddenly stopped, because all other
    DataSources, like Purchasing, Sales and Distribution, Quality Management, and PP, are working fine on the same dates.
    We checked the data in the extractor checker with update mode "F" (transfer all requested data).
    It still shows 0 (zero) records for DataSources 2LIS_03_BF and 2LIS_03_UM.
    The situation remains the same as of now.
    This is very crucial for our business. All inventory reports are showing data that is 3 days old.
    Please give your advice on this issue, and on how to get the records into the PSA from R/3.
    Regards,
    Navin

    Hi Ravindra,
    Thanks for the input. The problem got solved.
    There was a problem with the qRFC table (Tcode LBWQ);
    the status was 'wait'.
    Thanks for giving feedback.
    Regards,
    Navin

  • Load to psa is in yellow status only in quality n production system

    Hi All,
    We have created a generic FM which will take a date parameter as input and fetch records for that specific month. This FM works perfectly fine in the dev server, but when I run the same in quality and production the load status stays yellow.
    FYI, data records are fetched/extracted correctly and each data package individually is in green status. But under the extraction flow I see the message "Missing message: Selection completed", and at this stage the status is yellow.
    If I check the job in the R/3 system, the last job log entry is
    "Asynchronous transmission of info IDoc 3 in task 0003 (1 parallel tasks)"
    Kindly suggest what can be done to resolve this. We have been stuck on this for quite a long time.
    Thanks & Regards,
    Anup

    hi
    Debug the FM program; if you don't find anything, then try this.
    You may be missing messages in the extraction process.
    Try this:
    turn the load to red, restart it, and check the extraction process in the source system; also check the tRFC status in the source system.
    Missing message: Selection completed
    Error: 'Missing message: Selection completed' - URGENT
    http://wiki.sdn.sap.com/wiki/pages/viewpage.action?pageId=153780896
    Data Loads into BI system
    check this may help you.
    Edited by: Srikanth.T on Dec 22, 2011 9:47 AM

  • 0FI_AR_4 Datasource, Delta

    Hi Experts,
    we are using the 0FI_AR_4 DataSource, which is delta-enabled, but the problem is we can run the delta just once a day.
    Can anyone please let me know how to change this so that I can run the delta more than once a day?
    Any document or a link would be of great help.
    Thanks in advance.
    Ananth

    hi Ananth,
    take a look at Note 991429 - Minute Based extraction enhancement for 0FI_*_4 extractors
    https://websmp203.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=991429&_NLANG=E
    Symptom
    You would like to implement a 'minute based' extraction logic for the data sources 0FI_GL_4, 0FI_AR_4 and 0FI_AP_4.
    Currently the extraction logic only allows for an extraction once per day, without overlap.
    Other terms
    general ledger  0FI_GL_4  0FI_AP_4  0FI_AR_4  extraction  performance
    Reason and Prerequisites
    1. There is a huge volume of data to be extracted on a daily basis from FI to BW, and this requires a lot of time.
    2. You would like to extract the data at more frequent intervals, like 3-4 times a day, without extracting all the data that you have already extracted on that day.
    In situations where there is a huge volume of data to be extracted, a lot of time is taken up when extracting on a daily basis. Minute-based extraction enables the extraction to be split into convenient intervals and run multiple times during a day. By doing so, the amount of data in each extraction is reduced, and hence the extraction can be done more effectively. This should also reduce the risk of extractor failures caused by huge data volumes in the system.
    Solution
    Implement the relevant source code changes and follow the instructions in order to enable minute based extraction logic for the extraction of GL data. The applicable data sources are:
                            0FI_GL_4
                            0FI_AR_4
                            0FI_AP_4
    All changes below have to be implemented first in a standard test system. The new extractor logic must be tested very carefully before it can be used in a production environment. Test cases must include all relevant processes that would be used/carried in the normal course of extraction.
    Manual changes are to be carried out before the source code changes in the correction instructions of this note.
    1. Manual changes
    a) Add the following parameters to the table BWOM_SETTINGS
                             MANDT  OLTPSOURCE    PARAM_NAME          PARAM_VALUE
                             XXX                  BWFINEXT
                             XXX                  BWFINSAF            3600
                  Note: XXX refers to the specific client(like 300) under use/test.
                  This can be achieved using transaction 'SE16' for table
                             'BWOM_SETTINGS'
                              Menu --> Table Entry --> Create
                              --> Add the above two parameters one after another
    b) To the views BKPF_BSAD, BKPF_BSAK, BKPF_BSID, BKPF_BSIK
                           under the view fields add the below field,
                           View Field  Table    Field      Data Element  DType  Length
                           CPUTM       BKPF    CPUTM          CPUTM      TIMS   6
                           This can be achieved using transaction 'SE11' for views
                           BKPF_BSAD, BKPF_BSAK , BKPF_BSID , BKPF_BSIK (one after another)
                               --> Change --> View Fields
                               --> Add the above mentioned field with exact details
    c) For the table BWFI_AEDAT index-1  for extractors
                           add the field AETIM (apart from the existing MANDT, BUKRS, and AEDAT)
                           and activate this Non Unique index on all database systems (or at least on the database under use).
                           This can be achieved using transaction 'SE11' for table 'BWFI_AEDAT'
                               --> Display --> Indexes --> Index-1 For extractors
                               --> Change
                               --> Add the field AETIM to the last position (after AEDAT field )
                               --> Activate the index on database
    2. Implement the source code changes as in the note correction instructions.
    3. After implementing the source code changes using the SNOTE instructions, add the following parameters to the respective function modules and activate them.
    a) Function Module: BWFIT_GET_TIMESTAMPS
                        1. Export Parameter
                        a. Parameter Name  : E_TIME_LOW
                        b. Type Spec       : LIKE
                        c. Associated Type : BKPF-CPUTM
                        d. Pass Value      : Ticked/checked (yes)
                        2. Export Parameter
                        a. Parameter Name  : E_TIME_HIGH
                        b. Type Spec       : LIKE
                        c. Associated Type : BKPF-CPUTM
                        d. Pass Value      : Ticked/checked (yes)
    b) Function Module: BWFIT_UPDATE_TIMESTAMPS
                        1. Import Parameter (add after I_DATE_HIGH)
                        a. Parameter Name  : I_TIME_LOW
                        b. Type Spec       : LIKE
                        c. Associated Type : BKPF-CPUTM
                        d. Optional        : Ticked/checked (yes)
                        e. Pass Value      : Ticked/checked (yes)
                        2. Import Parameter (add after I_TIME_LOW)
                        a. Parameter Name  : I_TIME_HIGH
                        b. Type Spec       : LIKE
                        c. Associated Type : BKPF-CPUTM
                        d. Optional        : Ticked/checked (yes)
                        e. Pass Value      : Ticked/checked (yes)
    4. Working of the minute-based extraction logic:
                  The minute-based extraction considers the time of the document in addition to its date (whether changed or new, as in the earlier logic) when selecting data. The code is modified to consider the new flags in the BWOM_SETTINGS table (BWFINEXT and BWFINSAF); without these flags set as per the instructions, the earlier extraction logic remains in effect.
    The safety interval now depends on the flag BWFINSAF (in seconds; default 3600, i.e. 1 hour), which tries to ensure that documents delayed in posting, due to delays in update modules for any reason, are still captured. There is also specific coding to post an entry to BWFI_AEDAT with the details of documents that failed to post within the safety limit, so that they are extracted at least as changed documents if they were missed as new documents. If a huge number of documents fail to post within the safety limit, the volume of BWFI_AEDAT will grow correspondingly.
    The flag BWFINSAF can be set to a particular value depending on specific requirements (in seconds, but at least 3600 = 1 hour), e.g. 24 hours / 1 day = 24 * 3600 => 86400. With the new logic switched ON via flag BWFINEXT = 'X', the flags BWFIOVERLA, BWFISAFETY, and BWFITIMBOR are ignored, while BWFILOWLIM and DELTIMEST work as before.
    As per the instructions above, index-1 for extraction on table BWFI_AEDAT now includes the field AETIM, which enables the new logic to extract faster, since AETIM is also considered by the new logic. This index change can be removed if the standard logic is restored.
    With the new extractor logic implemented, you can change back to the standard logic any day by switching the flag BWFINEXT from 'X' back to ' ' and extracting as before. But ensure that no extraction is running (for any of the 0FI_*_4 extractors/DataSources) while switching.
    As with the earlier logic, to restore the previous timestamp in the BWOM2_TIMEST table and re-extract the data from a previous run, LAST_TS can be set to the previous extraction timestamp, provided no extractions are currently running for that particular extractor or DataSource.
    With the frequency of extraction increased (say, 3 times a day), the volume of data extracted in each run decreases, and hence each extraction takes less time.
    You should optimize the interval between extractor runs by testing which intervals give the best performance. We cannot give a definite suggestion on this, as it varies from system to system and depends on the data volume in the system, the number of postings done every day, and other variable factors.
    To turn on the new logic, BWFINEXT has to be set to 'X', and reset back to ' ' when reverting. This change must be done only when no extractions are running, considering all the points above.
                  With the new minute-based extraction logic switched ON,
    a) Ensure BWFI_AEDAT index-1 is enhanced with addition of AETIM and is active on the database.
    b) Ensure BWFINSAF is at least 3600 (1 hour) in BWOM_SETTINGS.
    c) An optimum value of DELTIMEST is maintained as needed (the recommended/default value is 60).
    d) Proper testing (functional, performance) is performed in a standard test system, with all results positive, before moving the changes to the production system; the test system should be identical to the production system in settings and data.
    http://help.sap.com/saphelp_bw33/helpdata/en/af/16533bbb15b762e10000000a114084/frameset.htm

  • Xml to Oracle (Update more than one row)

    Hi,
    I want to update more than one row in table from .xml file. My xml file is as follows:
    <ROOT>
    <PROFILE PROFILEMASTER_PKEY="54" DB_MSTR_PKEY="2" PROFILE_NAME="Bhushans" DELIMETER="~" PRE_PROCESSOR="1" POST_PROCESSOR="10" PRE_PROCESSOR_TYPE="1" POST_PROCESSOR_TYPE="2" GROUPID="2" />
    <PROFILEDETAILS PROFILEMASTER_PKEY="54" TARGET_SOURCE_TABLE="FM_FEEDVALIDATION_LU" COLUMN_NAME="FEEDVALIDATION_ID" DATA_TYPE="NUMBER" DATA_SIZE="22" START_POSITION="12" END_POSITION="22" COLUMNORDER="1" PROFILEDETAILS_PKEY="399"/>
    <PROFILEDETAILS PROFILEMASTER_PKEY="54" TARGET_SOURCE_TABLE="FM_FEEDVALIDATION_LU" COLUMN_NAME="CHANGE_TYPE" DATA_TYPE="VARCHAR2" DATA_SIZE="1" START_POSITION="12" END_POSITION="144" COLUMNORDER="5" PROFILEDETAILS_PKEY="403"/>
    <OPTIONS PROFILEMASTER_PKEY ="54" LDR_SYNTX_DTLS_PKEY ="19" OPTIONVALUE="@" PROFILE_CFILE_PKEY="337" />
    <OPTIONS PROFILEMASTER_PKEY ="54" LDR_SYNTX_DTLS_PKEY ="19" OPTIONVALUE="~" PROFILE_CFILE_PKEY="336" />
    </ROOT>
    To update according to the xml file, I have written the following procedure. My procedure updates the table if you are updating one row; if the .xml file contains more than one row, my procedure doesn't work. Please help me solve this problem.
    Procedure:
    create or replace procedure fm_prc_xml_dup_up
    as
    f utl_file.file_type;
    s varchar2(2000);
    v varchar2(3000);
    xml XMLType;
    v_pmpk number;
    v_sdtl_pk number;
    chng_typ VARCHAR2(20);
    type r1 is ref cursor;
    rcur r1;
    v1 varchar2(120);
    v2 number;
    begin
    f := utl_file.fopen('CITI', 'S.XML', 'R');
    loop
    utl_file.get_line(f, s);
    v := v || ' ' || s;
    end loop;
    exception
    when no_data_found then
    utl_file.fclose(f);
    xml := xmltype(v);
    SELECT extract(xml, 'ROOT/CHANGE/@CHANGETYPE').getstringval()
    INTO CHNG_TYP
    FROM DUAL;
    UPDATE FM_PROFILEMAST
    set db_mstr_pkey = extract(xml, 'ROOT/PROFILE/@DB_MSTR_PKEY').getnumberval(),
    profile_name = extract(xml, 'ROOT/PROFILE/@PROFILE_NAME').getstringval(),
    file_type = extract(xml, 'ROOT/PROFILE/@FILE_TYPE').getstringval(),
    delimiter = extract(xml, 'ROOT/PROFILE/@DELIMETER').getstringval(),
    pre_processor = extract(xml, 'ROOT/PROFILE/@PRE_PROCESSOR').getstringval(),
    post_processor = extract(xml, 'ROOT/PROFILE/@POST_PROCESSOR').getstringval(),
    pre_processor_type = extract(xml, 'ROOT/PROFILE/@PRE_PROCESSOR_TYPE').getstringval(),
    post_processor_type = extract(xml, 'ROOT/PROFILE/@POST_PROCESSOR_TYPE').getstringval(),
    groupid = extract(xml, 'ROOT/PROFILE/@GROUPID').getstringval(),
    change_type = 'U',
    change_by = chng_typ,
    change_dt = default,
    active_flag = default
    WHERE profilemaster_pkey = extract(xml, 'ROOT/PROFILE/@PROFILEMASTER_PKEY').getnumberval();
    UPDATE FM_PROFILEDET
    SET target_source_table = extract(xml, 'ROOT/PROFILEDETAILS/@TARGET_SOURCE_TABLE').getstringval(),
    column_name = extract(xml, 'ROOT/PROFILEDETAILS/@COLUMN_NAME').getstringval(),
    data_type = extract(xml, 'ROOT/PROFILEDETAILS/@DATA_TYPE').getstringval(),
    data_size = extract(xml, 'ROOT/PROFILEDETAILS/@DATA_SIZE').getnumberval(),
    start_position = extract(xml, 'ROOT/PROFILEDETAILS/@START_POSITION').getnumberval(),
    end_position = extract(xml, 'ROOT/PROFILEDETAILS/@END_POSITION').getnumberval(),
    change_by = chng_typ,
    change_dt = default,
    columnorder = extract(xml, 'ROOT/PROFILEDETAILS/@COLUMNORDER').getstringval(),
    column_format = extract(xml, 'ROOT/PROFILEDETAILS/@COLUMN_FORMAT').getstringval(),
    nullable = extract(xml, 'ROOT/PROFILEDETAILS/@NULLABLE').getstringval(),
    change_type ='U',
    active_flag = default
    WHERE profiledetails_pkey = extract(xml, 'ROOT/PROFILEDETAILS/@PROFILEDETAILS_PKEY').getstringval();
    UPDATE FM_PROFILE_CFILE
    SET profilemaster_pkey = extract(xml, 'ROOT/PROFILE/@PROFILEMASTER_PKEY').getnumberval(),
    ldr_syntx_dtls_pkey = extract(xml, 'ROOT/OPTIONS/@LDR_SYNTX_DTLS_PKEY').getstringval(),
    val = extract(xml, 'ROOT/OPTIONS/@OPTIONVALUE').getstringval(),
    change_by = chng_typ,
    change_dt = default,
    sub_line_seq = extract(xml, 'ROOT/OPTIONS/@SUB_LINE_SEQ').getstringval(),
    change_type = 'U',
    active_flag = default
    where profile_cfile_pkey = extract(xml, 'ROOT/OPTIONS/@PROFILE_CFILE_PKEY').getnumberval();
    END;
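As a side note, the many single-value extract() calls above can be collapsed into one XMLTABLE projection per table (extract()/extractValue() are deprecated from 11gR2 in favour of XMLTABLE/XMLQuery). The following is only a sketch for the FM_PROFILEMAST update, assuming the same ROOT/PROFILE attribute layout; the column names are taken from the code above, but the VARCHAR2 sizes are assumptions:

```sql
-- Sketch: shred the PROFILE element once via XMLTABLE instead of
-- calling extract() once per column, then update from the result.
UPDATE fm_profilemast fm
SET (db_mstr_pkey, profile_name, file_type) =
    (SELECT x.db_mstr_pkey, x.profile_name, x.file_type
     FROM XMLTABLE('/ROOT/PROFILE' PASSING xml
            COLUMNS
              profilemaster_pkey NUMBER        PATH '@PROFILEMASTER_PKEY',
              db_mstr_pkey       NUMBER        PATH '@DB_MSTR_PKEY',
              profile_name       VARCHAR2(100) PATH '@PROFILE_NAME',  -- size assumed
              file_type          VARCHAR2(30)  PATH '@FILE_TYPE'      -- size assumed
          ) x
     WHERE x.profilemaster_pkey = fm.profilemaster_pkey)
WHERE fm.profilemaster_pkey IN
      (SELECT x.profilemaster_pkey
       FROM XMLTABLE('/ROOT/PROFILE' PASSING xml
              COLUMNS profilemaster_pkey NUMBER PATH '@PROFILEMASTER_PKEY') x);
```

The trailing WHERE ... IN clause restricts the update to rows actually present in the XML, which is exactly the point raised in the reply below about the missing WHERE clause.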

Hi Bhushan,
a WHERE clause is missing from the main update:
update fm_profiledet
set (....)
= (select ....)
where id in (select your profiledetails_pkey from the xml); <-- this WHERE clause was missing.
If extracting the XML is too slow (the XML is very large), you can create a procedure that extracts the data from the XML and then updates the rows in a FOR loop.
Something like this:
create procedure up_xmls(p_xml xmltype) is
  cursor cur_xml(p_xml xmltype) is
    select ...... ; -- here you extract your XML
begin
  for r_row in cur_xml(p_xml) loop
    update fm_profiledet
       set target_source_table = r_row.target_source_table
     where profiledetails_pkey = r_row.profiledetails_pkey;
  end loop;
end;

This should work:
    SQL> drop table fm_profiledet;
    Table dropped.
    SQL> create table fm_profiledet(
      2   profiledetails_pkey number
      3  ,target_source_table varchar2(100)
      4  ,column_name varchar2(100)
      5  ,data_type varchar2(100)
      6  ,data_size number
      7  ,start_position number
      8  ,change_type varchar2(100)
      9  )
    10  /
    Table created.
    SQL>
    SQL>
    SQL> insert into fm_profiledet
      2  values(399,'test','test1','test2',1,2,'A')
      3  /
    1 row created.
    SQL>
    SQL>
    SQL> insert into fm_profiledet
      2  values(403,'test3','test4','test5',3,4,'B')
      3  /
    1 row created.
    SQL> insert into fm_profiledet
      2  values(443,'test3','test4','test5',3,7,'B')
      3  /
    1 row created.
    SQL>
    SQL>
    SQL> select * from fm_profiledet;
    PROFILEDETAILS_PKEY TARGET_SOU COLUMN_NAM DATA_TYPE  DATA_SIZE START_POSITION CHANGE_TYP                               
                    399 test       test1      test2              1              2 A                                        
                    403 test3      test4      test5              3              4 B                                        
                    443 test3      test4      test5              3              7 B                                        
    SQL>
    SQL> create or replace directory xmldir as '/home/ants';
    Directory created.
    SQL>
    SQL>
    SQL>
    SQL> update fm_profiledet fm
      2  set (target_source_table,column_name, data_type, data_size, start_position,change_type)
      3  =(
      4    select  target_source_table
      5          , column_name
      6          , data_type
      7          , data_size
      8          , start_position
      9          , change_type
    10    from(
    11      select
    12        extractValue(value(x),'/PROFILEDETAILS/@PROFILEDETAILS_PKEY') profiledetails_pkey
    13      , extractValue(value(x),'/PROFILEDETAILS/@TARGET_SOURCE_TABLE') target_source_table
    14      , extractValue(value(x),'/PROFILEDETAILS/@COLUMN_NAME') column_name
    15      , extractValue(value(x),'/PROFILEDETAILS/@DATA_TYPE') data_type
    16      , extractValue(value(x),'/PROFILEDETAILS/@DATA_SIZE') data_size
    17      , extractValue(value(x),'/PROFILEDETAILS/@START_POSITION') start_position
    18      ,'U' change_type
    19     from
    20      table(xmlsequence(extract(xmltype(bfilename('XMLDIR','prof.xml')
    21                                      ,nls_charset_id('AL32UTF8'))
    22                               , '/ROOT/PROFILEDETAILS'))) x
    23    ) s
    24  where s.profiledetails_pkey=fm.profiledetails_pkey)
    25  where
    26    fm.profiledetails_pkey in (select
    27        extractValue(value(x),'/PROFILEDETAILS/@PROFILEDETAILS_PKEY') profiledetails_pkey
    28     from
    29      table(xmlsequence(extract(xmltype(bfilename('XMLDIR','prof.xml')
    30                                      ,nls_charset_id('AL32UTF8'))
    31                               , '/ROOT/PROFILEDETAILS'))) x
    32  );
    2 rows updated.
    SQL>
    SQL>
    SQL> select * from fm_profiledet;
    PROFILEDETAILS_PKEY TARGET_SOU COLUMN_NAM DATA_TYPE  DATA_SIZE START_POSITION CHANGE_TYP                               
                    399 FM_FEEDVAL FEEDVALIDA NUMBER            22             12 U                                        
                        IDATION_LU TION_ID                                                                                 
                    403 FM_FEEDVAL CHANGE_TYP VARCHAR2           1             12 U                                        
                        IDATION_LU E                                                                                       
                    443 test3      test4      test5              3              7 B                                        
SQL> spool off;

Ants
    Message was edited by:
    Ants Hindpere
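Putting the two pieces together, the loop-based procedure sketched earlier could be completed like this. It is only a sketch: the table, attribute names, and the xmlsequence/extractValue pattern are taken from the example above, and only two columns are carried through for brevity:

```sql
create or replace procedure up_xmls(p_xml xmltype) is
  -- Shred each /ROOT/PROFILEDETAILS element into one cursor row.
  cursor cur_xml(c_xml xmltype) is
    select extractValue(value(x),'/PROFILEDETAILS/@PROFILEDETAILS_PKEY') profiledetails_pkey
         , extractValue(value(x),'/PROFILEDETAILS/@TARGET_SOURCE_TABLE') target_source_table
    from table(xmlsequence(extract(c_xml, '/ROOT/PROFILEDETAILS'))) x;
begin
  -- Update one target row per PROFILEDETAILS element.
  for r_row in cur_xml(p_xml) loop
    update fm_profiledet
       set target_source_table = r_row.target_source_table
         , change_type         = 'U'
     where profiledetails_pkey = r_row.profiledetails_pkey;
  end loop;
end up_xmls;
/
```

The set-based UPDATE above is normally preferable; the loop version is mainly useful when the XML is so large that shredding it twice (once for SET, once for WHERE) becomes a bottleneck.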

  • Issues with a customized extractor running 'Full' loads.

    Hi Gurus,
    Is it possible to check at what date/time a particular data record has been posted in the VBPA table [Sales Document Partner Function Table]?
    My issue is that we have a customized extractor that pulls data from both the VBAP & VBPA tables based on VBELN [Doc No] & POSNR [Item]. We are using an 'Inner Join' on these two tables, so if either the Doc No or Item is not present in the VBPA table at the time of extraction, the data record will be skipped and we are facing several such missing documents in BW.
The loads are 'Full', based on 'ERDAT' & 'AEDAT' on the VBAP table for the last 3 days, i.e. if a Sales Document was either created or changed in the last 3 days, it is picked up by the extractor. So my reasoning for the missing data records in BW is that the VBPA table entries for that particular order were updated at a later date, and hence the data was missed during extraction [because of the 'Inner Join' on Doc No & Item].
But since VBPA has neither a created-on nor a changed-on field, I am unable to find out when exactly a data record was updated in the VBPA table.
    Any suggestions that you can provide will be highly appreciated!
    Thanks

Hi,
I feel your table join logic is not correct. Please discuss it with your functional team and try to revise the logic; otherwise you may keep missing some records. Test the extractor thoroughly in RSA3 before replicating it to BW.
Regards,
Suman
