Non-permitted characters - data loading

In our system we have a data issue: many records in the entry-level tables (extraction layer) contain non-permitted characters in text fields.
We introduced a routine to filter these out, but the already-existing records will cause problems during the init load.
For this we need a rather tricky ABAP solution.
We have a function module that replaces invalid characters.
So now I want to loop over an entire database table and apply the FM to a certain table field.
We have done something like that in ABAP before, but the biggest issue is that the tables contain several million records, so we run into memory problems.
Can you please suggest how to handle this?
Your help will be appreciated.
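One way to keep memory under control is to process the table in packages with a database cursor instead of one big SELECT. A minimal sketch, where the table ZTABLE, its field TXTFIELD and the FM name Z_REPLACE_INVALID_CHARS (including its parameter names) are placeholders for your actual objects:

```abap
DATA: lt_buffer TYPE STANDARD TABLE OF ztable,
      lv_cursor TYPE cursor.

FIELD-SYMBOLS <ls_row> TYPE ztable.

* WITH HOLD keeps the cursor open across the COMMIT WORK below
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM ztable.

DO.
  FETCH NEXT CURSOR lv_cursor
    INTO TABLE lt_buffer
    PACKAGE SIZE 50000.
  IF sy-subrc <> 0.
    EXIT.                                    " no more records
  ENDIF.

  LOOP AT lt_buffer ASSIGNING <ls_row>.
    CALL FUNCTION 'Z_REPLACE_INVALID_CHARS'  " placeholder FM name
      EXPORTING
        iv_input  = <ls_row>-txtfield
      IMPORTING
        ev_output = <ls_row>-txtfield.
  ENDLOOP.

  MODIFY ztable FROM TABLE lt_buffer.
  COMMIT WORK.                               " write each package back
ENDDO.

CLOSE CURSOR lv_cursor.
```

Only one package of rows is ever held in memory at a time; tune PACKAGE SIZE to your roll area.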

Hi USER1249,
What about using standard BW functionality: permitted characters, transaction RSKC?
There you can maintain a set of permitted characters that will not be rejected by BW during loading.
BR
m./

Similar Messages

  • Non-Cumulative KF data loading

    Hi,
    I have to copy the data from a non-cumulative cube to its copy cube (created using the copy-cube option). I have created the DTP and loaded the data directly into the copy cube.
    When I compare the data between these two cubes, it is not the same for the non-cumulative key figures.
    How can I load the data to overcome the mismatch? Is any special care needed when loading the non-cumulative values? (I can see one extra option in the DTP, 'Initial Non-cumulative for Non-cumulative values'; I tried this, but it loads only the non-cumulative values, and the other key figure data is not loaded.)
    Regards,
    Vishnu

    Hi,
    1) Determine how you wish to evaluate the non-cumulative value. If you only wish to evaluate the non-cumulative, or the non-cumulative changes over a particular time period, choose modeling as a non-cumulative value with non-cumulative changes. If you also want to evaluate the inflows and outflows separately, choose modeling as a non-cumulative key figure with inflows and outflows.
    2) The non-cumulative change, or the inflows and outflows, that you require for the definition of the non-cumulative value must already exist as InfoObjects when the non-cumulative value is created. Therefore, first define either the non-cumulative change or the in- and outflows as normal cumulative values where necessary. These key figures must correspond in type to the non-cumulative values still to be defined. The non-cumulative changes, or inflows and outflows, must have 'summation' as aggregation and as exception aggregation.
    3) Define the non-cumulative value by assigning to it the previously defined non-cumulative change, or the in- and outflows.
    For more info on Non-Cumulative KF go through the below link
    http://help.sap.com/saphelp_bw30b/helpdata/en/80/1a6305e07211d2acb80000e829fbfe/content.htm
    Regards,
    Marasa.

  • Hexadecimal Non-Allowed Characters in a Unicode System

    We have a function module that we've written to replace non-permitted characters with a space in transfer rules. We see a lot of invisible hexadecimal characters coming in on free-form text fields. This works great for English. However, we have a Unicode system with other languages installed, and we are also getting the hex characters in other character sets.
    Has anyone dealt with this issue, and if so, what was your solution?
    Thanks!
    Al

    Hello aLaN,
    How are you?
    We have faced a problem with hexadecimal characters, though not the same issue. In our case the problem was in the source system: the DB tables contained some unwanted characters that caused errors during data loading, in particular ERROR 18.
    We resolved it by correcting the source system data.
    I have already posted about the hexadecimal issue; the replies were:
    I think this is related to invalid-character issues, or the spaces setting in RSKC.
    You may want to look at the following post:
    Re: invalid characters
    /people/siegfried.szameitat/blog/2005/07/18/text-infoobjects-part-1
    Example:
    Let us say:
    1. Check in RSKC for the allowed characters.
    2. Add code in the update rule to restrict the text contents.
    If !"%&'()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ are the allowed characters in transaction RSKC, then anything outside this set is invalid, including lower-case letters, and will throw the hex error.
    Since this comes from the database system, even a NULL value there is not visible to the eye and can cause the failure.
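The per-character check suggested in step 2 could be sketched like this in an update-rule routine (the input text and field length are placeholders for the actual rule parameters):

```abap
* Blank out every character outside the RSKC-allowed set quoted above.
CONSTANTS lc_allowed TYPE string
  VALUE ` !"%&'()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ`.

DATA: lv_text TYPE c LENGTH 60,       " stand-in for the rule's text field
      lv_len  TYPE i,
      lv_pos  TYPE i.

TRANSLATE lv_text TO UPPER CASE.      " lower case is invalid in this set

lv_len = strlen( lv_text ).
DO lv_len TIMES.
  lv_pos = sy-index - 1.
  IF NOT lv_text+lv_pos(1) CO lc_allowed.
    lv_text+lv_pos(1) = ' '.          " replace the invalid character
  ENDIF.
ENDDO.
```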
    Hope this helps
    Best Regards....
    Sankar Kumar
    91 98403 47141

  • Non-ASCII characters after data load

    Oracle 10g R2 on HP-UX; Excel 2007.
    I am loading one table of data from Microsoft Excel into SQL Server 2008, and then from SQL Server 2008 into an Oracle 10g R2 database.
    I am accomplishing the above task using an SSIS package (ETL tool).
    When I point the SSIS package to the production database (Oracle), some data in a few columns contains non-ASCII characters.
    When the same SSIS package is pointed to the test Oracle database, there is no data issue. The Oracle test and production databases are on two different boxes. Why does the same Excel data and SSIS package not work on the production box? Does any initialization parameter affect this data load? Or do I have to change any code?
    Thank you for your help.
    smith

    The OS on both boxes are the same.
    HP-UX B.11.11 U (64). When I look at the patch level there are more than 100 files to compare.
    select * from NLS_DATABASE_PARAMETERS;
    Running the above query resulted in 20 rows as output.
    NLS_NCHAR_CHARACTERSET     AL16UTF16
    NLS_LANGUAGE     AMERICAN
    NLS_TERRITORY     AMERICA
    NLS_CURRENCY     $
    NLS_ISO_CURRENCY     AMERICA
    NLS_NUMERIC_CHARACTERS     .,
    NLS_CHARACTERSET     WE8ISO8859P1
    NLS_CALENDAR     GREGORIAN
    NLS_DATE_FORMAT     DD-MON-RR
    NLS_DATE_LANGUAGE     AMERICAN
    NLS_SORT     BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT     HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY     $
    NLS_COMP     BINARY
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CONV_EXCP     FALSE
    NLS_RDBMS_VERSION     10.2.0.2.0
    The above query result is the same, bit for bit, on the production and test boxes.
    Is the load process exactly the same?
    I just changed the connection in SSIS to point to the production database. Also, the table structure (data types, etc., as far as I can see through SQL Developer) is the same in the production and test databases.
    What else could be the reason for this?
    Smith

  • Removing non-English characters from data.

    Ours is a global system with some data containing non-English characters. We want to download a file after removing these non-English characters.
    Any suggestions on how we can remove these non-English characters from the file?

    The FM you mentioned:
         Replace non-standard characters with standard characters
       Functionality
         SCP_REPLACE_STRANGE_CHARS processes a text so that it only contains
         simple characters. Special characters and national characters are
         replaced in such a way that the text remains reasonably legible.
         The character set 1146 is used by default. In this case the following
         replacements are made, for example:
          Æ ==> AE        (AE)
          Â ==> A         (Acircumflex)
          Ä ==> Ae        (Adieresis)
          £ ==> L         (sterling)
         Note that the new text can be longer than the old.
    So I don't think it will be useful for eliminating the special characters.
    You would have to check each and every character against the standard 26 letters.
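For reference, a call to the FM discussed above might look like this (exception names follow the standard signature as I recall it; verify in SE37 on your release):

```abap
DATA: lv_in  TYPE c LENGTH 50,
      lv_out TYPE c LENGTH 100.       " output can be longer than input

lv_in = 'Æther £5 Ärger'.

CALL FUNCTION 'SCP_REPLACE_STRANGE_CHARS'
  EXPORTING
    intext            = lv_in
  IMPORTING
    outtext           = lv_out
  EXCEPTIONS
    invalid_codepage  = 1
    codepage_mismatch = 2
    internal_error    = 3
    cannot_convert    = 4
    fields_not_type_c = 5
    OTHERS            = 6.

IF sy-subrc = 0.
  " lv_out now holds e.g. 'AEther L5 Aerger', per the mapping above
ENDIF.
```

As vinsee notes, this only transliterates national characters to plain ones; it does not remove characters that have no sensible replacement.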
    Thanks & Regards
    vinsee

  • Loading Non-English Characters using VBA and BAPI

    Hi Experts,
    I am trying to load non-English characters (Chinese, Korean, Japanese, etc.) into an SAP table using BAPI and VBA. I have set the connection language and codepage values, but when I run the tool the non-English characters display as ????? or #####. Do you know how to fix this issue?
    Thanks!

    If your language is Unicode, then you need to change the options accordingly: in SAP, on the initial screen, choose Customize Local Layout (Alt+F12) --> Options --> I18N --> Encoding ....

  • SQL*Loader-350: Illegal combination of non-alphanumeric characters

    Hi all,
    How do I skip a column in the control file?
    i.e., I have a table with 10 columns and the flat file contains 11 columns.
    How do I skip the last column?
    Also, I added an extra column to the table and changed the control file accordingly, but I am getting an error:
    SQL*Loader-350: Syntax error at line 115.
    Illegal combination of non-alphanumeric characters
    Thanks in advance.

    Tom Kyte has an elaborate solution on his page:
    http://osi.oracle.com/~tkyte/SkipCols/index.html
    Or post the table description and the control file, so we can have a look at it.
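On Oracle 8i and later you can also skip the extra input field by declaring it as FILLER in the control file. A sketch, with invented file, table and column names:

```
LOAD DATA
INFILE 'data.dat'
INTO TABLE target_tab
FIELDS TERMINATED BY ','
( col01,
  col02,
  col03,
  extra_field FILLER    -- last input field: parsed, but not loaded
)
```

A FILLER field is read and counted for field positioning but never bound to a table column.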

  • Data load into SAP ECC from Non SAP system

    Hi Experts,
    I am very new to BODS and I want to load historical data from a non-SAP source system into SAP R/3 tables such as VBAK and VBAP using BODS. Can you please provide steps/documents or guidelines on how to achieve this?
    Regards,
    Monil

    Hi
    In order to load into SAP you have the following options
    1. Use IDocs. There are several standard IDocs in ECC for specific objects (MATMAS for materials, DEBMAS for customers, etc.). You can generate and send IDocs as messages to the SAP target using BODS.
    2. Use LSMW programs to load into the SAP target. These programs require input files generated in specific layouts, which can be produced with BODS.
    3. Direct input: write ABAP programs targeting specific tables. This approach is very complex, so a lot of thought needs to go into it.
    The OSS Notes supplied in previous messages are all excellent guidance to steer you in the right direction on the choice of load, etc.
    However, the data load into SAP needs to be object-specific. Merely targeting the sales tables will not help, as the sales document data held in the VBAK and VBAP tables you mentioned is related to articles: these tables hold sales document data for already created articles. So if you want to specifically target these tables, you may need to prepare an LSMW program for the purpose.
    To answer your question on whether it is possible to load objects like Materials, customers, vendors etc using BODS, it is yes you can.
    Below is a standard list of IDocs that you can use for this purpose to load into SAP ECC system from a non SAP system.
    Customer Master - DEBMAS
    Article Master - ARTMAS
    Material Master - MATMAS
    Vendor Master - CREMAS
    Purchase Info Records (PIR) - INFREC
    The list is endless.........
    In order to achieve this, you will need the functional design consultants to provide ETL mappings from the legacy data to the IDoc target schema and fields (better to have the technical table names and fields too). You should then prepare the data, putting it through the standard check-table validations for each object along with any business-specific conversion rules and validations. Having prepared this data, you can either generate flat-file output for loading into SAP via LSMW programs, or generate IDoc messages to the target SAP system.
    If you are going to post IDocs directly into the SAP target using BODS, you will need to create a partner profile for BODS to send IDocs, and define the IDocs you need as inbound IDocs. There are a few more settings, such as RFC connectivity and authorizations, required for BODS to successfully send IDocs into the SAP target.
    Do let me know if you need more info on any specific queries or issues you may encounter.
    kind regards
    Raghu

  • Data load fails from DB table - No SID found for value 'MT ' of characteristic

    Hi,
    I am loading data into BI from an external system (Oracle database).
    This system has different units like BG, ROL, MT (for meter), while these units are not maintained in R/3/BW; there they are BAG, ROLL and M respectively.
    Now the user wants a Z table to be maintained in BW which holds the mapping between external-system units and BW units,
    so that the data load does not fail. The source system keeps its own units, but at load time the BW units are written.
    For example:
    Input unit (BG) --> loaded unit in BW (BAG)
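The lookup in the transfer/update rule could be sketched like this, where ZUNIT_MAP with fields EXT_UNIT and BW_UNIT is the hypothetical mapping table the user asked for:

```abap
DATA: lv_input_unit  TYPE msehi VALUE 'BG',  " unit as delivered
      lv_bw_unit     TYPE msehi,
      lv_result_unit TYPE msehi.

* Look up the BW unit for the external unit
SELECT SINGLE bw_unit
  FROM zunit_map
  INTO lv_bw_unit
  WHERE ext_unit = lv_input_unit.

IF sy-subrc = 0.
  lv_result_unit = lv_bw_unit.        " mapped unit, e.g. BG -> BAG
ELSE.
  lv_result_unit = lv_input_unit.     " pass unmapped units through
ENDIF.
```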
    Regards,
    Saurabh T.

    Hello,
    The table T006 (BW side) holds all the UoM; the only thing is to make sure that all the source-system UoM are maintained in it. It also has fields for source units and target units, as you mentioned (BG from the source will become BAG). See the fields MSEHI and ISOCODE in table T006.
    If you want to convert to other units, then you need to implement unit conversion.
    Also see
    [How to Report Data in Alternate Units of Measure|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b7b2aa90-0201-0010-a480-a755eeb82b6f]
    Thanks
    Chandran

  • Permitted characters 'ALL_CAPITAL'

    We need to accept a large number of unusual characters, and we hit the limit on the number of characters we can enter. Note 173241 explains that you can use ALL_CAPITAL (on its own) to get around the limit. This seems to work fine, but I'd like to know exactly what ALL_CAPITAL permits. I believe that it accepts any characters that are accepted by the installed code pages except lowercase letters, and I presume it rejects non-displayable characters.
    Anybody know how ALL_CAPITAL works?

    Hello,
    You created this message in 2004; I wonder whether you remember it.
    Your description is very interesting for me. We now plan to implement a BW 3.5 Unicode version which we will share across Asia/Pacific, so our users will use it in Japanese, Korean, English and German.
    We faced a data load problem between a Japanese source system and the BW Unicode system. We could solve this problem by setting 'ALL_CAPITAL'.
    My question is whether 'ALL_CAPITAL' is valid for all languages.
    You mentioned that the set of allowed characters depends on the code page.
    I wonder whether the code page means the logon language or the system code page, which is Unicode. If it is the system code page, 'ALL_CAPITAL' may cover all languages included in Unicode.

  • Non-English character conversion issue in LSMW BAPI inbound IDocs

    Hi Experts,
    We have some fields in the customer master LSMW data load program which can contain non-English characters, and we are facing issues with non-English character conversion in the LSMW BAPI method. The LSMW read and conversion steps show the non-English characters properly without any issue, but while creating the inbound IDocs most of the non-English characters are replaced with '#', which causes issues in creating the customer master data in the system. In our scenario the customer data contains non-English characters in the first name, last name and address details. Does any specific setting need to be made on our side? Please suggest how to resolve this issue.
    Thanks
    Rajesh Yadla

    If your language is Unicode, then you need to change the options accordingly: in SAP, on the initial screen, choose Customize Local Layout (Alt+F12) --> Options --> I18N --> Encoding ....

  • UrlENEQuery encoding is failing for some non-English characters

    While creating a UrlENEQuery we are getting the error com.endeca.navigation.InternalException: No support for 8-bit urls.
    This error happens when the query string contains non-English characters (e.g. Á).

    UrlENEQuery is designed around processing URL data, and non-ASCII characters are not permitted in URLs. To represent non-ASCII characters they must be %-encoded according to their byte representation in a particular character encoding, and you should prefer UTF-8 for URLs. So your LATIN CAPITAL LETTER A WITH ACUTE (U+00C1) should appear as %C3%81 in your URL, and then UrlENEQuery should be able to process that character.

  • Scanning files for non-Unicode characters

    Question: I have a web application that allows users to take data, enter it into a webapp, and generate an XML file on the server's filesystem containing the entered data. The code of this application cannot be altered (outside vendor). I have a second webapp, written by yours truly, that has to parse through these XML files to build a dataset used elsewhere.
    Unfortunately I'm having a serious problem. Many of the web application's users are apparently cutting and pasting their information from other sources (frequently MS Word), and in the process are embedding non-Unicode characters in the XML files. When my application attempts to open these files (using DocumentBuilder), I get a SAXParseException: "Document root element is missing".
    I'm sure others have run into this sort of thing, so I'm trying to figure out the best way to tackle the problem. Obviously I'm going to have to start pre-scanning the files for invalid characters, but finding an efficient method has proven to be a challenge. I can load the file into a String array and search it character by character, but that is both extremely slow (we're talking thousands of long XML files) and would require that I predefine the invalid characters (so anything new would slip through).
    I'm hoping there's a faster, easier way to do this that I'm just not familiar with or haven't found elsewhere.

    "...require that I predefine the invalid characters"
    This isn't hard to do, and it isn't subject to change: the XML recommendation tells you exactly which characters are valid in XML documents.
    However, if your problems extend to cases where users paste code including the "&" character into a text node without escaping it properly, or drop in MS Word "smart quotes" in the incorrect encoding, then I think you'll just have to face the fact that allowing naive users to generate uncontrolled wannabe-XML documents is not really a viable idea.
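For reference, the set of characters the XML 1.0 recommendation permits is given by its Char production:

```
Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
```

Anything outside these ranges (most control characters, for instance) makes the document ill-formed regardless of encoding.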

  • Troubleshooting 9.3.1 Data Load

    I am having a problem with a data load into Essbase 9.3.1 from a flat, pipe-delimited text file, loading via a load rule. I can see an explicit record in the file but the results on that record are not showing up in the database.
    * I made a special one-off file with the singular record in question and the data loads properly and is reflected in the database. The record itself seems to parse properly for load.
    * I have searched the entire big file (230Mb) for the same member combination, but only come up with this one record, so it does not appear to be a "last value in wins" issue.
    * Most other data (610k+ rows) appears to be loading properly, so the fields, in general, are being properly parsed out in the load rule. Additionally, months of a given item are on separate rows, and other rows of the same item are loading properly and being reflected in the database. As well as other items are being loaded properly in the months where this data loads to, so, it is not a metadata-not-existing issue.
    * The load is 100% successful according to the non-existent error file. Also, loading the file interactively results in the file showing up under "loaded successfully" (no errors).
    NOTE:
    The file's last column does contain item descriptions which may include special characters including periods and quotes and other special characters. The load rule moves the description field to the earlier in the columns, but the file itself has it last.
    QUESTION:
    Is it possible that a special character (a quote?) in a preceding record is causing the field parsing to swallow the CR/LF, and therefore pull the next record into one record? I keep thinking that if the record seems to be fine alone, but not where it sits among other records, it may have to do with the preceding or subsequent records.
    THOUGHTS??

    Thanks Glenn. I was so busy looking for explicit members that I neglected to think through implicit members. I guess I was thinking that implied members don't work if you have a rules file that parses out columns; that a missing member would just error out a record instead of using the last known value. In fact, I thought that (last known value) only worked if you didn't use a load rule.
    I would prefer some switch in Essbase that either requires keys in all fields in a load rule or allows last known value.

  • Flex, xml, and non-English characters

    Hello! I have a Flex web app with an AdvancedDataGrid, and I use an HTTPService component to load data into the grid. The .xml file contains non-English characters (Russian, in my case) in attributes, like this:
    <?xml version="1.0" encoding="utf-8" ?>
       <Autoparts>
        <autopart DESCRIPTION="Барабан" />
    </Autoparts>
    And when I run the app, the AdvancedDataGrid displays it like "Ñ&#129;ПÐ". How can I fix it? I tried changing encoding="utf-8" to some other charsets, but unsuccessfully. Thank you.

    Try changing the XML structure by using CDATA instead of having the Russian part as an attribute, and see if that makes any difference.
    What I meant is use something like this:
    <?xml version="1.0" encoding="utf-8" ?>
       <Autoparts>
        <autopart>
           <description><![CDATA[Барабан]]></description>
      </autopart>
    </Autoparts>
    instead of the current xml.
