Thesaurus expansion limit?

Hi,
on 8.1.7 (Solaris) I am running into the following problem with my custom thesaurus:
Whenever an NT-query expands to more than a few hundred terms, I get the following error message:
SQL> select count(*) from doctable where contains(text, 'NT(person,4,tiny_thes)')>0;
select count(*) from doctable where contains(text, 'NT(person,4,tiny_thes)')>0
ERROR at line 1:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-20000: interMedia Text error:
DRG-50942: errors:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 1
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 1
Is there a limit for the expansion, and how is it defined? I couldn't find anything about this in the docs.

The expansion limit is 32000 bytes, which is a result of PL/SQL limits and how our thesaurus expansion
mechanism is currently coded. However, you are not supposed to get an error -- it is supposed to just fill up
the 32000 byte buffer and truncate after that -- the query should go through.
Bug 1426493 is very similar; what happened there was that the expansion would give this error at over 4096 bytes
or so. That is fixed in the 8.1.7.1 patchset (RDBMS patchset 8.1.7.1B).
If you are running 8.1.7.1.0 (non-B) or, god forbid, 8.1.7.0, you might want to upgrade.
The latest 8.1.7 patchset is 8.1.7.4 -- I would highly recommend an upgrade to 8.1.7.3 if you can swing it.
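If you cannot patch right away, one hedged workaround is to do the narrower-term expansion yourself with CTX_THES and keep an eye on its size before it ever reaches CONTAINS. A sketch using the names from the post above (the 30000-byte threshold is an arbitrary safety margin, not an official value):

```sql
-- Sketch: pre-expand the narrower terms ourselves and watch the size,
-- instead of letting CONTAINS hit the internal 32000-byte buffer.
-- tiny_thes is the thesaurus name from the post above.
DECLARE
  v_expansion VARCHAR2(32000);
BEGIN
  -- CTX_THES.NT returns the narrower-term expansion as a query string
  v_expansion := ctx_thes.nt('person', 4, 'tiny_thes');
  dbms_output.put_line('Expansion length: ' || LENGTH(v_expansion));
  IF LENGTH(v_expansion) > 30000 THEN
    dbms_output.put_line('Warning: close to the 32000-byte limit; reduce the NT level.');
  END IF;
END;
/
```

CTX_THES.NT should return essentially the same expansion string the NT() query operator generates, so its length is a reasonable proxy for how close you are to the buffer limit.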

Similar Messages

  • Entity Expansion Limit Reached!

    Hi experts!
    I use JAXB to parse the DBLP data; however, I received the following error message:
    "org.xml.sax.SAXParseException: The parser has encountered more than "64,000" entity expansions in this document; this is the limit imposed by the application."
    Can someone give me a hint on how to increase the number of entity expansions allowed by SAXParse?
    Thanks!

    Set the system property -DentityExpansionLimit=128000.

  • DRG-51030: wildcard query expansion

    Hi,
    encountered an error when I ran this,
    SELECT /*+use_hash(t$oracle_text)*/
    count(documentid)
    FROM t$oracle_text
    WHERE CONTAINS (
    dummy,
    'near(((approv%=underwrit%=report%),(waive%=exception%=override%)),10)',
    1) > 0
    ERROR at line 1:
    ORA-29902: error in executing ODCIIndexStart() routine
    ORA-20000: Oracle Text error:
    DRG-51030: wildcard query expansion resulted in too many terms
    I have a few questions:
    1) What is this = sign in the where clause?
    2) How do I get rid of this error?
    3) Is there any other alternative to run this without hitting the wildcard query expansion limit?
    Thanks,

    Hello,
    There's a good discussion of that exception here -
    How to limit the number of search results returned by oracle text
    John
    http://jes.blogs.shellprompt.net
    http://apex-evangelists.com

  • How to use synonyms on multiple word search ?

    We use context with multiword search like this one :
    select * from my_table where contains(my_text,'the small building near the river')>0
    Now we have specific synonyms in a thesaurus. How do we write the contains clause ?
    contains(my_text,'syn(the,thes) and syn(small,thes) and syn(building,thes) and syn(near,thes) and syn(river,thes)')>0 does not find the synonym for "small building"="house", for instance
    contains(my_text,'syn(the small building near the river,thes)')>0 only finds synonyms for the full sentence.
    More generally is there an Oracle Document which describes how to use SYN, FUZZY and combine them, since
    the reference documentation gives only limited information on this ?
    Have a nice day

    The thesaurus functionality is not currently built for stuff like this.
    If you want to combine fuzzy and thesaurus, I am assuming you want to do fuzzy first, to correct any misspellings,
    then thesaurus on the "corrected" spellings? You'd have to do something like:
    1. take the query and run ctx_query.explain to break it down and do the fuzzy expansion
    2. work through the fuzzy expansion and build a new query string by sticking SYN() around each
    expanded word
    As for thesaurus expansion and phrase, these are not compatible. Thesaurus expansions use "," and "|", and so
    you cannot have a phrase of thesaurus expansions.
    I see what you're getting at, but you would need sub phrase detection, phrase equivalence, etc., which is
    currently beyond the thesaurus function capability.
    You can use themes (ABOUT) on phrases, and it will do something like what you are describing. You might want
    to check that out.
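    A rough sketch of steps 1 and 2 above, hedged heavily: MY_INDEX, EXPLAIN_TAB and the thesaurus name thes are placeholders, and EXPLAIN_TAB must already exist with the documented CTX_QUERY.EXPLAIN column layout:

    ```sql
    -- Sketch of the two-step approach: fuzzy-expand first, then wrap each
    -- expanded word in SYN(). MY_INDEX, EXPLAIN_TAB and thes are placeholder
    -- names, not from the original post.
    DECLARE
      v_new_query VARCHAR2(4000);
    BEGIN
      -- Step 1: let Oracle Text break down and expand the fuzzy query
      ctx_query.explain(
        index_name    => 'MY_INDEX',
        text_query    => 'fuzzy(bulding) and fuzzy(rivver)',
        explain_table => 'EXPLAIN_TAB',
        sharelevel    => 0,
        explain_id    => 'demo');
      -- Step 2: collect the expanded words and wrap each one in SYN()
      FOR r IN (SELECT object_name
                FROM explain_tab
                WHERE explain_id = 'demo'
                  AND operation = 'WORD') LOOP
        v_new_query := v_new_query ||
          CASE WHEN v_new_query IS NULL THEN '' ELSE ' OR ' END ||
          'SYN(' || r.object_name || ', thes)';
      END LOOP;
      dbms_output.put_line(v_new_query);
    END;
    /
    ```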

  • BPC 7M SP4 EVDRE missing rows - Error is "1004-Selection is too large"

    Hello,
    On a customer who installed BPC 7 Ms SP4 I have on client Exception log the error:
    ===================[System Error Tracing]=====================
    [System Name]   : BPC_ExcelAddin
    [Job Name]         : clsExpand::applyDataRangeFormula
    [DateTime]          : 2009-07-17 09:44:13
    [Exception]
           Detail : {1004-Selection is too large}
    ===================[System Error Tracing End ]=====================
    When I see this error, there is an EVDRE input schedule expansion in which some of the rows are missing their row descriptions.
    That is, there is a gap of missing row header formulas starting from the second row to somewhere in the middle of the report.
    If I reduce the number of the resulting rows for the same EVDRE input schedule results are ok.
    Do you know some setting to fix this?
    I tried increasing the Maximum Expansion Limit for rows and columns in Workbook Options without success.
    Thank you.

    Hi all,
    I have the same problem with 7.0MS SP07. With 59 expanded members it works. With 60 expanded members it fails.
    Mihaela, could you explain us what is the purpose of the parameters you talk about?
    thanks,
    Romuald

  • Workbook Options in EVDRE on BPC10 MS

    Hi,
    We have an EVDRE that was built in v7 but has been setup to run successfully in v10.  The issue is that the EVDRE returns over 10K rows on one tab, and 100 rows on the second tab (multiple EVDRE tabs).  When we change the workbook options to have a max expansion limit of 15000, the report works, yet takes a very very long time (over 30 minutes)... We wanted to try and see if we can set these options on each tab separately, so it doesn't think it needs 15K rows on each tab of the report.
    Any thoughts/suggestions? 
    One workaround we have is to do an expand worksheet on the first EVDRE tab (since it's used for vlookups on the second tab) and then expand worksheet on the second tab... we've found that when we expand workbook, this is where we run into the huge performance hit and delays.
    Thanks!

    Hello,
    here are some explanations:
    1. DumpDataCache
    The content of the data cache is written in the log file EvDre_log.txt
    2. GroupExpansion
    See the link below
    3. PctInput
    Enforce a different percentage of input data to trigger SQL queries (default is 20%)
    4. QueryEngine
    Manual (or blank for Automatic)
    5. QueryType
    NEXJ,TUPLE (or blank for Automatic)
    6. QueryViewName
    Use a user-defined view for querying SQL data
    7. ShowComments
    Add an Excel comment in any DataRange cell with a formula, if the value retrieved from the database is different from the
    one displayed by the formula
    8. SQLOnly
    Force the query engine to only issue SQL queries
    You can find this and more info here:
    https://websmp101.sap-ag.de/~form/sapnet?_SHORTKEY=01200252310000085368&_SCENARIO=01100035870000000202&_OBJECT=011000358700000050972009E
    Hope this info will help you,
    Dzmitry

  • SAXParser Exception - NEED HELP IT'S URGENT

    Hi, I'm stuck on this. I have a program that runs via a script called runPatito.sh,
    but every time I run it, it throws the following error:
    AxisFault
    faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException
    faultSubcode:
    faultString: org.xml.sax.SAXParseException: Parser has reached the entity expansion limit "64,000" set by the Application.
    faultActor:
    faultNode:
    faultDetail:
    {http://xml.apache.org/axis/}stackTrace: AxisFault
    faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException
    faultSubcode:
    faultString: org.xml.sax.SAXParseException: Parser has reached the entity expansion limit "64,000" set by the Application.
    faultActor:
    faultNode:
    faultDetail:
    org.xml.sax.SAXParseException: Parser has reached the entity expansion limit "64,000" set by the Application.
    at org.apache.axis.message.SOAPFaultBuilder.createFault(SOAPFaultBuilder.java:251)
    at org.apache.axis.message.SOAPFaultBuilder.endElement(SOAPFaultBuilder.java:168)
    at org.apache.axis.encoding.DeserializationContextImpl.endElement(DeserializationContextImpl.java:1015)
    at org.apache.xerces.parsers.AbstractSAXParser.endElement(Unknown Source)
    at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanEndElement(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
    Now, I've been searching and found that I need to increase the system property "entityExpansionLimit", but unfortunately I haven't been able to do it...
    I found that I need to run my program with java -DentityExpansionLimit=1000000 and "everything else", as it says in all the forums...
    The script that I'm using has this:
    #!/bin/sh
    java -cp patitoApp-patito-client com.patitoC.patito.PatitoClient cold.xml hot.xml
    My problem here is I don't know where to add that line (-DentityExpansionLimit=1000000) in order for this to work properly.
    I would appreciate some help here...

    Hi, before explaining other things I wanted to thank you all for replying to this post and for trying to help me...
    There are two scripts involved in the process, one is the server and one is the client. Both of them are running with the modification suggested, but it does not make any difference...
    I still get the SAXParser exception with the 64,000 limit.
    I even modified the classes, because I'm using DocumentBuilderFactory, and I set the entityExpansionLimit as a property of the created object, as I found it...
    Here's the code of those lines:
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setAttribute("http://apache.org/xml/properties/entity-expansion-limit", new Integer("1000000"));
    factory.setAttribute("http://apache.org/xml/properties/elementAttributeLimit", new Integer(20));
    I think I need some more help regarding this problem...
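    For what it's worth, the usual gotcha with this flag is its position: a JVM system property (-D...) has to come before the main class name, because anything after the class name is passed to main() as an ordinary argument. A sketch of the corrected runPatito.sh, reusing the classpath and class name from the post:

    ```shell
    #!/bin/sh
    # System properties (-D...) must precede the main class; arguments
    # after the class name go to main() instead of the JVM.
    java -DentityExpansionLimit=1000000 \
         -cp patitoApp-patito-client \
         com.patitoC.patito.PatitoClient cold.xml hot.xml
    ```

    The same rule applies to the server-side script if it parses XML too.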

  • Problems with EJB Application with many CMP's

    I had been working with a Sun engineer on a major problem with a large CMP module containing over 120 CMP beans with many relationships. It was filed as a bug, and they recently released Update 2. I installed this update and can now build our complete application in Sun ONE Studio 5.
    However, there still seems to be a problem with the application module. If we create the application and add our session module, CMP module and web module, there are no problems. I can create the ear file and deploy it to S1AS7. But if I exit the IDE, the next time I return the application shows an error (red error indicator) and I cannot expand it to see the modules. The same problem existed in S1S4, but it was not a big deal there because it allowed you to add the modules back in, save, and export the ear. Now in S1S5 you cannot do a single thing with this module; even the menu items to delete it are not available. The only thing we can do is create the application again and add the modules, etc. Also, right-clicking on the application and selecting "view error information" does nothing, so I cannot even see the error that is causing this trouble.
    We know it is something with the application and the CMP module, because if we add just the web module and session module, save, exit and return, the application is fine. It is only when we add the CMP module that things are broken after saving and returning.
    Has anyone else seen this problem with the application?

    I have found the error. In the ide log it showed that the entity expansion limit was reached for the DOM parser. I added this to my ide.cfg and all works well now.
    -J-DentityExpansionLimit=128000
    Apparently the limit is 64000, and with the very large modules it was just not enough.

  • Limit number of rows from wildcard expansion- DRG-51030

    We use a CONTEXT index in 11g to search on a text DB column, "Name".
    This is used in a UI to show autosuggest list of 25 matching names.
    When the end user types an 'a' we want to show a list of the first 25 names that contain an 'a'.
    We hit the issue of too many matches in the wildcard expansion query:
    DRG-51030: wildcard query expansion resulted in too many terms
    This is a frequent use case when the user types just 1 character ('a' will easily match over 50K names in our case).
    Is there a way to make the wildcard expansion query only return the first 25 rows?
    We never show more than 25 names in our UI - so we would like the expansion query to also return max of 25 rows.
    Our query is:
    SELECT ResEO.DISPLAY_NAME,
    ResEO.RESOURCE_ID,
    ResEO.EMAIL
    FROM RESOURCE_VL ResEO
    WHERE CONTAINS (ResEO.DISPLAY_NAME , '%' || :BindName || '%' )>0
    Also,
    Is there a way to use CTXCAT type of index and achieve this (expansion query limit of 25)?
    We are considering switching to CTXCAT index based on documentation that recommends this type of an index for better performance.

    Your best bet may be to look up the words directly in the $I token table.
    If your index is called NAME_INDEX you could do:
    select /*+ FIRST_ROWS(25) */ token_text from
      (  select token_text
         from dr$name_index$i
         where token_text like 'A%' )
    where rownum < 26;
    That should be pretty quick.
    However, if you really want to do %A% - any word which has an A in it - it's not going to be so good, because this will prevent the index being used on the $I table - so it's going to do a full table scan. In this case you really need to think a bit harder about what you're trying to achieve and why. Does it really make any sense to return 25 names which happen to have an A in them? Why not wait until the user has typed a few more characters - 3 perhaps? Or use my technique for one or two letters, then switch over to yours at three characters (or more).
    A couple of notes:
    - Officially, accessing the $I table is not supported, in that it could change in some future version, though it's pretty unlikely.
    - I trust you're using the SUBSTRING_INDEX option if you're doing double truncated searches - a wild card at the beginning and end. If not, your performance is going to be pretty poor.
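    For reference, the SUBSTRING_INDEX attribute mentioned above is a BASIC_WORDLIST preference attribute; a sketch with placeholder preference and index names (the index must be created, or rebuilt, with the wordlist preference for it to take effect):

    ```sql
    -- Sketch: enable substring indexing so left- and double-truncated
    -- wildcards ('%A', '%A%') can use the index. MY_WORDLIST and
    -- NAME_INDEX are placeholder names.
    exec ctx_ddl.create_preference('MY_WORDLIST', 'BASIC_WORDLIST');
    exec ctx_ddl.set_attribute('MY_WORDLIST', 'SUBSTRING_INDEX', 'TRUE');
    -- The index must be (re)built with the wordlist preference:
    -- create index NAME_INDEX on resource_vl(display_name)
    --   indextype is ctxsys.context
    --   parameters ('wordlist MY_WORDLIST');
    ```

    Note that substring indexing grows the index and slows DML, so it is a trade-off rather than a free win.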

  • Hitting the 32k size limit with Keyword Expansion in packages

    Hi!
    I am hitting the 32k size limit with Keyword Expansion (KE). It is hardcoded in the procedure jr_keyword.analyze_mlt.
    Are there any plans to get rid of this limit, so package bodies with size > 32000 bytes can be expanded?

    Well, I am making progress. With a combination of utl_tcp.get_line() to trap the header and utl_tcp.get_text() to get the data in 32K chunks, which I then put into a CLOB or a VARRAY of VARCHAR2, I can get as much data as is sent.
    The problem now is speed.
    It takes over 60 seconds to get 160K (5 32K chunks) of data when I put it into the VARRAY of VARCHAR2, and it takes even longer if I use dbms_lob.write() and dbms_lob.writeappend() to store the data.
    Am I doing something wrong? Is there another way?
    Thank You for any Help.
    Shannon

  • PDF file size expansion problem

    A colleague sends me a 4 MB PDF file from his Dell.  I receive it on my Macbook Pro and save it; it's now a 7.6 MB file.  I write a new one line email, and attach the PDF to my email; it now shows up as a 9.8 MB attachment.  When I send; the email size shows up as 10.1 MB file, which blows the Exchange Server limit (or so I believe; the error suggests a file size limit issue, but I've not yet verified the exact file size limit of the Exchange Server service we use).  I expect some modest expansion/differences, but 2.5 MB every time I just look at it doesn't make sense.
    When I duplicate this procedure with my Gateway PC, (same file, email, exchange server etc.) the file size remains constant at 4 MB. 
    I just noticed the Gateway has Adobe Reader and the Macbook Pro does not. (1) Could the way OS X is treating the PDF be causing this? (2) Would Adobe Reader (free or full version) solve this? And (3) what, technically, is happening here?
    Help and understanding is very much appreciated. 
    Using OSX 10.6.7

    Adobe Reader is free. Don't confuse it with Adobe Acrobat. Get it at http://get.adobe.com/reader/

  • Firewire pci card in B&W Expansion slot #1?

    I am using my Blue & White G3 as a server, including a role as backup server to an external firewire hard drive which is connected to a firewire PCI card in expansion slot 3.
    The first expansion slot on the B&W runs at 66mhz, the other three at 33mhz. Since I run the server headless (access through Remote Desktop), I do not need to have the video card in slot one but could put it in another, slower, slot.
    My questions are:
    1.) Can I put the firewire PCI card in slot one and
    2.) Will I get higher throughput?
    Thanks,
    Rich
    G5 Dual 2.0   Mac OS X (10.4.3)  

    FireWire 400 is theoretically capable of 400 megabits/sec. At 8 bits per byte, that's a theoretical maximum throughput of 50 megabytes a second.
    A 10,000 RPM drive can source a single burst of data off the platters at about 50 megabytes a second. Write bursts can be slightly faster, because the data goes into the cache, but you have to write it to the platters eventually.
    The 33 MHz slots are transferring multiple bytes at once, so their effective speed should be quite a bit higher. I have seen a rumor here that bugs in the firmware limit the 33 MHz slots to a mere 53 megabytes/second.
    My assessment is: everything is topping out around 50 megabytes a second. Unless you are using a 15,000 RPM drive in the FireWire enclosure, you do not need to do anything extraordinary. Connect it in the usual way and it should do fine. The extra theoretical speed from the 66 MHz slot is unlikely to be seen in daily use.

  • How to limit the number of search results returned by oracle text

    Hello All,
    I am running an oracle text search which returned the following error to my java program.
    ORA-20000: Oracle Text error:
    DRG-51030: wildcard query expansion resulted in too many terms
    #### ORA-29902: error in executing ODCIIndexStart() routine
    ORA-20000: Oracle Text error:
    DRG-51030: wildcard query expansion resulted in too many terms
    java.sql.SQLException: ORA-29902: error in executing ODCIIndexStart() routine
    ORA-20000: Oracle Text error:
    DRG-51030: wildcard query expansion resulted in too many terms
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:169)
    When I looked on the net, one suggestion was to narrow the wildcard query, which I cannot do in my search page. Hence I am left with the only alternative of limiting the number of results returned by the Oracle Text search query.
    Please let me know how to limit the number of search results returned by oracle text so that this error can be avoided.
    Thanks in advance
    krk

    Hi,
    If not set explicitly, the default value for WILDCARD_MAXTERMS is 5000. This is set as a wordlist preference. This means that if your wildcard query matches more than 5000 terms the message is returned. Exceeding that would give the user a lot to sift through.
    My suggestion: trap the error and return a meaningful message to the user indicating that the search needs to be refined. Although it is possible to increase the number of terms before hitting maxterms (increase wildcard_maxterms preference value for your wordlist), 5000 records is generally too much for a user to deal with anyway. The search is not a good one since it is not restricting rows adequately. If it happens frequently, get the query log and see the terms that are being searched that generate this message. It may be necessary to add one or more words to a stoplist if they are too generic.
    Example: The word mortgage might be a great search term for a local business directory. It might be a terrible search term for a national directory of mortgage lenders though (since 99% of them have the term in their name). In the case of the national directory, that term would be a candidate for inclusion in the stoplist.
    Also remember that full terms do not need a wildcard. Searching for "car%" is not necessary and can give you the error you mentioned. A search for "car" will yield the results you need even if it is only part of a bigger sentence, because everything is based on the token. You may already know all of this, but without an example query I figured I'd mention it to be sure.
    As for limiting the results - the best way to do that is to allow the user to limit the results through their query. This will ensure accurate and more meaningful results for them.
    -Ron
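    A minimal sketch of Ron's trap-the-error suggestion, with illustrative table and message text (the DRG-51030 text is searched in the full error stack, since the top-level error is usually ORA-29902):

    ```sql
    -- Sketch: catch DRG-51030 and return a friendly message instead of
    -- letting it bubble up to the application. Table and message are
    -- illustrative, not from the original post.
    DECLARE
      v_count NUMBER;
    BEGIN
      SELECT COUNT(*) INTO v_count
      FROM doctable
      WHERE CONTAINS(text, 'a%') > 0;
      dbms_output.put_line('Matches: ' || v_count);
    EXCEPTION
      WHEN OTHERS THEN
        -- DRG-51030 is nested under ORA-29902, so search the whole stack
        IF INSTR(dbms_utility.format_error_stack, 'DRG-51030') > 0 THEN
          dbms_output.put_line('Your search matched too many terms; please refine it.');
        ELSE
          RAISE;
        END IF;
    END;
    /
    ```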

  • Error - DRG-51030: wildcard query expansion resulted in too many terms

    Hi All,
    My searches against a 100 million company names table on org names often result in the following error:
    DRG-51030: wildcard query expansion resulted in too many terms
    A sample query would be:
    select v.* --xref.external_ref_party_id,v.*
    from xxx_org_search_x_v v
    where 1 = 1
    and state_province = 'PA'
    and country = 'US'
    and city = 'BRYN MAWR'
    and catsearch(org_name,'BRYN MAWR AUTO*','CITY=''BRYN MAWR''' ) > 0
    I understand that this is caused by the presence of the word Auto, to which we append a *. (If I remove the auto, the search works fine.)
    My question is: is there a way to limit the query expansion to only, say, 100 results returned from the index?

    Thanks for the reply. This is how the preferences are set:
    exec ctx_ddl.create_preference('STEM_FUZZY_PREF', 'BASIC_WORDLIST');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF','FUZZY_MATCH','AUTO');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF','FUZZY_SCORE','60');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF','FUZZY_NUMRESULTS','100');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF','STEMMER','AUTO');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF', 'wildcard_maxterms',15000) ;
    exec ctx_ddl.create_preference('LEXTER_PREF', 'BASIC_LEXER');
    exec ctx_ddl.set_attribute('LEXTER_PREF','index_stems', 'ENGLISH');
    exec ctx_ddl.set_attribute('LEXTER_PREF','skipjoins',',''."+-/&');
    exec ctx_ddl.create_preference('xxx_EXT_REF_SEARCH_CTX_PREF', 'BASIC_STORAGE');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'I_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'K_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'N_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'I_INDEX_CLAUSE','tablespace ICV_TS_CTX_IDX Compress 2');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'P_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX ');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'I_ROWID_INDEX_CLAUSE','tablespace ICV_TS_CTX_IDX ');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'R_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX LOB(DATA) STORE AS (CACHE) ');
    exec ctx_ddl.create_index_set('xxx_m_iset');
    exec ctx_ddl.add_index('xxx_m_iset','city, country');
    exec ctx_ddl.add_index('xxx_m_iset','postal_code, country');
    Users will always use city or postal code when searching for a name. When I run this query -
    SELECT dr$token
    FROM DR$XXX_EXT_REF_SEARCH_CTX_I1$I
    where dr$token like 'AUTO%'
    ORDER BY dr$token desc
    I get more than 1M rows.
    Is there a way to include and search for the city name along with the org name?
    Thanks again..

  • Basics:  Best practise when using a thesaurus?

    Hi all,
    I currently use a function which returns info for a search on our website; the function is used by the Java code to return hits:
    CREATE OR REPLACE FUNCTION fn_product_search(v_search_string IN VARCHAR2)
    RETURN TYPES.ref_cursor
    AS
    wildcard_search_string VARCHAR2(100);
    search_results TYPES.ref_cursor;
    BEGIN
    OPEN search_results FOR
    SELECT
              DCS_PRODUCT.product_id,
              DCS_CATEGORY.category_id,
              hazardous,
              direct_delivery,
              standard_delivery,
              DCS_CATEGORY.short_name,
              priority
              FROM
              DCS_CATEGORY,
              DCS_PRODUCT,
              SCS_CAT_CHLDPRD
              WHERE
              NOT DCS_PRODUCT.display_on_web = 'HIDE'
              AND ( contains(DCS_PRODUCT.search_terms, v_search_string, 0) > 0)
              AND SCS_CAT_CHLDPRD.child_prd_id = DCS_PRODUCT.product_id
              AND DCS_CATEGORY.category_id = SCS_CAT_CHLDPRD.category_id
              ORDER BY SCORE(0) DESC,
              SCS_CAT_CHLDPRD.priority DESC,
              DCS_PRODUCT.display_name;
    RETURN search_results;
    END;
    I want to develop this function so that it will use a thesaurus in case of no data found.
    I have been trying to find any documentation that might discuss 'best practice' for this type of query.
    I am not sure if I should just include the SYN call in this code directly, or whether the use of the thesaurus should be restricted so that it is only used when the existing function does not return a hit for the search.
    I want to keep overheads and response times to an absolute minimum.
    Does anyone know the best logic to use for this?

    Hi.
    You want a lot ("... absolute minimum for response time ...") from Oracle Text on 9.2.x.x.
    First, text queries on 9.2 are much slower than on 10.x. Second, it is a bad idea to try to call query expansion functions directly from the application.
    From my own experience, the best practice with thesaurus usage is:
    1. Write a good search string parser which adds the thesaurus expansion functions (like NT, BT, RT, SYN...) directly into the result string passed through to the DRG engine.
    2. Use efficient text queries: do not use direct or indirect sorts (the DOMAIN_INDEX_NO_SORT hint can help).
    3. Finally, write efficient application code. The code you show is inefficient.
    Hope this helps.
    WBR Yuri
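    One hedged way to combine the original poster's fallback idea with Yuri's point 1 is to run the plain query first and only rebuild the search string with SYN() when it returns nothing. A sketch against the DCS_PRODUCT table from the question; the thesaurus name thes is a placeholder, and REGEXP_COUNT assumes 11g or later:

    ```sql
    -- Sketch: plain search first; on zero hits, rewrite each word of the
    -- search string as SYN(word, thes) and retry. The thesaurus name and
    -- the word-by-word split are illustrative.
    CREATE OR REPLACE FUNCTION fn_search_with_fallback(p_terms IN VARCHAR2)
      RETURN NUMBER
    AS
      v_hits  NUMBER;
      v_query VARCHAR2(4000);
    BEGIN
      SELECT COUNT(*) INTO v_hits
      FROM dcs_product
      WHERE CONTAINS(search_terms, p_terms) > 0;
      IF v_hits > 0 THEN
        RETURN v_hits;
      END IF;
      -- Fallback: wrap each space-separated word in SYN()
      FOR i IN 1 .. REGEXP_COUNT(p_terms, '\S+') LOOP
        v_query := v_query ||
          CASE WHEN v_query IS NULL THEN '' ELSE ' AND ' END ||
          'SYN(' || REGEXP_SUBSTR(p_terms, '\S+', 1, i) || ', thes)';
      END LOOP;
      SELECT COUNT(*) INTO v_hits
      FROM dcs_product
      WHERE CONTAINS(search_terms, v_query) > 0;
      RETURN v_hits;
    END;
    /
    ```

    Doing the rewrite once in PL/SQL keeps the fallback cost to a single extra query, which fits the poster's goal of minimal overhead on the common (hit-found) path.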
