Invoice value exceeds PO limit

Hi Gurus,
I have an issue where invoice values are higher than the PO values. My client wants the system to be restricted so that an invoice cannot be posted with a value exceeding the PO value.
For example, the PO value is 10,000 but the invoice is posted for 10,500.
Any valuable suggestions?
Kind regards,
Sunitha

Hi,
If you only want the invoice to be blocked for payment, then maintain the tolerance limits:
Go to Customizing: SPRO -> MM -> LIV -> Invoice Block -> Set Tolerance Limits
If you need an error instead:
Go to Customizing: SPRO -> MM -> LIV -> Define Attributes of System Messages
and maintain the following messages as errors:
- M8-082 "Price too high (tolerance limit of & exceeded)"
- M8-083 "Price too high (tolerance limit of & % exceeded)"
- M8-084 "Price too low (below tolerance limit of &)"
- M8-085 "Price too low (below tolerance limit of & %)"
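As a quick cross-check, the limits that are already maintained can be browsed in SE16. The sketch below is only illustrative: it assumes the standard LIV tolerance-limit table T169G, the price-variance tolerance keys PP and PE, and an example company code, so verify the names in your own system.

    -- Hedged sketch: show the tolerance limits maintained for one company code
    SELECT *
    FROM   t169g
    WHERE  bukrs = '1000'           -- example company code
      AND  tolsl IN ('PP', 'PE');   -- price-variance tolerance keys (assumed)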
Regards,
Gaurav

Similar Messages

  • Email Alert in Credit Management when Invoice Dispatch Exceeds a Particular Limit

    Dear All,
    I need an email alert from the SAP system for customer credit, triggered when dispatch exceeds a particular amount.
    The email should go to a particular person who has been configured in SAP.
    Could you please let me know the necessary configuration and customisation needed for this?
    I have already set the credit limit for the customer in FD32. Please let me know the workflow transactions and where the amount needs to be set.
    Regards,
    Parag

    Hi Parag,
    Maintain output type KRML with transmission medium email. Also maintain the credit representatives and the credit manager, and maintain a condition record for output type KRML at order level only.
    Thanks and Regards
    Srinath

  • Customer Has Exceeded Credit Limit Question

    Hi
    At the moment, when we raise an order for a customer with a credit limit, a warning message appears saying that the customer has exceeded his credit limit.
    This information is a bit misleading if the customer has indeed exceeded his limit, but only because the outstanding invoices are not due yet (e.g. the due date may be 31st May 2010).
    Is there a way to combine the two pieces of information and inform the SAP user when the payment terms for invoices have been exceeded, not just the credit limit?
    Thank you.
    MB

    Hi Matthew,
    These are two different things:
    1. The credit limit is only for the alert, or you can block further transactions.
    2. What you describe next is the ageing report for outstanding payments.
    Since the two serve different purposes, you cannot directly combine them, but if you need a combined view you can use a reporting tool.
    Thanks
    Ashish

  • Want to pull the POs whose value exceeds a certain limit

    Hi SAP Gurus,
    I want to pull a report of POs whose value exceeds a certain limit (for example, all POs with a value of more than 5 million USD over a period of one year).
    Thanks and Regards,
    SHARAN.

    Hi,
    Try table EKBE with fields WRBTR (invoice document amount), DMBTR (line-item amount in local currency for the GR quantity) and EBELN (purchase order). This will give you the details. Do run a test case with a tolerance at invoice level, post an additional amount for the invoice, and check how the table fields are updated.
    You can also browse the table in transaction SE16 (table name EKBE); a rough query along these lines is sketched after this reply.
    Alternatively, you can use the standard report ME2N with the selection parameter GUTSCHRIFT (invoice exists); it will list all POs with invoices, and you can apply whatever conditions you need there.
    I hope it helps.
    Best Regards,
    Rahul.
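    A minimal SQL sketch of the check Rahul describes, assuming the standard PO history table EKBE where VGABE = '2' marks invoice receipts, BUDAT is the posting date and WRBTR the amount in document currency; the 5,000,000 threshold and the one-year window come from Sharan's example, and in practice you would run this via SE16/SQVI or an ABAP report rather than raw SQL:

        -- Hedged sketch: POs whose total invoiced value over the last year exceeds 5 million
        -- (credit memos and currency conversion are ignored for simplicity)
        SELECT ebeln,
               SUM(wrbtr) AS total_invoiced
        FROM   ekbe
        WHERE  vgabe = '2'                                             -- invoice receipt rows
          AND  budat >= TO_CHAR(ADD_MONTHS(SYSDATE, -12), 'YYYYMMDD')  -- posting date within the last year
        GROUP  BY ebeln
        HAVING SUM(wrbtr) > 5000000;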

  • Cookie - Bad Request - Size of a request header field exceeds server limit -

    We are on CQ 5.5. We see this error intermittently. What is the best way to fix it? The cookie size seems to be adding to the issue.
    Bad Request
    Your browser sent a request that this server could not understand.
    Size of a request header field exceeds server limit.
    Cookie: cq-mrss=path%3D%252Fcontent%252Fdam%26p.limit%3D-1%26mainasset%3Dtrue%26type%3Ddam%3AAsse t; __unam=acfbce4-13b8ffd6084-6070cfe6-4; __utma=16528299.1850197993.1355330446.1361568697.1362109625.3; __utmz=16528299.1355330446.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); REM_ME=1004; SessionPersistence-author-lx_qa_author2=CLIENTCONTEXT%3A%3DvisitorId%3Danonymous%2Cvisito rId_xss%3Danonymous%7CPROFILEDATA%3A%3DauthorizableId%3Danonymous%2CformattedName%3DAnonym ous%20Surfer%2Cpath%3D%2Fhome%2Fusers%2Fa%2Fanonymous%2Cavatar%3D%2Fetc%2Fdesigns%2Fdefaul t%2Fimages%2Fcollab%2Favatar.png%2Cage%3D%2Cage_xss%3D%7CTAGCLOUD%3A%3Dtopic%3Aworkflow%3D 14%2Cindustry%3Aprocess_management%3D2%2Ctopic%3Aprocess_mining%3D3%2Ctopic%3Aprocess_docu mentation%3D1%2Ctopic%3Aintelligent_capture%3D5%2Cindustry%3Acapture%3D5%2Ctopic%3Adocumen t_imaging%3D2%2Ctopic%3Adistributed_intelligent_capture%3D2%2Ctopic%3Adocument_output_mana gement%3D4%2Cindustry%3Acontent_management%3D14%2Cindustry%3Asoftware_solutions_hardware%3 D4%2Cindustry%3Adevice_management%3D2%2Ctopic%3Ahelp_desk_services%3D2%2Cindustry%3Aintera ct%3D15%2Ctopic%3Asecure_content_monitor%3D2%2Ctopic%3Aelectronic_forms%3D2%2Ctopic%3Ainte lligent_forms%3D2%2Ctopic%3Adocument_accounting%3D2%2Ctopic%3Aerp_output_management%3D2%2C topic%3Aprint_release%3D2%2Cindustry%3Aoutput_management%3D4%2Ctopic%3Aerp_printing%3D4%2C topic%3Aenterprise_search%3D4%2Ctopic%3Amicrosoft_sharepoint%3D6%2Ctopic%3Adocument_filter s%3D4%2Cindustry%3Asearch%3D4%2Ctopic%3Ahuman_services_case_management%3D2%2Cindustry%3Aca se_management%3D2%2Cindustry%3Aimprove_business_processes%3D6%2Ctopic%3Abusiness_process_m odeling%3D1%2Ctopic%3Alawson%3D1%2Ctopic%3Aapplication_integration%3D8%2Cindustry%3Asoluti on%3D4%2Ctopic%3Amicrosoft_dynamics_crm%3D2%2Cindustry%3Ahealthcare%3D13%2Cindustry%3Areta il%3D8%2Cindustry%3Abanking%3D3%2Cindustry%3Aincrease_efficiency%3D7%2Cindustry%3Agovernme nt%3D8%2Ctopic%3Amicrosoft_outlook%3D2%2Ctopic%3Aesri%3D2%2Ctopic%3Ajd_edwards%3D2%2Ctopic %3Asap%3D1%2Cindustry%3Adrive_business_growth%3D1%2Cindustry%3Abusiness_challenges%3D6%2Ci ndustry%3Aconnect_distributed_workforce%3D1%2Ctype%3Alanding_page%3D2%2Ctopic%3Aconsulting _services%3D2%2Ctopic%3Aretail_pharmacy%3D2%2Cindustry%3Aindustry_solutions%3D5%2Ctopic%3A health_information_management%3D3%2Ctopic%3Apatient_scheduling%3D3%2Ctopic%3Aclinical_depa rtment_solutions%3D3%2Ctopic%3Aclinical_hit_integration%3D3%2Ctopic%3Apatient_admissions_r egistration%3D3%2Ctopic%3Ahealthcare_forms_management%3D3%2Ctopic%3Apatient_access%3D3%2Ct opic%3Aenterprise_print_management_software%3D2%2Ctopic%3Aprint_queue_management%3D2%2Ctop ic%3Aadvanced_print_management%3D2%2Ctopic%3Aemployee_onboarding%3D3%2Ctopic%3Ahuman_resou rces%3D1%2Cindustry%3Ahuman_resources%3D3%2Ctopic%3Aemployee_recruitment%3D1%2Cindustry%3A manufacturing%3D2%2Ctopic%3Aplatform_integration%3D1%2Ctopic%3Awealth_management%3D2%2Cind ustry%3Afinancial_services%3D2%2Ctopic%3Aaccount_opening%3D2%2Ctopic%3Acompliance%3D1%2Cin dustry%3Acompliance%3D1%2Ctopic%3Abusiness_operations_solutions_for_banking%3D2%2Ctopic%3A retail_delivery%3D1%2Ctopic%3Aloan_processing%3D1%2Ctopic%3Aon_demand_negotiable_documents %3D1%2Ctopic%3Anew_account_openings%3D1%2Ctopic%3Aon_demand_forms_customer_communications% 3D1%2Cindustry%3Ainsurance%3D1%2Ctopic%3Amicr_printing%3D1%2Ctopic%3Abank_branch_capture%3 D1%2Ctopic%3Aagency_capture%3D1%7C; ys-cq-damadmin-tree=o%3Awidth%3Dn%253A240%5EselectedPath%3Ds%253A/content/dam; 
ys-cq-damadmin-grid-assets=o%3Acolumns%3Da%253Ao%25253Aid%25253Ds%2525253Anumberer%25255E width%25253Dn%2525253A23%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253At humbnail%25255Ewidth%25253Dn%2525253A45%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25 253Ds%2525253Atitle%25255Ewidth%25253Dn%2525253A78%25255Ehidden%25253Db%2525253A1%25255Eso rtable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Aname%25255Ewidth%25253Dn%2525253A3 37%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Apublished%25255Ewidth%2 5253Dn%2525253A37%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Amodified %25255Ewidth%25253Dn%2525253A78%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%25 25253Ascene7Status%25255Ewidth%25253Dn%2525253A78%25255Ehidden%25253Db%2525253A1%25255Esor table%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Astatus%25255Ewidth%25253Dn%2525253A 71%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Dn%2525253A8%25255Ewidth%25253Dn%2 525253A78%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Aworkflow%25255Ew idth%25253Dn%2525253A78%25255Ehidden%25253Db%2525253A1%25255Esortable%25253Db%2525253A1%25 5Eo%25253Aid%25253Ds%2525253Awidth%25255Ewidth%25253Dn%2525253A37%25255Esortable%25253Db%2 525253A1%255Eo%25253Aid%25253Ds%2525253Aheight%25255Ewidth%25253Dn%2525253A37%25255Esortab le%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Asize%25255Ewidth%25253Dn%2525253A37%25 255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Areferences%25255Ewidth%25253 Dn%2525253A199%25255Esortable%25253Db%2525253A1%5Esort%3Do%253Afield%253Ds%25253Alabel%255 Edirection%253Ds%25253AASC; amlbcookie=04; ObLK=0x82abacf3a5e3b1e2|0x1cf34305ac210c7e9b2b07e3725392e2; iPlanetDirectoryPro=AQIC5wM2LY4Sfcw0UQ2MST5NlqDAsUi2dscer0wO7VMy9pE.*AAJTSQACMDYAAlMxAAIw NA..*; renderid=rend01; login-token=c9c0d027-c5f9-4e5a-9a90-09d1cf21cfd2%3a0279e369-1689-433c-80ef-d8411040efe5_6 15c2fd1eba8fd42%3acrx.default; ys-cq-siteadmin-grid-pages=o%3Acolumns%3Da%253Ao%25253Aid%25253Ds%2525253Anumberer%25255E width%25253Dn%2525253A23%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253At humbnail%25255Ewidth%25253Dn%2525253A50%25255Ehidden%25253Db%2525253A1%25255Esortable%2525 3Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Atitle%25255Ewidth%25253Dn%2525253A386%25255Es ortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Aname%25255Ewidth%25253Dn%2525253A 148%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Apublished%25255Ewidth% 25253Dn%2525253A25%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Amodifie d%25255Ewidth%25253Dn%2525253A86%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2 525253Ascene7Status%25255Ewidth%25253Dn%2525253A86%25255Ehidden%25253Db%2525253A1%25255Eso rtable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Astatus%25255Ewidth%25253Dn%2525253 A76%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Aimpressions%25255Ewidt h%25253Dn%2525253A86%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Atempl ate%25255Ewidth%25253Dn%2525253A86%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds %2525253Aworkflow%25255Ewidth%25253Dn%2525253A86%25255Ehidden%25253Db%2525253A1%25255Esort able%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2525253Alocked%25255Ewidth%25253Dn%2525253A8 6%25255Ehidden%25253Db%2525253A1%25255Esortable%25253Db%2525253A1%255Eo%25253Aid%25253Ds%2 525253AliveCopyStatus%25255Ewidth%25253Dn%2525253A86%25255Ehidden%25253Db%2525253A1%25255E 
sortable%25253Db%2525253A1%5Esort%3Do%253Afield%253Ds%25253Atitle%255Edirection%253Ds%2525 3AASC; ys-cq-siteadmin-tree=o%3Awidth%3Dn%253A306%5EselectedPath%3Ds%253A/content/homesite/en-US /insights/video_unum-group-accelerates-workflows-with-solutions-; ys-cq-cf-clipboard=o%3Acollapsed%3Db%253A1; ys-cq-cf-tabpanel=o%3AactiveTab%3Ds%253AcfTab-Images-QueryBox; JSESSIONID=ad311ac3-7c24-4e62-ae8a-0ebacd8e8188; SessionPersistence-author-lx_qa_author1=CLIENTCONTEXT%3A%3DvisitorId%3Danonymous%2Cvisito rId_xss%3Danonymous%7CPROFILEDATA%3A%3DauthorizableId%3Danonymous%2CformattedName%3DAnonym ous%20Surfer%2Cpath%3D%2Fhome%2Fusers%2Fa%2Fanonymous%2Cavatar%3D%2Fetc%2Fdesigns%2Fdefaul t%2Fimages%2Fcollab%2Favatar.png%2Cage%3D%2Cage_xss%3D%7CGEOLOCATION%3A%3D%7CTAGCLOUD%3A%3 Dindustry%3Aconnect_distributed_workforce%3D1%2Cindustry%3Abusiness_challenges%3D1%2Cindus try%3Acontent_management%3D1%2Cindustry%3Ahealthcare%3D1%2Ctopic%3Afinance%3D1%2Ctopic%3Ap rocurement_processing%3D1%2Cindustry%3Afinancial_services%3D2%2Cindustry%3Ainsurance%3D2%2 Cindustry%3Aindustry_solutions%3D2%2Ctopic%3Aagency_capture%3D2%7C; s_cc=true; s_sq=lxmtest%3D%2526pid%253Dinsights%25253Avideo_unum-group-accelerates-workflows-with-so luti

    Hi EbodaWill,
    File a Daycare ticket for FP 2324, which lets you configure a larger request header size and avoid the Bad Request error, or ask for a package that improves client-side persistence and does not use cookies.
    Thanks,
    Sham

  • HT4863 I have an error message coming up when trying to send an email which says 'sending the message failed because you're exceeding the limit'. Can anyone help me to resolve this, please?

    I have an error message coming up when trying to send an email which says 'sending the message failed because you're exceeding the limit'. Can anyone help me to resolve this, please?

    Try reentering the password in your iCloud mail settings.

  • When I try to go to certain pages on PayPal, I get an error message which says 400 Bad Request: Size of request exceeds server limit. It didn't do this before the latest Firefox update. Can I revert to an older version?

    1. Using Windows XP Home.
    2. PayPal is the only website I have problems with in Firefox; it works fine in IE8.
    3. I can reach some of the desired pages through other links on PayPal, so it's not the pages themselves.
    4. The error message contains this:
    Cookie: KHcl0EuY7AKSMgfvHl7J5E7hPtK=STMZXVPEjOuzek-HGHcBJRjmRXIQgXpML8uy9fV13oPeEYcAUnBOhKFzvNJqcb-rsc2S6nlYOklSdX6P; cookie_check=yes; LANG=en_US%3bUS; INSIDE
    When I saw 'Cookie', I did a disk clean-up including temp files and cookies. No difference. I also did a System Restore, to no avail.
    5. The 400 message actually reads: Size of a request header field exceeds server limit.
    6. I talked to PayPal customer service and they have not heard of any Firefox conflicts.
    7. Windows Firewall and the Kaspersky firewall are turned ON, but this is the same configuration as before the Firefox update. P.S. I'll give up Firefox before I give up Kaspersky.

    This issue can be caused by corrupted cookies.
    Clear the cache and the cookies from sites that cause problems.
    * "Clear the Cache": Tools > Options > Advanced > Network > Offline Storage (Cache): "Clear Now"
    * "Remove the Cookies" from sites causing problems: Tools > Options > Privacy > Cookies: "Show Cookies"

  • I used the iTunes Match workaround and now want to start over, but I am being told I have exceeded the limit even though I completely cleared the Match cloud, mindful of the 1000-delete-at-a-time limit. What should I do?

    I have over 25k songs in iTunes. I originally tried the "create a second library" solution. I reduced my library to fewer than 25k songs and successfully ran iTunes Match. Then I discovered the limitations of the second-library solution regarding adding more music, managing playlists, and turning my main library back on, where I was greeted with all sorts of problems.
    So I created a blank library, ran iTunes Match, and it showed me all of my files in the cloud that had been matched or uploaded. I deleted all of the files, mindful of the 1000-file limit per delete. I have confirmed I have now deleted them all. An iTunes Match update says I have no files in the cloud. Perfect.
    I reduced my library again, this time using the superior method of simply changing the files I do not want to match to voice memos. That is done. I have fewer than 25k songs.
    I waited two days and ran Match again. Yikes. It did not let me match or upload anything, and every music file's cloud status just says "exceeded limit", despite the fact that my cloud is devoid of music and I have no access to any music via Match.
    I am in limbo where I have nothing in the cloud from Match, but I can't add anything because Match says I have exceeded my limit.
    I emailed customer support, and got back a useless email linking me to a very general article about iTunes match. I called customer service and they said this was something the iTunes store people would need to address, and they can only be contacted by email. Sigh . . . .
    Any thoughts?
    Thanks

    Have you confirmed that you successfully purged iTunes Match by also looking on an iOS device? If so, keep in mind that Apple's servers may be experiencing a heavy load right now. They just added about 19 countries to the service, and I've read a few accounts this morning suggesting that all is not running perfectly at the moment.

  • I can no longer access Bejeweled Blitz through my Facebook account. I get the message that says, "your browser sent a request that this server could not understand. Size of a request header field exceeds server limit". Help please.

    I can no longer access Bejeweled Blitz through Facebook. I get the message, "your browser sent a request that this server could not understand. Size of a request header field exceeds server limit". I can access Bejeweled through FB using my husband's log-in, which suggests to me that the problem is with my log-in. Help please.

    Contact FB or use another browser. 

  • Cannot retrieve my e-mail "browser sent request server could not understand. Size of request header field exceeds server limit"

    Upgraded to Firefox 5.0.1 yesterday. Now, after logging on to Firefox, which takes me to my Comcast page, when I try to get
    my e-mail I get this message: "your browser sent a request this server could not understand. Size of request header field exceeds server limit". Then it says something about cookies. I also tried to connect to other sites and get similar messages. Just to let you know, I am not a guru, and I am 80 years old, but I did not have this problem with the previous version. Question: why are the headers repeated? Could that be the problem?

    This issue can be caused by corrupted cookies.
    Clear the cache and the cookies from sites (e.g. comcast) that cause problems.
    "Clear the Cache":
    * Firefox > Preferences > Advanced > Network > Offline Storage (Cache): "Clear Now"
    "Remove Cookies" from sites causing problems:
    * Firefox > Preferences > Privacy > Cookies: "Show Cookies"

  • Maximum number of allowed pages in Pivot Table exceeded (Configured Limit)

    When trying to create a calculated item on a pivot-table page-section item, such as the Division dimension with the calculated item 'All' -> sum(*), and then Customer, again with the calculated item 'All' -> sum(*), it performs the first calculation (for Division) but on the next one it gives the error
    "Maximum number of allowed pages in Pivot Table exceeded (Configured Limit: 1000)."

    Try limiting the result set or raise the MaxVisiblePages setting.
    See: http://obiee101.blogspot.com/2008/02/obiee-controling-pivot-view-behavior.html
    Regards,
    John
    http://obiee101.blogspot.com

  • P6 Professional R8.3 - (Cannot move selected items because the result will exceed the limit of WBS tree maximum levels)

    Hi,
    Has anyone encountered this error "Cannot move selected items because the result will exceed the limit of WBS tree maximum levels"?
    Please advise if there is any workaround or solution for this: http://s8.postimg.org/bj900spcl/1_error.jpg
    Thanks.

    Thank you so much MichaelRidino!

  • ORA-02393 Exceeded Call Limit on CPU Usage

    I have created a profile and attached it to a user; in this example:
    CREATE PROFILE percall LIMIT
      CPU_PER_CALL 10
      IDLE_TIME 5;
    I have attached it to one user, USER1.
    When USER1 runs a SQL statement such as
    SELECT COUNT(*) FROM TABLE1 A WHERE A.EFFDT = (SELECT MAX(B.EFFDT) FROM TABLE1 B WHERE B.EMPLID = A.EMPLID AND B.EFFDT <= SYSDATE);
    I get the error I want to receive: ORA-02393 exceeded call limit on CPU usage.
    The SQL statement shows up in the table DBA_COMMON_AUDIT_TRAIL, but it is recorded as a success even though the user received the ORA-02393 error.
    What I want is a way for a DBA to report on those ORA-02393 errors. I don't see any entries in the log files, and I don't notice any errors recorded in the Oracle tables.
    I would like to be able to show the user (a week later, when they bring up the issue) what the SQL statement was and why it exceeded the CPU limit. Ideally the error would place the SQL statement in a table, or at least write it to an error log, so the statement that exceeded the CPU usage can be verified.
    Thank you
    Aaron

    Can you modify the procedure in which the SELECT resides?
    If so, trap and log the error there; a sketch of that approach follows.
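    A hedged PL/SQL sketch of that trap-and-log idea, assuming a hypothetical logging table SQL_ERROR_LOG and a wrapper procedure around the offending query; the exception is mapped to ORA-02393 via EXCEPTION_INIT and re-raised so the user still sees the original error:

        CREATE TABLE sql_error_log (
          logged_at  TIMESTAMP,
          username   VARCHAR2(128),
          error_text VARCHAR2(4000),
          sql_text   VARCHAR2(4000)
        );

        CREATE OR REPLACE PROCEDURE run_count_query IS
          e_cpu_limit EXCEPTION;
          PRAGMA EXCEPTION_INIT(e_cpu_limit, -2393);  -- ORA-02393: exceeded call limit on CPU usage
          v_stmt  VARCHAR2(4000) :=
            'SELECT COUNT(*) FROM table1 a WHERE a.effdt = ' ||
            '(SELECT MAX(b.effdt) FROM table1 b WHERE b.emplid = a.emplid AND b.effdt <= SYSDATE)';
          v_err   VARCHAR2(4000);
          v_count NUMBER;
        BEGIN
          EXECUTE IMMEDIATE v_stmt INTO v_count;
        EXCEPTION
          WHEN e_cpu_limit THEN
            v_err := SQLERRM;  -- SQLERRM cannot be referenced directly inside the INSERT
            INSERT INTO sql_error_log VALUES (SYSTIMESTAMP, USER, v_err, v_stmt);
            COMMIT;
            RAISE;             -- re-raise so the user still sees ORA-02393
        END;
        /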

  • Java.lang.OutOfMemoryError: Requested array size exceeds VM limit

    Hi!
    I have a problem and I do not know how to resolve it.
    I have an Oracle 11gR2 database in which I installed the Italian network.
    When I try to execute a shortest-path algorithm or a shortestPathAStar algorithm in a Java program, I get this error:
    [ConfigManager::loadConfig, INFO] Load config from specified inputstream.
    [oracle.spatial.network.NetworkMetadataImpl, DEBUG] History metadata not found for ROUTING.ITALIA_SPAZIO
    [LODNetworkAdaptorSDO::readMaximumLinkLevel, DEBUG] Query String: SELECT MAX(LINK_LEVEL) FROM ROUTING.ITALIA_SPAZIO_LINK$ WHERE LINK_LEVEL > -1
    *****Begin: Shortest Path with Multiple Link Levels
    *****Shortest Path Using Dijkstra
    [oracle.spatial.network.lod.LabelSettingAlgorithm, DEBUG] User data categories:
    [LODNetworkAdaptorSDO::isNetworkPartitioned, DEBUG] Query String: SELECT p.PARTITION_ID FROM ROUTING.ITA_SPAZIO_P_TABLE p WHERE p.LINK_LEVEL = ? AND ROWNUM = 1 [1]
    [QueryUtility::prepareIDListStatement, DEBUG] Query String: SELECT NODE_ID, PARTITION_ID FROM ROUTING.ITA_SPAZIO_P_TABLE p WHERE p.NODE_ID IN ( SELECT column_value FROM table(:varray) ) AND LINK_LEVEL = ?
    [oracle.spatial.network.lod.util.QueryUtility, FINEST] ID Array: [2195814]
    [LODNetworkAdaptorSDO::readNodePartitionIds, DEBUG] Query linkLevel = 1
    [NetworkIOImpl::readLogicalPartition, DEBUG] Read partition from blob table: partition 1181, level 1
    [LODNetworkAdaptorSDO::readPartitionBlobEntry, DEBUG] Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM ROUTING.ITA_SPAZIO_P_BLOBS_TABLE WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [1181,1]
    [oracle.spatial.network.lod.LabelSettingAlgorithm, WARN] Requested array size exceeds VM limit
    [NetworkIOImpl::readLogicalPartition, DEBUG] Read partition from blob table: partition 1181, level 1
    [LODNetworkAdaptorSDO::readPartitionBlobEntry, DEBUG] Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM ROUTING.ITA_SPAZIO_P_BLOBS_TABLE WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [1181,1]
    Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    I use the sdoapi.jar, sdomn.jar and sdoutl.jar stored in the jlib directory of the Oracle installation path.
    When I perform this query: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM ROUTING.ITA_SPAZIO_P_BLOBS_TABLE WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [1181,1]
    I get the following result:
    BLOB NUM_INODES NUM_ENODES NUM_ILINKS NUM_ELINKS NUM_INLINKS NUM_OUTLINKS USER_DATA_INCLUDED
    (BLOB) 3408 116 3733 136 130 128 N
    Then the Java code I use is:
    package it.sistematica.oracle.spatial;
    import it.sistematica.oracle.network.data.Constant;
    import java.io.InputStream;
    import java.sql.Connection;
    import oracle.spatial.network.lod.DynamicLinkLevelSelector;
    import oracle.spatial.network.lod.GeodeticCostFunction;
    import oracle.spatial.network.lod.HeuristicCostFunction;
    import oracle.spatial.network.lod.LODNetworkManager;
    import oracle.spatial.network.lod.LinkLevelSelector;
    import oracle.spatial.network.lod.LogicalSubPath;
    import oracle.spatial.network.lod.NetworkAnalyst;
    import oracle.spatial.network.lod.NetworkIO;
    import oracle.spatial.network.lod.PointOnNet;
    import oracle.spatial.network.lod.config.LODConfig;
    import oracle.spatial.network.lod.util.PrintUtility;
    import oracle.spatial.util.Logger;
    public class SpWithMultiLinkLevel {

        private static NetworkAnalyst analyst;
        private static NetworkIO networkIO;

        private static void setLogLevel(String logLevel) {
            if ("FATAL".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_FATAL);
            else if ("ERROR".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_ERROR);
            else if ("WARN".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_WARN);
            else if ("INFO".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_INFO);
            else if ("DEBUG".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_DEBUG);
            else if ("FINEST".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_FINEST);
            else // default: set to ERROR
                Logger.setGlobalLevel(Logger.LEVEL_ERROR);
        }

        public static void main(String[] args) throws Exception {
            String configXmlFile = "LODConfigs.xml";
            String logLevel = "FINEST";
            String dbUrl = Constant.PARAM_DB_URL;
            String dbUser = Constant.PARAM_DB_USER;
            String dbPassword = Constant.PARAM_DB_PASS;
            String networkName = Constant.PARAM_NETWORK_NAME;
            long startNodeId = 2195814;
            long endNodeId = 3415235;
            int linkLevel = 1;
            double costThreshold = 1550;
            int numHighLevelNeighbors = 8;
            double costMultiplier = 1.5;
            Connection conn = null;

            // get input parameters
            for (int i = 0; i < args.length; i++) {
                if (args[i].equalsIgnoreCase("-dbUrl"))
                    dbUrl = args[i + 1];
                else if (args[i].equalsIgnoreCase("-dbUser"))
                    dbUser = args[i + 1];
                else if (args[i].equalsIgnoreCase("-dbPassword"))
                    dbPassword = args[i + 1];
                else if (args[i].equalsIgnoreCase("-networkName") && args[i + 1] != null)
                    networkName = args[i + 1].toUpperCase();
                else if (args[i].equalsIgnoreCase("-linkLevel"))
                    linkLevel = Integer.parseInt(args[i + 1]);
                else if (args[i].equalsIgnoreCase("-configXmlFile"))
                    configXmlFile = args[i + 1];
                else if (args[i].equalsIgnoreCase("-logLevel"))
                    logLevel = args[i + 1];
            }

            // opening connection
            System.out.println("Connecting to ......... " + Constant.PARAM_DB_URL);
            conn = LODNetworkManager.getConnection(dbUrl, dbUser, dbPassword);
            System.out.println("Network analysis for " + networkName);
            setLogLevel(logLevel);

            // load user-specified LOD configuration (optional),
            // otherwise the default configuration will be used
            InputStream config = (new Network()).readConfig(configXmlFile);
            LODNetworkManager.getConfigManager().loadConfig(config);
            LODConfig c = LODNetworkManager.getConfigManager().getConfig(networkName);

            // get network input/output object
            networkIO = LODNetworkManager.getCachedNetworkIO(
                conn, networkName, networkName, null);
            // get network analyst
            analyst = LODNetworkManager.getNetworkAnalyst(networkIO);

            double[] costThresholds = {costThreshold};
            LogicalSubPath subPath = null;

            try {
                System.out.println("*****Begin: Shortest Path with Multiple Link Levels");
                System.out.println("*****Shortest Path Using Dijkstra");
                String algorithm = "DIJKSTRA";
                linkLevel = 1;
                costThreshold = 5000;
                subPath = analyst.shortestPathDijkstra(new PointOnNet(startNodeId), new PointOnNet(endNodeId), linkLevel, null);
                PrintUtility.print(System.out, subPath, true, 10000, 0);
                System.out.println("*****End: Shortest path using Dijkstra");
            } catch (Exception e) {
                e.printStackTrace();
            }

            try {
                System.out.println("*****Shortest Path using Astar");
                HeuristicCostFunction costFunction = new GeodeticCostFunction(0, -1, 0, -2);
                LinkLevelSelector lls = new DynamicLinkLevelSelector(analyst, linkLevel, costFunction, costThresholds, numHighLevelNeighbors, costMultiplier, null);
                subPath = analyst.shortestPathAStar(
                    new PointOnNet(startNodeId), new PointOnNet(endNodeId), null, costFunction, lls);
                PrintUtility.print(System.out, subPath, true, 10000, 0);
                System.out.println("*****End: Shortest Path Using Astar");
                System.out.println("*****End: Shortest Path with Multiple Link Levels");
            } catch (Exception e) {
                e.printStackTrace();
            }

            if (conn != null) {
                try { conn.close(); } catch (Exception ignore) { }
            }
        }
    }
    At first I created a two-link-level network with these commands:
    exec sdo_net.spatial_partition('ITALIA_SPAZIO', 'ITA_SPAZIO_P_TABLE', 5000, 'LOAD_DIR', 'sdlod_part.log', 'w', 1);
    exec sdo_net.spatial_partition('ITALIA_SPAZIO', 'ITA_SPAZIO_P_TABLE', 60000, 'LOAD_DIR', 'sdlod_part.log', 'w', 2);
    exec sdo_net.generate_partition_blobs('ITALIA_SPAZIO', 1, 'ITA_SPAZIO_P_BLOBS_TABLE', true, true, 'LOAD_DIR', 'sdlod_part_blob.log', 'w', false, true);
    exec sdo_net.generate_partition_blobs('ITALIA_SPAZIO', 2, 'ITA_SPAZIO_P_BLOBS_TABLE', true, true, 'LOAD_DIR', 'sdlod_part_blob.log', 'w', false, true);
    Then I tried with a single-level network, but I got the same error.
    Can somebody please help me?

    I found the solution to this problem.
    In the LODConfig.xml file I had:
    <readPartitionFromBlob>true</readPartitionFromBlob>
    <partitionBlobTranslator>oracle.spatial.network.lod.PartitionBlobTranslator11g</partitionBlobTranslator>
    After changing it to
    <readPartitionFromBlob>true</readPartitionFromBlob>
    <partitionBlobTranslator>oracle.spatial.network.lod.PartitionBlobTranslator11gR2</partitionBlobTranslator>
    the application starts without the above-mentioned error.

  • EPrint - Message Exceeded Size Limit

    I received an error message using ePrint saying that the message exceeded the size limit. It seems that 5 MB is the limit. Are there solutions for larger files, e.g. WinZip? Thanks.

    Hi Igcgf,
    As of right now the size limit is 5 MB, as described in the documentation on size limits. Also, a WinZip archive, even if it contains supported file types such as PDFs, is not a supported file type that will be opened and sorted through for printing. It is best to take the files out of the WinZip archive and attach them individually to an outgoing email to your ePrint address.
    If I have solved your issue, please feel free to provide kudos and make sure you mark this thread as solution provided!
    Although I work for HP, my posts and replies are my own opinion and not those of HP.
