Hash # in URL

APEX version 3.1.2.00.02
I have an interactive report with a few columns, one of which is a link. The link is created from page items and column values. The URL for the link might look like
f?p=&APP_ID.:8:&SESSION.::&DEBUG.:RP:P8_NAME,P8_SHOW:#NAME#,&P7_SHOW.
The problem is that I don't explicitly deal with hash marks. Once in a while I get a name such as bill#gates, which in the example above results in P8_SHOW not being set and P8_NAME being set to bill only. And yes, the link has "Display Text As" set to "Display as Text (escape special characters)".
Cheers Niklas

Niklas,
Thanks for pointing that out. The hashmark is deliberately preserved in links so that a hashmark used as a fragment delimiter can be part of the URL. When the hashmark appears within an item value, however, this doesn't have the desired effect. This puts the hashmark in the same category as a colon -- one that cannot be part of an item value in a URL. (This limitation applies only to links that use apex_util.prepare_url, as most links do. However, you could use PL/SQL to dynamically emit a link that passed an escaped hashmark in an item value.)
I'll put this on a defect list for future improvements.
Scott
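Scott's suggested workaround (emitting the link yourself instead of going through apex_util.prepare_url) requires the item value to be percent-encoded first. A minimal sketch of just the encoding step, in Python; `encode_item_value` is a hypothetical helper, not an APEX API:

```python
from urllib.parse import quote

def encode_item_value(value):
    # safe='' also encodes ':' and ',', which APEX treats as delimiters;
    # '#' becomes '%23'
    return quote(value, safe='')

print(encode_item_value('bill#gates'))  # bill%23gates
```

Whether a given APEX version decodes %23 inside an item value is exactly the limitation Scott describes, so treat this as an illustration of the encoding only.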

Similar Messages

  • Hashing URLs

    Is there any way to use the ACE to hash URLs? It doesn't seem to be possible, but I was thinking maybe there was some kind of workaround.
    Many CDNs offer the service of hashing a URL and changing it at a fixed interval to offer a little extra protection against deep linking to content. It would be great to offload this from our redirector CGI directly onto the load balancer.

    A load balancer does not modify HTTP content, so you can't replace the header.
    Although it does sound like a good feature to add, it would kill the performance of the box: it would have to check every URL, replace it where needed, recompute the TCP checksum, and fragment/reassemble packets where needed... a total pain to achieve.
    Gilles.
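For what it's worth, the scheme the poster describes (a URL that changes on a fixed interval) is usually implemented by hashing the path together with a time window and a shared secret. A hypothetical sketch; real CDN token formats differ:

```python
import hashlib
import hmac
import time

SECRET = b"example-shared-secret"  # assumed known to redirector and edge

def signed_url(path, interval=3600, now=None):
    # The token changes every `interval` seconds, so copied deep links expire.
    window = int((now if now is not None else time.time()) // interval)
    msg = f"{path}:{window}".encode()
    token = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
    return f"{path}?token={token}"

def verify(path, token, interval=3600, now=None):
    # Recompute the expected token for the current window and compare.
    expected = signed_url(path, interval, now).split("token=")[1]
    return hmac.compare_digest(expected, token)
```

This is the kind of check that is cheap on a CGI redirector but, as Gilles notes, expensive to retrofit into a packet-rewriting load balancer.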

  • What's the best strategy for storing URLs as keys?

    I need to create an index that maps URLs to 8-byte longs. I know that I can just use StringBinding and LongBinding to create my entries, however I'm wondering if there's anything that I could do to make more efficient use of memory. I need quick lookups, but I also need to store a lot of data (600M URLs and growing). The average length of a URL is about 90 characters.
    Since I don't know what BDB actually keeps in memory at runtime, I'm wondering what advantage there is (if any) from using hashes for the keys instead of the full URLs. I'd have to store the URL in the data to distinguish hash collisions. There's obviously a trade-off between the size of the hash output and the number of collisions. I can't imagine that the lookup time would suffer that much if I have to iterate through all of the records that hash to a particular key, but I'd like to know if you have a reason to contradict that assumption.
    In summary, I'd like to know what BDB would prefer from a performance standpoint (both size and speed):
    + smallest key/data records (store URL as key, long as data)
    + smaller keys/larger data/minimal collisions (store 160-bit hash of URL as key, URL and long as data)
    + even smaller keys/same data/more collisions (store 64-bit hash of URL as key, URL and long as data)
    The key size would also be much more consistent if I use a hash function. I don't know if BDB cares about that.
    I know that every situation is different and that I could just run my own performance tests (and I will), but it takes a few days to load the data into the index, so I'd like to call on the experience here to avoid making any really bad decisions and save me some time.
    Thanks,
    -Justin

    Hi Justin,
    I know you're not looking for an "it all depends" answer, but it does depend largely on how much of the data set you're accessing fits into the JE cache. If you're not sure about this and can't measure it by running your app and looking at the EnvironmentStats, please run the DbCacheSize utility. There's an FAQ about it, and several OTN threads.
    If the large majority of the data set you access frequently fits into the cache, you'll get maximum performance with the first option you mention (no hashing), since you don't have to filter out hash collisions.
    If I/O to read from disk is a big factor (because the JE cache isn't large enough), then you may be able to reduce this problem by trying to fit more into the JE cache.
    Reducing the key size by using a hash is one way to fit more data into cache, since the keys are stored multiple times in the Btree internal nodes. However, unless the records with the same hash are likely to have been read recently, reading the colliding records will result in more random reads, and that will probably outweigh the advantage. JE stores each record separately on disk in write order, so records with the same hash key will not be stored in "pages" together -- JE has no pages.
    So instead, you may want to try configuring key prefixing, if the URLs are likely to have common prefixes. See DatabaseConfig.setKeyPrefixing.
    If that isn't sufficient, you could use a hash key and store all the records with that hash in a single record. In other words, you could store multiple logical records per JE record. This is quite a bit more work on your part, since you have to handle the packing and unpacking, as well as deletions, etc. For that reason I don't recommend it, but we have had some users do this successfully.
    --mark
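Justin's third option (64-bit hash key, with the URL kept in the data to filter out collisions) looks like this in outline; a toy Python dict stands in for the JE Database, and the hash choice is illustrative:

```python
import hashlib
from collections import defaultdict

def hash64(url):
    # First 8 bytes of SHA-1 as a 64-bit integer key (illustrative choice)
    return int.from_bytes(hashlib.sha1(url.encode()).digest()[:8], 'big')

index = defaultdict(list)  # 64-bit key -> list of (url, long) records

def put(url, value):
    index[hash64(url)].append((url, value))

def get(url):
    # Iterate the bucket and filter out colliding records
    for stored_url, value in index[hash64(url)]:
        if stored_url == url:
            return value
    return None
```

The lookup cost of scanning a bucket is trivial; as Mark points out, the real cost in JE is the extra random disk reads for colliding records that are not in cache.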

  • Stuck while using Light Framework feature of Portal

    Hi All,
    We are using the light framework feature of the portal, with a one-level navigation layer provided as tabs at the top level. The corresponding pages and iViews are displayed in the detailed navigation layer. Further navigation to iViews or pages in a different workset/page is done using the hashed short URL of the particular iView.
    Here is where we are stuck:
    1) We have two Web Dynpro iViews available under different tabs. We can navigate between them but are not able to pass any parameters. We tried passing them along with the short URL and setting them in a cookie; nothing seems to be working. We also do not want to use EPCM navigation, so that we can avoid navigating by PCD location.
    2) For some of the tabs in the first level there is no detailed navigation, i.e. no inner pages/iViews. When we click on such a tab, the page gets shifted to the right with blank space on the left side. This blank space seems to be occupied by the detailed navigation with no contents in it; the iViews are aligned properly if the DTN has some links in it.
    Please help, it's a bit urgent.

    Hi Sujesh,
    Thanks a lot for your answers.
    1) Yes, I know that parameter sharing in Web Dynpro is done through context variables. What I want is to pass parameters from one WD iView to another WD iView embedded in the portal in a different workset, so in this case context sharing cannot be used, if I am correct. We tried setting values in cookies/sessions, but to no avail. What is the correct way to do it? Please help.
    2) For that tab there is no further navigation, only one page added as the top entry with some iViews embedded in it. When we open the tab, blank space of DTN size appears on the left.
    Please help us..
    Regards,

  • Pulling Documents from Content Server

    I'm trying to write a .NET webservice that will retrieve documents from the SAP Content Server.
    I understand that I need to build a URL to the content server to retrieve each document, and that part of this URL is a security key 'secKey'.
    I've found documents such as:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/f5/9561c8f3b111d1955b0000e82deb58/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/9b/e8c192eaf811d195580000e82deb58/content.htm
    And everything seems fine, except for how to create the 'secKey'. I can hash the URL parameters as required, but I don't know how to sign it - how do I obtain the sender's private key in order to do this?
    Any help with this would be greatly appreciated!

    Sorry, I realise that this is a Java forum, and I specified that I'm writing in .NET.
    The reason I'm posting here is that if I can't do it in .NET , I'm happy to use Java instead.

  • Using ACE for proxy server load balancing

    Hello groups,
    I wanted to know your experiences of using ACE for proxy server load balancing.
    I want to load balance to a pool of proxy servers. Note: load-balancing should be based on the HTTP URL (i can't use source or dest. ip address) so that
    a certain domain always gets "cached/forwarded" to the same proxy server. I don't really want to put matching
    criteria in the configuration (such as /a* to S1, /b* to S2, /c* to S3,etc..), but have this hash calculated automatically.
    Can the ACE compute its own hash based on the number of "online" proxy servers ? ie. when 4 servers are online, distribute domains between 1,2,3,4 evenly.
    Should server 4 fail, recalculate hash so that the load of S4 gets distributed across the other 3 evenly. Also load-balancing domains of S1 ,S2 and S3 should not change if S4 fails.....
    regards,
    Geert

    This is done with the following predictor command:
    Scimitar1/Admin# conf t
    Enter configuration commands, one per line.  End with CNTL/Z.
    Scimitar1/Admin(config)# serverfarm Proxy
    Scimitar1/Admin(config-sfarm-host)# predictor hash ?
      address         Configure 'hash address' Predictor algorithms
      content         Configure 'hash http content' Predictor algorithms
      cookie          Configure 'hash cookie' Predictor algorithms
      header          Configure 'hash header' Predictor algorithm
      layer4-payload  Configure 'hash layer4-payload' Predictor algorithms
      url             Configure 'hash url' Predictor algorithm
    Scimitar1/Admin(config-sfarm-host)# predictor hash url
    It does hash the url and the result takes into account the number of active proxies dynamically.
    This command has been designed for this kind of scenario that you describe.
    Gilles.
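Geert's second requirement, that the assignments of S1-S3 stay put when S4 fails, is the defining property of consistent hashing. Whether the ACE's `predictor hash url` behaves exactly this way isn't stated in the thread; a minimal sketch of the idea itself, with MD5 as an arbitrary stand-in hash:

```python
import hashlib
from bisect import bisect

def build_ring(servers, vnodes=100):
    # Place each server at many pseudo-random points on a hash ring
    return sorted(
        (int(hashlib.md5(f"{s}#{i}".encode()).hexdigest(), 16), s)
        for s in servers
        for i in range(vnodes)
    )

def pick(ring, url):
    # Walk clockwise from the URL's hash to the next server point
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    i = bisect([p for p, _ in ring], h) % len(ring)
    return ring[i][1]

ring4 = build_ring(['S1', 'S2', 'S3', 'S4'])
ring3 = build_ring(['S1', 'S2', 'S3'])  # S4 has failed
# URLs that were not on S4 keep their server:
urls = [f"http://example.com/page{i}" for i in range(200)]
assert all(pick(ring3, u) == pick(ring4, u)
           for u in urls if pick(ring4, u) != 'S4')
```

Removing a server only deletes its own points from the ring, so only the URLs it owned get remapped, which is exactly the failover behaviour asked about.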

  • Large Java Cache Solutions

    I am implementing a web crawler which is expected to create an index of approximately 100,000 web pages. My current implementation worked fine until I approached an index of approximately 30,000 pages. The problem is supplying several web crawler "clients" with URLs to crawl at a rate faster than the clients can actually crawl the pages. Currently my database of URLs to crawl is being emptied by crawler clients faster than I can repopulate it.
    The problem is that for each new URL I find, I need to check it against the entire index of crawled URLs to ensure it's a new URL (I don't want to crawl a URL more than once). My current implementation uses db4o, and each lookup, with a database of 30,000 URLs, takes approx 0.8 seconds.
    My question is: would it be better to keep a large-scale cache of crawled URLs, rather than querying db4o each time I need to check whether a URL has been previously crawled?
    I have found two possible cache systems:
    http://jakarta.apache.org/jcs/index.html
    and
    http://ehcache.sourceforge.net/
    Before I go about trying to work out how these systems work I just need to get some opinions on whether this is a correct solution to my problem.
    Also, if anyone has had any previous experience with these, or similar, systems, could you point me in the direction of a good "learners" tutorial? They both seem rather complex at first glance.
    Thanks

    Maybe try hashing the URL down to a 16-ish bit integer; then when you're looking for a URL, first hash it, select everything in your table with that integer hash, and then search that subset for your URL.
    Like:
    SELECT * FROM (SELECT * FROM urls WHERE urls.hash=myhashfunction('www.someurl.com')) WHERE urls.name='www.someurl.com';
    Instead of the embedded select statement you could perhaps have a stored view, and you may want to have the hash function be a client-side thing instead of a stored function.
    I'm not really into database optimization though.
    EDIT: to clarify, your insert would be:
    INSERT INTO urls (hash,name) VALUES (myhashfunction('www.someurl.com'), 'www.someurl.com');
    Edited by: endasil on Oct 17, 2007 6:23 AM
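endasil's suggestion can be sketched end to end with sqlite3; the table layout and `myhashfunction` stand-in follow his SQL, and the in-memory database is just for illustration:

```python
import hashlib
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE urls (hash INTEGER, name TEXT)')
con.execute('CREATE INDEX urls_hash ON urls (hash)')  # fast bucket lookup

def myhashfunction(url):
    # 16-bit hash, as suggested (client-side rather than a stored function)
    return int.from_bytes(hashlib.sha1(url.encode()).digest()[:2], 'big')

def seen(url):
    rows = con.execute('SELECT name FROM urls WHERE hash = ?',
                       (myhashfunction(url),))
    # Compare the full URL only within the small matching bucket
    return any(name == url for (name,) in rows)

def mark(url):
    con.execute('INSERT INTO urls (hash, name) VALUES (?, ?)',
                (myhashfunction(url), url))
```

With the index on the hash column, each `seen` call scans only one bucket (roughly 30,000 / 65,536 rows on average at the poster's scale) instead of the whole table.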

  • Office 2010 & hash in webdav url

    Hi,
    it seems that Office has a problem with hash (#) characters in a URL when opening a document through WebDAV.
    This behaviour was noticed with Office 2010 64-bit (version 14.0.5123.5000) on Windows 7 SP1.
    Suppose a Word file is in a web folder /folder1/a # b/file.doc (http://server/folder1/a%20%23%20b/file.doc). I have an Explorer view on the folder without trouble. I can also open txt documents with Notepad.
    If I try to open the doc file, Word 2010 gives an error that it can't open file "http://server/folder1/a".
    I can also see with Fiddler that a HEAD request is done on http://server/folder1/a%20 and not on http://server/folder1/a%20%23%20b/file.doc.
    The user agent that performs the GET is "Microsoft Office Existence Discovery".
    Can anyone confirm this behaviour or have a possible solution?
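For reference, the encoding the WebDAV client should be sending can be checked with Python's urllib: both the space and the hash must be percent-encoded, because a raw # starts the fragment and is never part of the path. This illustrates the URL semantics only, not a fix for Office:

```python
from urllib.parse import quote, urlsplit

# What the client should send: both space and '#' percent-encoded
print(quote('/folder1/a # b/file.doc'))  # /folder1/a%20%23%20b/file.doc

# What happens if '#' is left raw: everything after it becomes a fragment,
# which matches Word failing on "http://server/folder1/a"
parts = urlsplit('http://server/folder1/a # b/file.doc')
assert parts.path == '/folder1/a '
assert parts.fragment == ' b/file.doc'
```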

    Hi
    Thank you for using Microsoft Office for IT Professionals Forums.
    From your description, it sounds like an issue related to "How to configure Internet Explorer to open Office documents in the appropriate Office program instead of in Internet Explorer".
    To fix this problem automatically, click the following link to download the Fix it tool. Click Run in the File Download dialog box, and follow the steps in the Fix it wizard.
    http://go.microsoft.com/?linkid=9729249
    Notes
    This wizard may be in English only; however, the automatic fix also works for other language versions of Windows.
    If you are not on the computer that has the problem, save the Fix it solution to a flash drive or a CD and then run it on the computer that has the problem.
    If that does not work, please try the other methods in the following article:
    http://support.microsoft.com/kb/162059
    Please take your time to try the suggestions and let me know the results at your earliest convenience. If anything is unclear or if there is anything I can do for you, please feel free to let me know.
    Sincerely
    Rex Zhang
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

  • URL Hashing + Sticky, not optimal.

    Gurus,
    Had just deployed an ACE for my customer for the caching servers.
    We used URL hash and sticky; after some monitoring we find that one server gets 15K connections while another gets only 5K.
    It is understood that, due to the erratic internet behaviour of users, the cached URLs are being accessed in this disproportionate way.
    Now, for cache servers URL hash is the best predictor; can we try anything else, or is it going to stay as it is?
    We have advised the "application request/response" predictor in case the servers get loaded beyond a point.
    let me know your views.
    Cheers
    Shukla.

    Gilles,
    Appreciate your reply; I had advised the client the first time, but they still haven't tried it.
    A couple of things:
    1. If I combine max-conn with URL hashing: if user A requests url-A, which is in cache-A, and the ACE has reached its max-conn value, will it direct the same URL towards cache-B with a new entry for url-A (which in turn would make multiple servers cache the same URL because of max-conns)?
    2. The second method we proposed was "application request/response"; I'm not sure how this predictor works or how it would be beneficial compared with URL hash.
    I'm also hunting for a doc which explains these predictors in depth, not just the definitions, as such insights give you an edge when you don't have a test bed for yourself.
    Awaiting optimistically.
    Cheers
    Shukla.

  • Exchange 2013 outlook server name is my owa URL not hash

    I was migrating from Exchange 2003 to 2010 to 2013.
    After the migration I installed Exchange 2013 alongside Exchange 2010 and configured it. Outlook is not working for users migrated to Exchange 2013, but works well for users still on Exchange 2010.
    When I tried to fix this issue I set my Outlook provider using this command: Set-OutlookProvider EXPR -CertPrincipalName "msstd:mail.mydomain.com"
    but unfortunately it impacted the server name in the Outlook client.
    Then I removed the Outlook provider and created it again, but with no change. Then I uninstalled the CAS server and installed it again, but with no change either.
    Now the Outlook server name is my OWA URL "mail.mydomain.com", and I need it to be a hash again, like "[email protected]".
    The Outlook client versions are Outlook 2010 SP2 and Outlook 2013.

    Hi,
    Is there any error when you connect your Exchange 2013 mailbox in Outlook? Also confirm if the problematic users can access Exchange 2013 mailbox from OWA.
    Generally, Outlook uses Autodiscover service to auto setup Exchange account in Outlook. We can check the Autodiscover service in Exchange 2013 (mail.mydomain.com is pointed to Exchange 2013):
    Get-ClientAccessServer | FL Identity,fqdn,*autodiscover*
    Set-OutlookAnywhere -Identity "Exch13\Rpc (Default Web Site)" -InternalHostname mail.mydomain.com -ExternalHostname mail.mydomain.com -InternalClientAuthenticationMethod Ntlm -ExternalClientAuthenticationMethod Basic -ExternalClientsRequireSsl $True -InternalClientsRequireSsl $true
    Also make sure the name used in the AutoDiscoverServiceInternalUri has been included in your Exchange certificate with the IIS service. We can restart the IIS service by running IISReset /noforce from a Command Prompt window.
    Regards,
    Winnie Liang
    TechNet Community Support

  • ACE URL Hash

    Hi All
    I had an issue with ACE two years ago where it was sending all YouTube traffic to the same cache while using URL hash. I had the below response from Cisco TAC.
    Does anyone know if the new image resolved this?
    Regarding your question about the used predictor and splitting the requests going to YouTube so they are handled by two caches, please note that URL hashing will hash the URL up to the "?" only, so we unfortunately cannot distinguish the caches to which to send the request when using this predictor method. The "?" is the default URL parsing delimiter.
    Therefore, what we could try is changing the predictor method to another type, for example hash destination/source address or round robin, to verify whether the load gets distributed among the caches more evenly.
    There, we can see that you can specify a begin- and end-pattern to look for a specific pattern within a URL too; however, as already stated, the hashing has no effect after the "?".
    Regards
    Sameer Shah

    The ACE module and ACE 4710 appliance were enhanced so that the url hashing predictor now includes the url query seen after the '?' delimiter.
    ACE module: A2(2.1) and later (BugID CSCsq99736)
    ACE 4710: A3(2.2) and later (BugID CSCsr30433)
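The TAC explanation is easy to reproduce with a toy predictor: if only the part before the "?" is hashed, every YouTube watch URL lands on the same cache. MD5 here is an arbitrary stand-in, not the ACE's actual hash function:

```python
import hashlib

def cache_for(url, caches, include_query=False):
    # Hash only up to '?' (pre-A2(2.1)/A3(2.2) behaviour) unless
    # include_query is set, mirroring the enhancement described above
    key = url if include_query else url.split('?', 1)[0]
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return caches[digest % len(caches)]

caches = ['cache-A', 'cache-B']
# Without the query string, every watch URL hashes identically:
assert cache_for('http://youtube.com/watch?v=111', caches) == \
       cache_for('http://youtube.com/watch?v=222', caches)
```

Including the query string puts each distinct video URL into the hash input, which is what allows the enhanced images to spread YouTube traffic across caches.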

  • [SOLVED] rc.xml: & ampersand and # hash break Openbox keybinding URL's

    I'm trying to get a keybinding to work in Openbox's rc.xml, so that when I press W-XF86Mail it opens a Google documents spreadsheet.
    rc.xml:
    <keybind key="W-XF86Mail">
    <action name="Execute">
    <execute>firefox https://spreadsheets.google.com/ccc?key=0AjuCrdYnTCyacGNfX0FuRzBFUnBOalNxS0VXYjJhVUE&hl=en#gid=3</execute>
    </action>
    </keybind>
    Other keybindings in the same file work fine. But when I reconfigure Openbox I get an error message "One or more XML syntax errors were found while parsing the Openbox configuration files...". After reconfiguration, I press W-XF86Mail and it starts Firefox but opens the wrong URL - the first bit is correct but the last bit reads ...key=0AjuCrdYnTCyacGNfX0FuRzBFUnBOalNxS0VXYjJhVUE=en instead. As you can see, the last bit &hl=en#gid=3 is missing. If I delete the & ampersand, then the hash # character seems to cause the same problem. Is there a problem with using the ampersand or hash characters in keybindings?
    Last edited by dameunmate (2010-09-04 14:24:50)

    thanks for the tips, I've now got the right page to load when I type the command into the urxvt terminal. But it still doesn't work with Openbox keybindings, I edit and reconfigure but press the keys and get no response at all. I've checked and the same keybinding will successfully open other programs and webpages. Here's the keybinding in /home/username/.config/openbox/rc.xml:
    <keybind key="W-XF86Mail">
    <action name="Execute">
    <execute>urxvt -e gdocspread</execute>
    </action>
    </keybind>
    I've also tried <execute>urxvt -c gdocspread</execute> and simply <execute>gdocspread</execute> to no avail. However, changing the same keybinding command to <execute>urxvt</execute> does open urxvt successfully.
    So the keybinding will open urxvt, and if I manually open a urxvt terminal and type gdocspread firefox starts on my desired web page. So what's the missing link?
    NB: About the ampersand:
    For some reason even from the terminal it won't work with the ampersand, but I discovered that I can leave that out as it just defines the language. However for future reference it would be useful to know how to put an ampersand in a URL. This is the code that does load the correct page:
    home/username/.bashrc
    # Check for an interactive session
    [ -z "$PS1" ] && return
    alias ls='ls --color=auto'
    PS1='[\u@\h \W]\$ '
    gdocspread() {
    firefox 'https://spreadsheets.google.com/ccc?key=0AjuCrdYnTCyacGNfX0FuRzBFUnBOalNxS0VXYjJhVUE&hl=en#gid=3'
    }
    Interestingly the URL etc...key=0AjuCrdYnTCyacGNfX0FuRzBFUnBOalNxS0VXYjJhVUE&hl=en#gid=3 and the URL etc...key=0AjuCrdYnTCyacGNfX0FuRzBFUnBOalNxS0VXYjJhVUE&amp;hl=en#gid=3 won't load, the URL gets cut off at the ampersand.
    Furthermore it's interesting to note that in the above configuration file /home/username/.bashrc only the ampersand causes problems, but in /home/username/openbox/rc.xml both the ampersand and the hash characters cause the URL to be cut off.
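As a cross-check on the XML side: XML escaping only requires the ampersand to become &amp;amp;, while # is legal character data, which suggests the remaining hash problem lies in how Openbox or the shell handles the string rather than in the XML itself. Python's escape helper shows this (XXX stands in for the spreadsheet key):

```python
from xml.sax.saxutils import escape

# Only '&' (and '<', '>') need escaping in XML text; '#' passes through
print(escape('ccc?key=XXX&hl=en#gid=3'))  # ccc?key=XXX&amp;hl=en#gid=3
```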

  • BrowserLab chokes on URLs with a hash (#)

    BrowserLab seems to have trouble with URLs that contain a hash (#).
    For example, it loads http://www.stagebloc.com/faq/ just fine, but if I try to load http://www.stagebloc.com/faq/#faq_01 (simple anchor link), it renders part of the page then says "Adobe BrowserLab was not able to load the URL requested."
    If I try to load http://www.stagebloc.com/#/how/ (AJAX navigation that can be linked directly, bookmarked, etc.), it fails completely, stating "This screenshot cannot be loaded because the URL is invalid. Please fix it and try again."

    Hey Mike,
    It looks like BrowserLab is choking on the way your navigation works, with the JavaScript rewriting the window location. Running your site through BL with the JavaScript disabled (but still with the hash in the URL) doesn't look gross, and running various other sites with a hash in the URL work fine.
    I guess JavaScript opens up a whole new can of worms.
    It looks like there is something wrong though, since it thinks http://stagebloc.com/#/how/ is invalid.
    Reed
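One background detail worth noting: the #fragment is never sent to the server in the HTTP request, so a renderer like BrowserLab has to strip or interpret it client-side. Python's urldefrag shows the split (illustration only; how BrowserLab actually handles fragments is not documented here):

```python
from urllib.parse import urldefrag

url, fragment = urldefrag('http://www.stagebloc.com/faq/#faq_01')
print(url)       # http://www.stagebloc.com/faq/
print(fragment)  # faq_01
```

For the AJAX-style http://www.stagebloc.com/#/how/ URL, everything after the # only has meaning to the page's JavaScript, which fits Reed's observation that disabling JavaScript changes the result.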

  • Hash Symbol In URL

    Hi guys, simple question. When I deploy and run my app, I load it up in the web browser using the URL:
    http://domain.co.uk/myproject/myproject.html
    However the Flash player seems to add a # symbol at the end of the URL. For example:
    http://domain.co.uk/myproject/myproject.html#
    Of course Internet Explorer goes mad when it sees this. I can't seem to figure out why it is there or how to get rid of it. It causes a problem because if anyone wants to bookmark the page they can't, because of the hash symbol. Does anyone know of a workaround or a solution?
    Thanks
    Jaz

    Did you find a solution to this problem? does anyone else know the cause?
    When I access a URL www.someurl.com/ the Flex app starts to initialise... when it completes I always end up at www.someurl.com/#. I can't figure out why this happens or how to stop it. Can anyone help?

  • I need an extension that removes hash (#) sign from Facebook URLs

    You start browsing in Facebook here:
    http://www.facebook.com/home.php
    Then you click a link in the page, and the address bar becomes like this:
    http://www.facebook.com/home.php#/video/video.php?v=xxxxxxxxxxxxxxx
    I understand that this is because Facebook uses AJAX and doesn't reload the page.
    Is there any extension that prevents this behavior, and makes Facebook reload the page every time I click a link?
    I want the resulting URL to be like this:
    http://www.facebook.com/video/video.php?v=xxxxxxxxxxxxxxx
    Looking for your reply...

    You may have an extension that causes this to happen in the first place.
    Start Firefox in [[Safe Mode]] to check if one of your add-ons is causing your problem (switch to the DEFAULT theme: Tools > Add-ons > Themes).
    See [[Troubleshooting extensions and themes]] and [[Troubleshooting plugins]]
    If it does work in Safe-mode then disable all your extensions and then try to find which is causing it by enabling one at a time until the problem reappears.
    You can use "Disable all add-ons" on the ''Safe mode'' start window.
    You have to close and restart Firefox after each change via "File > Exit" (on Mac: "Firefox > Quit")
