Collection caching - best practices

Hi all,
I have a list of rows in a db that are ultimately added to an HTML <select>
field. This select field is on every page of my site.
My client calls a method in a SLSB which:
- calls a finder method
- is returned a collection of read-only EJB local interfaces
- copies the contents of the local interfaces into plain java beans
- returns a collection of plain java beans to the client
Other than making my EJB read-only, what best practices should I consider so
that I can minimise the amount of work that is involved each time I want to
build this <select> field?
Specifically, I was wondering what best practices people are implementing to
cache collections of information in an EJB environment. I would like to
minimise the amount of hand-rolled caching code I have, if possible.
Thanks
Matt

Thanks, I just wanted to make sure.
"Cameron Purdy" <[email protected]> wrote in message
news:[email protected]..
"Matt Krevs" <[email protected]> wrote in message
news:[email protected]...
> I have a list of rows in a db that are ultimately added to an HTML <select>
> field. This select field is on every page of my site.
> My client calls a method in a SLSB which:
> - calls a finder method
> - is returned a collection of read-only EJB local interfaces
> - copies the contents of the local interfaces into plain java beans
> - returns a collection of plain java beans to the client
> Other than making my EJB read-only, what best practices should I consider so
> that I can minimise the amount of work that is involved each time I want to
> build this <select> field?
> Specifically, I was wondering what best practices people are implementing to
> cache collections of information in an EJB environment. I would like to
> minimise the amount of hand-rolled caching code I have, if possible.
For read-only lists, use a singleton pattern to cache them.
For caching data in a cluster, consider Coherence:
http://www.tangosol.com/coherence.jsp
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
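As a concrete illustration of the singleton suggestion above, here is a minimal sketch. The class, field, and method names are made up and the refresh interval is arbitrary; the servlet/JSP layer asks the singleton for the list, and only the first call (or a call after the timeout) goes through the existing SLSB and finder:

    import java.util.Collections;
    import java.util.List;

    // Hypothetical singleton cache for the read-only <select> options.
    public final class SelectOptionCache {

        private static final SelectOptionCache INSTANCE = new SelectOptionCache();
        private static final long REFRESH_INTERVAL_MS = 15 * 60 * 1000; // arbitrary

        private List cachedOptions; // plain java beans copied from the EJB layer
        private long lastLoaded;

        private SelectOptionCache() {
        }

        public static SelectOptionCache getInstance() {
            return INSTANCE;
        }

        public synchronized List getOptions() {
            long now = System.currentTimeMillis();
            if (cachedOptions == null || now - lastLoaded > REFRESH_INTERVAL_MS) {
                cachedOptions = loadFromSessionBean();
                lastLoaded = now;
            }
            return cachedOptions;
        }

        private List loadFromSessionBean() {
            // In the real application this would be the existing SLSB call that runs
            // the finder and copies the local interfaces into plain java beans.
            return Collections.EMPTY_LIST;
        }
    }

Note that in a cluster each JVM keeps its own copy of the list, which is where a clustered cache such as Coherence comes in.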

Similar Messages

  • Using cache - best practice

    We created a banner using TopLink. The buttons (images) are all stored in the database. The banner is on 300+ pages and the buttons change depending on some business rules. I want the best performance available. Yet, I want to make sure that once a record is updated, so is the banner. What is the best cache option to use? Also, where should I set the cache? If I click Toplink Mapping (I'm using JDeveloper 10.12), should I double-click on the specific mapping? I only see two options:
    - default (checked)
    - always refresh
    -only refresh if newer version
    Is there some type of "best practices" of using Toplink's cache?
    Thanks,
    Marcelo

    Hello Marcelo,
    Can't be sure exactly, but are you modifying the database outside of TopLink? This would explain why the cached data is stale. If so, what is needed is a strategy to refresh the cache once changes are known to be made outside TopLink, or to revise the caching strategy being used. This could be as easy as calling session.refreshObject(staleObject), or configuring specific class descriptors to always refresh when queried.
    Since this topic is rather large and application dependent, I'd recommend looking over the 10.1.3 docs:
    http://download-west.oracle.com/docs/cd/B25221_01/web.1013/b13593/cachun.htm#CHEJAEBH
    There are also a few other threads that have good discussions on how to avoid stale data, such as:
    Re: Only Refresh If Newer Version and ReadObjectQuery
    Best Regards,
    Chris
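    A rough illustration of those two suggestions, assuming the TopLink 10.1.3 API; the class and method names (BannerCacheSettings, addToDescriptor, refreshBanner) are made up, while alwaysRefreshCache, disableCacheHits and refreshObject are the standard calls:

    import oracle.toplink.descriptors.ClassDescriptor;
    import oracle.toplink.sessions.Session;

    public class BannerCacheSettings {

        // Descriptor amendment: make every query for this class go to the database
        // instead of trusting the shared cache (register it as an amendment method
        // on the Banner descriptor in the Mapping editor).
        public static void addToDescriptor(ClassDescriptor descriptor) {
            descriptor.alwaysRefreshCache();
            descriptor.disableCacheHits();
        }

        // Or refresh a single stale instance on demand after a known external change.
        public static Object refreshBanner(Session session, Object staleBanner) {
            return session.refreshObject(staleBanner);
        }
    }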

  • CS3 Bridge Cache Best Practices

    I am using CS3 Bridge Version 2.1.1.9 and have a question(s) about the cache.
    (System specs: Vista 64bit SP1, CPU: Quad9550, Ram: 4GB, Video: 9800GT)
    When and why should I purge the cache?
    Whenever I re-enter Bridge and go back to a recent folder of images, the spinning "pizza wheel" reappears for a few seconds and seems to refresh the thumbnails.  Why could it not do this once and keep the refreshed thumbnails?
    By the way, I just recently switched the Preference from High Quality to Quick Thumbnails and that change gave me a huge boost in speed.  The High Quality thumbnails were incredibly SLOW which once again begs the question just above, why does it not do it once?
    Thanks
    Jim Calvert

    As I mentioned in post #4 my thumbnails stay for several weeks (I use quick thumbs).
    I have a dedicated scratch partition of 40 gigs (shared with Photoshop).   My Bridge cache was 14 gigs this morning, and I elected to compact thumbnails and that process reduced the cache down to 9 gigs.  Compacting eliminates cached items that are no longer linked to anything.  Folders that had not been visited for at least a week still had the thumbnails there, so no rebuilding.  I use only the central cache (do not export to folders).
    The point of this is: how big is your cache?  Does it have to be overwritten to make space for other activity?   The only other thing I can think of is that if you are using a lot of raw files, there is a separate cache.  I really do not understand this, or what purpose it serves.  But under Edit/Camera Raw Preferences there is a box to set the location and size of the camera raw cache.  Default is 1 gig, and it has been recommended by some that this should be increased to 4 to 10 gigs, depending on your use of ACR and file sizes.
    My thumbs load instantly (unless they have to be rebuilt).  I have between 400-800 pictures in a folder.
    I am using CS3.

  • Best practice - caching objects

    What is the best practice when many transactions require a persistent
    object that does not change?
    For example, in an ASP model supporting many organizations, an organization is
    required for many persistent objects in the model. I would rather look the
    organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed the
    organization can no longer be part of new transactions with other
    persistence managers. Aside from looking it up for every transaction, is
    there a better solution?
    Thanks in advance
    Gary

    The problem with using object id fields instead of PC object references in your
    object model is that it makes your object model less useful and intuitive.
    Taken to the extreme (replacing all object references with their IDs) you
    will end up with objects like rows in a JDBC result set. Plus, if you use a PM per
    HTTP request it will not do you any good, since the organization data won't be in the
    PM anyway, so it might even be slower (no optimizations such as Kodo batch
    loads).
    So we do not do it.
    What you can do:
    1. Do nothing special; just use the JVM-level or distributed cache provided by
    Kodo. You will not need to access the database to get your organization data, but
    the object creation cost in each PM is still there (do not forget the cache we
    are talking about here is a state cache, not a PC object cache) - good because it is
    transparent.
    2. Designate a single application-wide PM for all your read-only big
    things - lookup screens etc. Use a PM per request for the rest. Not
    transparent - it affects your application design.
    3. If a large portion of your system is read-only, use PM pooling. We did it
    pretty successfully. The requirement is to be able to recognize all PCs
    which are updateable and evict/makeTransient those when the PM is returned to
    the pool (Kodo has a nice extension in PersistenceManagerImpl for removing
    all managed objects of a certain class) so you do not have stale data in your
    PM. You can use Apache Commons Pool to do the pooling; make sure your pool
    is able to shrink. It is transparent and increases performance considerably.
    Option 3 is the approach we use; a rough sketch of the idea follows.
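    A very rough sketch of that pooling idea, assuming Commons Pool 1.x and the standard JDO API. The PersistenceManagerPool class itself is made up, and the Kodo-specific eviction extension mentioned above is replaced here by a plain evictAll:

    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;

    import org.apache.commons.pool.BasePoolableObjectFactory;
    import org.apache.commons.pool.impl.GenericObjectPool;

    // Hypothetical pool wrapper around PersistenceManagers (Commons Pool 1.x).
    public class PersistenceManagerPool {

        private final GenericObjectPool pool;

        public PersistenceManagerPool(final PersistenceManagerFactory pmf) {
            pool = new GenericObjectPool(new BasePoolableObjectFactory() {
                public Object makeObject() {
                    return pmf.getPersistenceManager();
                }
                public void passivateObject(Object obj) {
                    // Clear cached instances before the PM goes back into the pool so
                    // no stale updateable data survives; the Kodo-specific extension
                    // described above could evict only the updateable classes instead.
                    ((PersistenceManager) obj).evictAll();
                }
            });
        }

        public PersistenceManager borrow() throws Exception {
            return (PersistenceManager) pool.borrowObject();
        }

        public void release(PersistenceManager pm) throws Exception {
            pool.returnObject(pm);
        }
    }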
    "Gary" <[email protected]> wrote in message
    news:[email protected]...
    >
    What is the best practice when many transactions requires a persistent
    object that does not change?
    For example, in a ASP model supporting many organizations, organization is
    required for many persistent objects in the model. I would rather look the
    organization object up once and keep it around.
    It is my understanding that once the persistence manager is closed the
    organization can no longer be part of new transactions with other
    persistence managers. Aside from looking it up for every transaction, is
    there a better solution?
    Thanks in advance
    Gary

  • Best practice for statistic collection in 10gR2

    Hello,
    In Oracle 9i, I had to run scripts every day to calculate statistics.
    But in Oracle 10gR2, the collection of statistics has become automatic... I would like some feedback from your experience:
    1/ Does it work well?
    2/ Do I still need to run my own scripts to calculate statistics as in 9i?
    3/ What are the options or parameters to activate to get the best results in 10gR2?
    4/ Please give me the best practice to implement automatic statistic collection in 10gR2.
    Thanks for your help

    Christophe CHANEMOUGANADIN wrote:
    > Some more questions on this topic:
    > a/ What happens when an empty table is loaded suddenly and there is a query on it - is the statistic calculated at once?
    No, it would be normal to run dbms_stats manually as part of significant loads.
    > b/ Is there any timing for Oracle to launch the statistic collection?
    The default schedule for the maintenance jobs, including stats, is 10pm to (IIRC) 8am every night and all weekend, i.e. weeknights and weekends.
    > c/ Is it possible to launch statistic collection only at night when the load is low?
    See above.
    Niall
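    To make the "run dbms_stats manually as part of significant loads" point concrete, here is a small sketch of calling it from Java over JDBC; the schema and table names are made up:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;

    // Illustration only: gather optimizer stats for one table right after a big load,
    // instead of waiting for the nightly maintenance window.
    public class GatherStatsAfterLoad {

        public static void gather(Connection conn) throws SQLException {
            // The first two positional parameters of dbms_stats.gather_table_stats
            // are the owning schema and the table name.
            CallableStatement cs = conn.prepareCall(
                "{ call dbms_stats.gather_table_stats(?, ?) }");
            try {
                cs.setString(1, "APP_OWNER");    // hypothetical schema
                cs.setString(2, "LOADED_TABLE"); // hypothetical table
                cs.execute();
            } finally {
                cs.close();
            }
        }
    }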

  • Best practice for taking Site collection Backup with more than 100GB

    Hi,
    I have a site collection with more than 100 GB of data. Can anyone please suggest the best practice to take a backup?
    Thanks in advance....
    Regards,
    Saya

    Hi,
    I think we can do this using a PowerShell script.
    Add this command in PowerShell first:
    Add-PSSnapin Microsoft.SharePoint.PowerShell
    Web application backup & restore
    Backup-SPFarm -Directory \\WebAppBackup\Development  -BackupMethod Full -Item "Web application name"
    Site Collection backup & restore
    Backup-SPSite http://1632/sites/TestSite  -Path C:\Backup\TestSite1.bak
    Restore-SPSite http://1632/sites/TestSite2  -Path C:\Backup\TestSite1.bak -Force
    Regards
    manikandan

  • Best practice to have cache and calclockblock setting?

    Hi,
    I want to implement hyperion planning.
    What are the best practices for Essbase settings to optimize performance?

    Personally, I would work out the application design before you consider performance settings. There are so many variables involved that to try to do it upfront is going to be difficult.
    That being said each developer has their own preferred approach and some will automatically add certain expressions into the Essbase.cfg file, set certain application level settings via EAS (Index Cache, Data Cache, Data File Cache).
    There are many posts discussing these topics in this forum so suggest you do a search and gather some opinions.
    Regards
    Stuart

  • Best practice for lazy-loading collection once but making sure it's there?

    I'm confused on the best practice to handle the 'setup' of a form, where I need a remote call to take place just once for the form, but I also need to make use of this collection for a combobox that will change when different rows in the datagrid are clicked. Easier if I just explain...
    You click on a row in a datagrid to edit an object (for this example let's say it's an "Employee")
    The form you go to needs to have a collection of "Department" objects loaded by a remote call. This collection of departments only should happen once, since it's not common for them to change. The collection of departments is used to populate a form combobox.
    You need to figure out which department of the comboBox is the selectedIndex by iterating over the departments and finding the one that matches the employee.department.id
    Individually, I know how I can do each of the above, but due to the asynch nature of Flex, I'm having trouble setting up things. Here are some issues...
    My initial thought was just put the loading of the departments in an init() method on the employeeForm which would load as creationComplete() event on the form. Then, on the grid component page when the event handler for clicking on a row was fired, I call a setup() method on my employeeForm which will figure out which selectedIndex to set on the combobox by looking at the departments.
    The problem is the resultHandler for the departments load might not have returned (so the departments might not be there when 'setUp' is called), yet I can't put my business logic to determine the correct combobox in the departmentResultHandler since that would mean I'd always have to fire the call to the remote server object every time which I don't want.
    I have to be missing a simple best practice? Suggestions welcome.

    Hi there rickcr
    This is pretty rough and you'll need to do some tidying up but have a look below.
    <?xml version="1.0"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
        <mx:Script>
            <![CDATA[
                import mx.controls.Alert;
                import mx.collections.ArrayCollection;

                private var comboData:ArrayCollection;

                private function setUp():void {
                    if (comboData) {
                        Alert.show('Data Is Present');
                        populateForm();
                    } else {
                        Alert.show('Data Not');
                        getData();
                    }
                }

                private function getData():void {
                    comboData = new ArrayCollection();
                    // On the result of this remote call, call setUp() again
                }

                private function populateForm():void {
                    // populate your form
                }
            ]]>
        </mx:Script>
        <mx:TabNavigator left="50" right="638" top="50" bottom="413" minWidth="500" minHeight="500">
            <mx:Canvas label="Tab 1" width="100%" height="100%">
            </mx:Canvas>
            <mx:Canvas label="Tab 2" width="100%" height="100%" show="setUp()">
            </mx:Canvas>
        </mx:TabNavigator>
    </mx:Application>
    I think this example is kind of showing what you want.  When you first click tab 2 there is no data.  When you click tab 2 again there is. The data for your combo is going to be stored in comboData.  When the component first gets created, comboData is not instantiated, just declared.  This allows you to say
    if (comboData)
    This means if the variable has your data in it you can populate the form.  At first it doesn't, so in the else condition you can call for your data, and then on the result of your data coming back you can say
    comboData = new ArrayCollection(), put the data in it and call the setUp procedure again.  This time comboData is populated and exists, so it will run the populate form method and you can decide which selected item to set.
    If this is on a bigger scale you'll want to look into creating a proper manager class to handle this, but this demo simply shows you can test to see if the data is there.
    Hope it helps and gives you some ideas.
    Andrew

  • Best Practice Site Collection vs SubSite

    Can anyone give me best practices on creating a site collection vs a subsite? I generally get requests to create sites, but struggle with what is best. Here is what I have so far:
    projects.contoso.com (web app and top level site collection)
    teams.contoso.com (web app and top level site collection)
    Now let's say I have a user request to create a project site (project) that has unique security. I can take two routes:
    1) create a separate site collection -  projects.contoso.com/sites/projectX
    or
    2) create a subsite - projects.contoso.com/projectX (I have to break inheritance here since it needs its own groups or individual users)
    I see problems with both as I get more and more requests.
    Note: there are no other specific requirement like (specific feature, backup, sensitive data...)
    Any Thought?

    In my opinion, you create a sub-site when at least one of the groups of the root site can have access to the new sub-site.
    Otherwise, you should create a separate site collection. Every site in a site collection can see or use all the groups created in that site collection so it's very confusing and disorganized to have sites with no common security in the
    same site collection.
    And yes you might regret your decisions sometimes but it's impossible to know how a site will evolve.
    Don’t forget that by using multiple site collections:
    You won’t have a common navigation
    You cannot share site columns + content types unless you use a content type hub
    You cannot share list templates, site templates
    The content query web part is not working within multiple site collections
    It’s not possible to copy / cut/ paste using Content and structure from one site collection to another
    Usage report? (I would have to verify in SharePoint 2013)
    You can add a space quota to each site collection
    Each site collection can have different features
    Each site collection can be associated to one content database
    Hope it helps.

  • Best practice for IE cache

    Hi, all.
    Over the weekend, we applied the SPS17 to the ECC6.0 server running on dual stack. We also updated the HCM EHP3 to stack 6.
    We have a lot of WD for ABAP and WD for JAVA applications running on the ECC dual stack server. The contents are federated to the consumer portal running on EP7.0 SPS21. Note the consumer was NOT patched during the weekend.
    On Monday morning, we get many calls from users that their HCM apps are not working on the consumer portal. The error can come in many different ways. The fix so far is to clear their IE cache and everything works again. Note that the problem doesn't happen to everybody, less than 10% of the user population. But the 10% is enough to flood our helpdesk with calls.
    I am not sure if any of you has run into this problem before. Is it a best practice to delete the IE cache for all users after an SP upgrade? Any ideas on what caused the error?
    Thanks,
    Jonathan.

    Hi Jonathan,
    I have encountered a similar situation before but have unfortunately never got to the root cause of it. One thing I did notice was that browser versions tended to affect how the cache was handled for local users. We noticed that IE7 handled changes in the WDA apps much better than certain versions of IE6. Not sure if this is relevant in your scenario.
    I assume also that you are not using ACCAD or other WAN acceleration devices (as these have their own cache that can break on upgrades) and that you've cleared out your portal caches for good measure. As far as I know in ITS, if you've stopped and started the WDA services during the upgrade then the caching shouldn't be a problem.
    Cheers,
    E

  • Best practice to avoid Firefox cache collision

    Hi everybody.
    Reading through this page of Google Developers
    https://developers.google.com/speed/docs/best-practices/caching#LeverageBrowserCaching
    I found this line that I can't figure out the exact meaning:
    "avoid the Firefox hash collision issue by ensuring that your application generates URLs that differ on more than 8-character boundaries".
    Can you please explain this with practical examples ?
    For example, are these two strings different enough?
    "hello123hello"
    "hello456hello"
    or these
    "1aaaaaa1"
    "2aaaaaa2"
    Also, what version of firefox does have this problem?
    Thank you


  • Setting Disks/Caches/Vault for multiple projects - Best Practices

    Please confirm a couple assumptions for me:
    1. Because Scratch Disk, Cache and Autosave preferences are all contained in System Settings, I cannot choose different settings for different projects at the same time (i.e. I have to change the settings upon launch of a new project, if I desire a change).
    2. It is good practice to set the Video/Render Disks to an external drive, and keep the Cache and Autosave Vault set to the primary drive (e.g. user:Documents:FCP Documents). It is also best practice to save the Project File to your primary drive.
    And a question: I see that the Autosave Vault distinguishes between projects, and the Waveform Cache Files distinguishes between clips. But what happens in the Thumbnail Cache Files folder when you have more than one project targeting that folder? Does it lump it into the same file? Overwrite it? Is that something about which I should be concerned?
    Thanks!

    maxwell wrote:
    Please confirm a couple assumptions for me:
    1. Because Scratch Disk, Cache and Autosave preferences are all contained in System Settings, I cannot choose different settings for different projects at the same time (i.e. I have to change the settings upon launch of a new project, if I desire a change).
    Yes
    2. It is good practice to set the Video/Render Disks to an external drive, and keep the Cache and Autosave Vault set to the primary drive (e.g. user:Documents:FCP Documents).
    Yes
    It is also best practice to save the Project File to your primary drive.
    I don't. And I don't think it matters. But you should back that file up to some other drive (like Time Machine).
    And a question: I see that the Autosave Vault distinguishes between projects, and the Waveform Cache Files distinguishes between clips. But what happens in the Thumbnail Cache Files folder when you have more than one project targeting that folder? Does it lump it into the same file? Overwrite it? Is that something about which I should be concerned?
    I wouldn't worry about it.
    o| TOnyTOny |o

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appears to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with Websphere is that "classpath" is a rather ...vague concept, we use the J2CA adapter which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid to do a lot of trial/error corrections to a file just to find that it's not actually been used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per
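    For what it's worth, here is a minimal sketch of the "startup class that holds on to the cache" idea from the first post, using the plain CacheFactory API rather than the J2CA adapter. The holder class and cache name are made up, and whether this actually cures the disappearing data depends on what is really releasing the cache:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Hypothetical application-scoped holder so at least one strong reference to the
    // NamedCache outlives any individual EJB instance.
    public class CacheHolder {

        private static NamedCache cache;

        public static synchronized NamedCache getCache() {
            if (cache == null) {
                cache = CacheFactory.getCache("example-cache"); // made-up cache name
            }
            return cache;
        }
    }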

  • What are best practice for packaging and deploying j2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently however we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the 1st time but may work the 2nd time). Also, we've noticed very occasionally that old versions of classes sometimes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ jsp files. Are people out there using this or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS's. Are you guys unregistering old application then reregistering new application? Are you shutting down iAS whilst doing the deployment?
    3) Deploying ear files can take 5 to 10 mins, is this normal?
    4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
    Thanks in advance for your replies,
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and runtime layout of your application that make clear where your application is loading its files from.
    If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and KJS start time. Generally this is due to garbage accumulating in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course, you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: <B>BE CAREFUL - BACKUP FIRST</B>
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent your previously deployed components but are not represented in the set of currently deployed GUIDs. Record these older GUIDs and remove them from ClassImp and ClassDef. Finally, remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize ANT as a build tool. In later versions of the application server, complete ANT scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS specific targets and general J2EE targets. There are iAS specific targets that can be utilized with the 1.3 version. Specialized build targets are not required however to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSP's however benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy then redeploy. However, if you know that you're not changing GUIDs, recreating an existing application with new GUIDs, or removing registered components, you may avoid the undeploy phase.
    If you shut down the KJS processes during deployment you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
    3) Deploying can be a lengthy process. There have been improvements in performance from service pack to service pack but unfortunately you won't see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to deploy. Simply drop your newer bits into the runtime space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.

  • Is this a best practice of BAM implementation?

    Hello everyone:
    Currently we have done an Oracle BAM implementation. To explain briefly our implementation:
    We have an Oracle Database 8.1.7, where all transactions are recorded. We tried using JMS to import the data into data objects in the Oracle BAM repository. We did this by using a database link to an Oracle Database 10g and then through Advanced Queueing. This did not work due to performance issues. The AQ messages were not consumed as fast as they were produced, so there was no real-time data.
    Then we developed a Java component to read the table in the Oracle Database 10g and started using batch upserts into the Oracle BAM through the web services API provided. This solved the performance issue mentioned above.
    Currently we do all the data processing in the Oracle 10g database through PL/SQL stored procedures; data mining is applied on the transactions and the summary information is collected into several tables. These tables are updated and then imported into the Oracle BAM data objects.
    We have noticed that Oracle BAM has some performance issues when trying to view a report based on a data object with a large number of records. Is this really an issue in Oracle BAM? The average number of transactions is 200,000 records. How can we solve this issue?
    Another issue we want to raise: when viewing reports through the browser, the browser sometimes hangs or suddenly closes. Sometimes the Active Data Cached Feed window hangs or doesn't close. When this happens and we try to open another report, the report never displays. Is this a browser-side issue or a server-side issue?
    Oracle BAM is installed on a blade with 2x2 Xeon processors (4 CPUs), 16GB RAM and Windows Server 2003 Enterprise Ed. with SP2.
    How can we get a tuning guide based on best practices?
    Where can we get suggestions about our implementation?
    Thanks to anyone who can help us.

    I am facing a similar issue as well. Any pointers would be appreciated.
    Thanks.
